path: root/include/crypto/internal/hash.h
2024-02-02  crypto: ahash - unexport crypto_hash_alg_has_setkey()  (Eric Biggers)

Since crypto_hash_alg_has_setkey() is only called from ahash.c itself, make it a static function.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2023-10-27  crypto: ahash - remove support for nonzero alignmask  (Eric Biggers)

Currently, the ahash API checks the alignment of all key and result buffers against the algorithm's declared alignmask, and for any unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. First, since it does not apply to the message, its effect is much more limited than e.g. is the case for the alignmask for "skcipher". Second, the key and result buffers are given as virtual addresses and cannot (in general) be DMA'ed into, so drivers end up having to copy to/from them in software anyway. As a result it's easy to use memcpy() or the unaligned access helpers.

The crypto_hash_walk_*() helper functions do use the alignmask to align the message. But with one exception those are only used for shash algorithms being exposed via the ahash API, not for native ahashes, and aligning the message is not required in this case, especially now that alignmask support has been removed from shash. The exception is the n2_core driver, which doesn't set an alignmask.

In any case, no ahash algorithms actually set a nonzero alignmask anymore. Therefore, remove support for it from ahash. The benefit is that all the code to handle "misaligned" buffers in the ahash API goes away, reducing the overhead of the ahash API. This follows the same change that was made to shash.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2023-10-27  crypto: shash - remove crypto_shash_ctx_aligned()  (Eric Biggers)

crypto_shash_ctx_aligned() is no longer used, and it is useless now that shash algorithms don't support nonzero alignmasks, so remove it.

Also remove crypto_tfm_ctx_aligned() which was only called by crypto_shash_ctx_aligned(). It's unlikely to be useful again, since it seems inappropriate to use cra_alignmask to represent alignment for the tfm context when it already means alignment for inputs/outputs.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2023-05-12  crypto: hash - Make crypto_ahash_alg helper available  (Herbert Xu)

Move the crypto_ahash_alg helper into include/crypto/internal so that drivers can use it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

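For reference, the helper is essentially a container_of() wrapper over the common hash algorithm structure; a sketch of its shape (exact details vary by kernel version):

    static inline struct ahash_alg *crypto_ahash_alg(struct crypto_ahash *hash)
    {
            /* Recover the full ahash_alg from the embedded hash_alg_common. */
            return container_of(crypto_hash_alg_common(hash),
                                struct ahash_alg, halg);
    }
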
2023-05-12  crypto: hash - Add statesize to crypto_ahash  (Herbert Xu)

As ahash drivers may need to use fallbacks, their state size is variable. Deal with this by making it an attribute of crypto_ahash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2023-04-20  crypto: hash - Add crypto_clone_ahash/shash  (Herbert Xu)

This patch adds the helpers crypto_clone_ahash and crypto_clone_shash. They are the hash-specific counterparts of crypto_clone_tfm.

This allows code paths that cannot otherwise allocate a hash tfm object to do so. Once a new tfm has been obtained its key could then be changed without impacting other users.

Note that only algorithms that implement clone_tfm can be cloned. However, all keyless hashes can be cloned by simply reusing the tfm object.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

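A minimal usage sketch (hypothetical caller context; error handling abbreviated): clone an existing tfm, then rekey the clone without disturbing the original's users:

    struct crypto_ahash *clone;
    int err;

    clone = crypto_clone_ahash(tfm);
    if (IS_ERR(clone))
            return PTR_ERR(clone);

    /* Rekeying the clone does not affect the original tfm. */
    err = crypto_ahash_setkey(clone, newkey, newkeylen);
    if (err) {
            crypto_free_ahash(clone);
            return err;
    }
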
2023-02-13  crypto: hash - Use crypto_request_complete  (Herbert Xu)

Use the crypto_request_complete helper instead of calling the completion function directly.

This patch also removes the voodoo programming previously used for unaligned ahash operations and replaces it with a sub-request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

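The helper is a thin wrapper that invokes the request's completion callback with its private data; roughly (sketch, modulo the exact kernel version):

    static inline void crypto_request_complete(struct crypto_async_request *req,
                                               int err)
    {
            /* Completion callbacks take the request's private data, not req. */
            req->complete(req->data, err);
    }
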
2022-12-02  crypto: hash - Add ctx helpers with DMA alignment  (Herbert Xu)

This patch adds helpers to access the ahash context structure and request context structure with an added alignment for DMA access.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

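A sketch of how a driver might use these helpers (the struct and function names here are hypothetical):

    /* Hypothetical per-request driver state that hardware DMAs into. */
    struct my_dev_req_ctx {
            u8 hw_desc[64];
    };

    static int my_init_tfm(struct crypto_ahash *tfm)
    {
            /* Reserve request context space, padded for DMA alignment. */
            crypto_ahash_set_reqsize_dma(tfm, sizeof(struct my_dev_req_ctx));
            return 0;
    }

    static int my_digest(struct ahash_request *req)
    {
            /* Returns the context pointer rounded up for DMA access. */
            struct my_dev_req_ctx *rctx = ahash_request_ctx_dma(req);

            /* ... hand rctx->hw_desc to the device ... */
            return 0;
    }
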
2022-11-25  Revert "crypto: shash - avoid comparing pointers to exported functions under CFI"  (Eric Biggers)

This reverts commit 22ca9f4aaf431a9413dcc115dd590123307f274f because CFI no longer breaks cross-module function address equality, so crypto_shash_alg_has_setkey() can now be an inline function like before.

This commit should not be backported to kernels that don't have the new CFI implementation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

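After the revert, the check is again a simple inline pointer comparison against the default ->setkey() implementation; roughly:

    static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
    {
            /* Algorithms without a real setkey use shash_no_setkey(). */
            return alg->setkey != shash_no_setkey;
    }
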
2021-06-17  crypto: shash - avoid comparing pointers to exported functions under CFI  (Ard Biesheuvel)

crypto_shash_alg_has_setkey() is implemented by testing whether the .setkey() member of a struct shash_alg points to the default version, called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static inline, this requires shash_no_setkey() to be exported to modules.

Unfortunately, when building with CFI, function pointers are routed via CFI stubs which are private to each module (or to the kernel proper) and so this function pointer comparison may fail spuriously.

Let's fix this by turning crypto_shash_alg_has_setkey() into an out of line function.

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-08-28  crypto: ahash - Add ahash_alg_instance  (Herbert Xu)

This patch adds the helper ahash_alg_instance which is used to convert a crypto_ahash object into its corresponding ahash_instance.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

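The helper is likely just a composition of the existing conversion helpers; a sketch:

    static inline struct ahash_instance *ahash_alg_instance(
            struct crypto_ahash *ahash)
    {
            /* tfm -> generic instance -> ahash instance. */
            return ahash_instance(crypto_tfm_alg_instance(&ahash->base));
    }
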
2020-08-21  crypto: hash - Remove unused async iterators  (Ira Weiny)

Revert "crypto: hash - Add real ahash walk interface". This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4.

The callers of the functions in this commit were removed in ab8085c130ed. Remove these unused calls.

Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: shash - convert shash_free_instance() to new style  (Eric Biggers)

Convert shash_free_instance() and its users to the new way of freeing instances, where a ->free() method is installed to the instance struct itself. This replaces the weakly-typed method crypto_template::free(). This will allow removing support for the old way of freeing instances.

Also give shash_free_instance() a more descriptive name to reflect that it's only for instances with a single spawn, not for any instance.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: hash - add support for new way of freeing instances  (Eric Biggers)

Add support to shash and ahash for the new way of freeing instances (already used for skcipher, aead, and akcipher) where a ->free() method is installed to the instance struct itself. These methods are more strongly-typed than crypto_template::free(), which they replace.

This will allow removing support for the old way of freeing instances.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: ahash - unexport crypto_ahash_type  (Eric Biggers)

Now that all the templates that need ahash spawns have been converted to use crypto_grab_ahash() rather than look up the algorithm directly, crypto_ahash_type is no longer used outside of ahash.c. Make it static.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: algapi - remove obsoleted instance creation helpers  (Eric Biggers)

Remove lots of helper functions that were previously used for instantiating crypto templates, but are now unused:

- crypto_get_attr_alg() and similar functions looked up an inner algorithm directly from a template parameter. These were replaced with getting the algorithm's name, then calling crypto_grab_*().

- crypto_init_spawn2() and similar functions initialized a spawn, given an algorithm. Similarly, these were replaced with crypto_grab_*().

- crypto_alloc_instance() and similar functions allocated an instance with a single spawn, given the inner algorithm. These aren't useful anymore since crypto_grab_*() need the instance allocated first.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: ahash - introduce crypto_grab_ahash()  (Eric Biggers)

Currently, ahash spawns are initialized by using ahash_attr_alg() or crypto_find_alg() to look up the ahash algorithm, then calling crypto_init_ahash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_ahash() so that we can convert all templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

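A sketch of the resulting pattern inside a template's ->create() function (names abbreviated, error unwinding elided):

    struct crypto_ahash_spawn *spawn = ahash_instance_ctx(inst);
    const char *name;
    int err;

    name = crypto_attr_alg_name(tb[1]);
    if (IS_ERR(name))
            return PTR_ERR(name);

    /* Look up and hold a reference to the inner ahash algorithm. */
    err = crypto_grab_ahash(spawn, ahash_crypto_instance(inst),
                            name, 0, 0);
    if (err)
            return err;
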
2020-01-09  crypto: shash - introduce crypto_grab_shash()  (Eric Biggers)

Currently, shash spawns are initialized by using shash_attr_alg() or crypto_alg_mod_lookup() to look up the shash algorithm, then calling crypto_init_shash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_shash() so that we can convert all templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: ahash - make struct ahash_instance be the full size  (Eric Biggers)

Define struct ahash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end.

This is needed to allow allocating ahash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere.

Also take advantage of the addition of the base instance to struct ahash_instance by simplifying the ahash_crypto_instance() and ahash_instance() functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2020-01-09  crypto: shash - make struct shash_instance be the full size  (Eric Biggers)

Define struct shash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end.

This is needed to allow allocating shash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere.

Also take advantage of the addition of the base instance to struct shash_instance by simplifying the shash_crypto_instance() and shash_instance() functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-20  crypto: algapi - make unregistration functions return void  (Eric Biggers)

Some of the algorithm unregistration functions return -ENOENT when asked to unregister a non-registered algorithm, while others always return 0 or always return void. But no users check the return value, except for two of the bulk unregistration functions which print a message on error but still always return 0 to their caller, and crypto_del_alg() which calls crypto_unregister_instance() which always returns 0.

Since unregistering a non-registered algorithm is always a kernel bug but there isn't anything callers should do to handle this situation at runtime, let's simplify things by making all the unregistration functions return void, and moving the error message into crypto_unregister_alg() and upgrading it to a WARN().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: hmac - Use init_tfm/exit_tfm interface  (Herbert Xu)

This patch switches hmac over to the new init_tfm/exit_tfm interface as opposed to cra_init/cra_exit. This way the shash API can make sure that descsize does not exceed the maximum.

This patch also adds the API helper shash_alg_instance.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-12-11  crypto: shash - allow essiv and hmac to use OPTIONAL_KEY algorithms  (Eric Biggers)

The essiv and hmac templates refuse to use any hash algorithm that has a ->setkey() function, which includes not just algorithms that always need a key, but also algorithms that optionally take a key.

Previously the only optionally-keyed hash algorithms in the crypto API were non-cryptographic algorithms like crc32, so this didn't really matter. But that's changed with BLAKE2 support being added. BLAKE2 should work with essiv and hmac, just like any other cryptographic hash.

Fix this by allowing the use of both algorithms that lack a ->setkey() function and algorithms that have the OPTIONAL_KEY flag set.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-07-08  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)

Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 5.3:

  API:
   - Test shash interface directly in testmgr
   - cra_driver_name is now mandatory

  Algorithms:
   - Replace arc4 crypto_cipher with library helper
   - Implement 5 way interleave for ECB, CBC and CTR on arm64
   - Add xxhash
   - Add continuous self-test on noise source to drbg
   - Update jitter RNG

  Drivers:
   - Add support for SHA204A random number generator
   - Add support for 7211 in iproc-rng200
   - Fix fuzz test failures in inside-secure
   - Fix fuzz test failures in talitos
   - Fix fuzz test failures in qat"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
  crypto: stm32/hash - remove interruptible condition for dma
  crypto: stm32/hash - Fix hmac issue more than 256 bytes
  crypto: stm32/crc32 - rename driver file
  crypto: amcc - remove memset after dma_alloc_coherent
  crypto: ccp - Switch to SPDX license identifiers
  crypto: ccp - Validate the the error value used to index error messages
  crypto: doc - Fix formatting of new crypto engine content
  crypto: doc - Add parameter documentation
  crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
  crypto: arm64/aes-ce - add 5 way interleave routines
  crypto: talitos - drop icv_ool
  crypto: talitos - fix hash on SEC1.
  crypto: talitos - move struct talitos_edesc into talitos.h
  lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
  crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
  crypto: asymmetric_keys - select CRYPTO_HASH where needed
  crypto: serpent - mark __serpent_setkey_sbox noinline
  crypto: testmgr - dynamically allocate crypto_shash
  crypto: testmgr - dynamically allocate testvec_config
  crypto: talitos - eliminate unneeded 'done' functions at build time
  ...

2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152  (Thomas Gleixner)

Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2019-05-30  crypto: algapi - remove crypto_tfm_in_queue()  (Eric Biggers)

Remove the crypto_tfm_in_queue() function, which is unused.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2019-01-11  crypto: algapi - remove crypto_alloc_instance()  (Eric Biggers)

Now that all "blkcipher" templates have been converted to "skcipher", crypto_alloc_instance() is no longer used. And it's not useful any longer as it creates an old-style weakly typed instance rather than a new-style strongly typed instance.

So remove it, and now that the name is freed up rename crypto_alloc_instance2() to crypto_alloc_instance().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2018-02-15  crypto: mcryptd - remove pointless wrapper functions  (Eric Biggers)

There is no need for ahash_mcryptd_{update,final,finup,digest}(); we should just call crypto_ahash_*() directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2018-01-12  crypto: hash - introduce crypto_hash_alg_has_setkey()  (Eric Biggers)

Templates that use an shash spawn can use crypto_shash_alg_has_setkey() to determine whether the underlying algorithm requires a key or not. But there was no corresponding function for ahash spawns. Add it.

Note that the new function actually has to support both shash and ahash algorithms, since the ahash API can be used with either.

Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

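Conceptually, the new function dispatches on the algorithm type, deferring to the shash check when the ahash API is backed by an shash; a sketch of the logic (the exact comparison may differ between kernel versions):

    bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
    {
            struct crypto_alg *alg = &halg->base;

            /* An shash exposed through the ahash API? */
            if (alg->cra_type != &crypto_ahash_type)
                    return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

            return __crypto_ahash_alg(alg)->setkey != NULL;
    }
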
2017-11-29  crypto: hmac - require that the underlying hash algorithm is unkeyed  (Eric Biggers)

Because the HMAC template didn't check that its underlying hash algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))" through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC being used without having been keyed, resulting in sha3_update() being called without sha3_init(), causing a stack buffer overflow.

This is a very old bug, but it seems to have only started causing real problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3) because the innermost hash's state is ->import()ed from a zeroed buffer, and it just so happens that other hash algorithms are fine with that, but SHA-3 is not. However, there could be arch or hardware-dependent hash algorithms also affected; I couldn't test everything.

Fix the bug by introducing a function crypto_shash_alg_has_setkey() which tests whether a shash algorithm is keyed. Then update the HMAC template to require that its underlying hash algorithm is unkeyed.

Here is a reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>

    int main()
    {
            int algfd;
            struct sockaddr_alg addr = {
                    .salg_type = "hash",
                    .salg_name = "hmac(hmac(sha3-512-generic))",
            };
            char key[4096] = { 0 };

            algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(algfd, (const struct sockaddr *)&addr, sizeof(addr));
            setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
    }

Here was the KASAN report from syzbot:

    BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341 [inline]
    BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
    Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044

    CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25
    Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
    Call Trace:
     __dump_stack lib/dump_stack.c:17 [inline]
     dump_stack+0x194/0x257 lib/dump_stack.c:53
     print_address_description+0x73/0x250 mm/kasan/report.c:252
     kasan_report_error mm/kasan/report.c:351 [inline]
     kasan_report+0x25b/0x340 mm/kasan/report.c:409
     check_memory_region_inline mm/kasan/kasan.c:260 [inline]
     check_memory_region+0x137/0x190 mm/kasan/kasan.c:267
     memcpy+0x37/0x50 mm/kasan/kasan.c:303
     memcpy include/linux/string.h:341 [inline]
     sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
     crypto_shash_update+0xcb/0x220 crypto/shash.c:109
     shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151
     crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
     hmac_finup+0x182/0x330 crypto/hmac.c:152
     crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
     shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172
     crypto_shash_digest+0xc4/0x120 crypto/shash.c:186
     hmac_setkey+0x36a/0x690 crypto/hmac.c:66
     crypto_shash_setkey+0xad/0x190 crypto/shash.c:64
     shash_async_setkey+0x47/0x60 crypto/shash.c:207
     crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200
     hash_setkey+0x40/0x90 crypto/algif_hash.c:446
     alg_setkey crypto/af_alg.c:221 [inline]
     alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254
     SYSC_setsockopt net/socket.c:1851 [inline]
     SyS_setsockopt+0x189/0x360 net/socket.c:1830
     entry_SYSCALL_64_fastpath+0x1f/0x96

Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2017-08-22  crypto: hash - add crypto_(un)register_ahashes()  (Rabin Vincent)

There are already helpers to (un)register multiple normal and AEAD algos. Add one for ahashes too.

Signed-off-by: Lars Persson <larper@axis.com>
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

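The new pair mirrors the existing bulk registration helpers; their likely shape:

    int crypto_register_ahashes(struct ahash_alg *algs, int count);
    void crypto_unregister_ahashes(struct ahash_alg *algs, int count);

A driver that defines an array of ahash_alg entries can then register them all with a single call instead of looping over crypto_register_ahash().
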
2017-04-10  crypto: ahash - Fix EINPROGRESS notification callback  (Herbert Xu)

The ahash API modifies the request's callback function in order to clean up after itself in some corner cases (unaligned final and missing finup).

When the request is complete ahash will restore the original callback and everything is fine. However, when the request gets an EBUSY on a full queue, an EINPROGRESS callback is made while the request is still ongoing. In this case the ahash API will incorrectly call its own callback.

This patch fixes the problem by creating a temporary request object on the stack which is used to relay EINPROGRESS back to the original completion function. This patch also adds code to preserve the original flags value.

Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...")
Cc: <stable@vger.kernel.org>
Reported-by: Sabrina Dubroca <sd@queasysnail.net>
Tested-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2016-06-23  crypto: sha1-mb - async implementation for sha1-mb  (Megha Dey)

Herbert wants the sha1-mb algorithm to have an async implementation: https://lkml.org/lkml/2016/4/5/286.

Currently, sha1-mb uses an async interface for the outer algorithm and a sync interface for the inner algorithm. This patch introduces an async interface for even the inner algorithm.

Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2016-02-06  crypto: hash - Remove crypto_hash interface  (Herbert Xu)

This patch removes all traces of the crypto_hash interface, now that everyone has switched over to shash or ahash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2014-08-25  crypto: sha-mb - multibuffer crypto infrastructure  (Tim Chen)

This patch introduces the multi-buffer crypto daemon which is responsible for submitting crypto jobs in a work queue to the responsible multi-buffer crypto algorithm. The idea of the multi-buffer algorithm is to put data streams from multiple jobs in a wide (AVX2) register and then take advantage of SIMD instructions to do crypto computation on several buffers simultaneously.

The multi-buffer crypto daemon is also responsible for flushing the remaining buffers to complete the computation if no new buffers arrive for a while.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2014-05-21  crypto: hash - Add real ahash walk interface  (Herbert Xu)

Although the existing hash walk interface has already been used by a number of ahash crypto drivers, it turns out that none of them were really asynchronous. They were all essentially polling for completion.

That's why nobody has noticed until now that the walk interface couldn't work with a real asynchronous driver since the memory is mapped using kmap_atomic.

As we now have a use-case for a real ahash implementation on x86, this patch creates a minimal ahash walk interface. Basically it just calls kmap instead of kmap_atomic and does away with the crypto_yield call. Real ahash crypto drivers don't need to yield since by definition they won't be hogging the CPU.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

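A sketch of the walk loop a driver built on this interface would use (the processing step is hypothetical; note the interface was later removed as unused, see the 2020-08-21 entry above):

    struct crypto_hash_walk walk;
    int nbytes;

    for (nbytes = crypto_ahash_walk_first(req, &walk); nbytes > 0;
         nbytes = crypto_hash_walk_done(&walk, 0)) {
            /* walk.data points at up to nbytes of kmap()ed input. */
            my_process_block(walk.data, nbytes);
    }
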
2012-08-01  crypto: add crypto_[un]register_shashes for [un]registering multiple shash entries at once  (Jussi Kivilinna)

Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash crypto modules that register multiple algorithms.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

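The simplified init/exit code would look roughly like this (hypothetical module; the algorithm definitions are elided):

    static struct shash_alg my_algs[2] = {
            /* ... two filled-in shash_alg definitions ... */
    };

    static int __init my_module_init(void)
    {
            return crypto_register_shashes(my_algs, ARRAY_SIZE(my_algs));
    }

    static void __exit my_module_exit(void)
    {
            crypto_unregister_shashes(my_algs, ARRAY_SIZE(my_algs));
    }
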
2009-07-15  crypto: ahash - Add unaligned handling and default operations  (Herbert Xu)

This patch exports the finup operation where available and adds a default finup operation for ahash. The operations final, finup and digest will now also deal with unaligned result pointers by copying the result. Finally, the export/import operations will now be exported too.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-14  crypto: ahash - Remove old_ahash_alg  (Herbert Xu)

Now that all ahash implementations have been converted to the new ahash type, we can remove old_ahash_alg and its associated support.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-14  crypto: crypto4xx - Switch to new style ahash  (Herbert Xu)

This patch changes crypto4xx to use the new style ahash type. In particular, we now use ahash_alg to define ahash algorithms instead of crypto_alg.

This is achieved by introducing a union that encapsulates the new type and the existing crypto_alg structure. They're told apart through a u32 field containing the type value.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-14  crypto: hash - Add helpers to free spawns  (Herbert Xu)

This patch adds the helpers crypto_drop_ahash and crypto_drop_shash so that these spawns can be dropped without ugly casts.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-14  crypto: ahash - Add instance/spawn support  (Herbert Xu)

This patch adds support for creating ahash instances and using ahash as spawns.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-14  crypto: ahash - Convert to new style algorithms  (Herbert Xu)

This patch converts crypto_ahash to the new style. The old ahash algorithm type is retained until the existing ahash implementations are also converted. All ahash users will automatically get the new crypto_ahash type.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-14  crypto: ahash - Add crypto_ahash_set_reqsize  (Herbert Xu)

This patch adds the helper crypto_ahash_set_reqsize so that implementations do not directly access the crypto_ahash structure.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

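Typical usage in a driver's tfm init path (hypothetical names), reserving per-request context space instead of poking at the crypto_ahash struct directly:

    /* Hypothetical per-request context for this driver. */
    struct my_req_ctx {
            u8 buffer[64];
    };

    static int my_cra_init(struct crypto_tfm *tfm)
    {
            crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
                                     sizeof(struct my_req_ctx));
            return 0;
    }
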
2009-07-14  crypto: shash - Export async functions  (Herbert Xu)

This patch exports the async functions so that they can be reused by cryptd when it switches over to using shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-14  crypto: shash - Make descsize a run-time attribute  (Herbert Xu)

This patch changes descsize to a run-time attribute so that implementations can change it in their init functions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

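With descsize a run-time attribute, callers size the operational state from the tfm rather than from the static algorithm definition; a sketch of allocating a descriptor (error handling abbreviated):

    struct shash_desc *desc;

    /* descsize is queried from the tfm at run time. */
    desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
    if (!desc)
            return -ENOMEM;
    desc->tfm = tfm;
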
2009-07-09  crypto: shash - Add shash_instance_ctx  (Herbert Xu)

This patch adds the helper shash_instance_ctx which is the shash analogue of crypto_instance_ctx.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

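As the analogue of crypto_instance_ctx, the helper presumably just unwraps the shash instance; a sketch:

    static inline void *shash_instance_ctx(struct shash_instance *inst)
    {
            return crypto_instance_ctx(shash_crypto_instance(inst));
    }
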
2009-07-08  crypto: shash - Add __crypto_shash_cast  (Herbert Xu)

This patch adds __crypto_shash_cast which turns a crypto_tfm into a crypto_shash. It's analogous to the other __crypto_*_cast functions.

It hasn't been needed until now since no existing shash algorithms have had an init function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-08  crypto: shash - Add crypto_shash_ctx_aligned  (Herbert Xu)

This patch adds crypto_shash_ctx_aligned which will be needed by hmac after its conversion to shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2009-07-08  crypto: shash - Add shash_register_instance  (Herbert Xu)

This patch adds shash_register_instance so that shash instances can be registered without bypassing the shash checks applied to normal algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>