|
The crypto API is made up of the part facing users such as IPsec and the
low-level part which is used by cryptographic entities such as algorithms.
This patch splits out the latter so that the two APIs are more clearly
delineated. As a bonus the low-level API can now be modularised if all
algorithms are built as modules.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Up until now we've relied on module reference counting to ensure that the
crypto_alg structures don't disappear from under us. This was good enough
as long as each crypto_alg came from exactly one module.
However, with parameterised crypto algorithms a crypto_alg object may need
two or more modules to operate. This means that we need to count the
references to the crypto_alg object directly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The functions crypto_alg_get and crypto_alg_put operate on the crypto
modules rather than the algorithms. Therefore it makes sense to call
them crypto_mod_get and crypto_mod_put respectively.
This is needed because we need real algorithm reference counters for
parameterised algorithms, as they can be unregistered from below when
their parameter algorithms are themselves unregistered.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The patch passed the tcrypt tests and automated filesystem tests.
This rewrite resulted in a nice performance increase over my last patch.
Short summary of the tcrypt benchmarks:
Twofish Assembler vs. Twofish C (256bit 8kb block CBC)
encrypt: -27% Cycles
decrypt: -23% Cycles
Twofish Assembler vs. AES Assembler (128bit 8kb block CBC)
encrypt: +18% Cycles
decrypt: +15% Cycles
Twofish Assembler vs. AES Assembler (256bit 8kb block CBC)
encrypt: -9% Cycles
decrypt: -8% Cycles
Full Output:
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-c-x86_64.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-asm-x86_64.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-aes-asm-x86_64.txt
Here is another bonnie++ benchmark with encrypted filesystems. Most runs maxed
out the hard disk. It should give some idea of what the module can do for
encrypted filesystem performance even though you can't see the full numbers.
http://homepages.tu-darmstadt.de/~fritschi/twofish/output_20060610_130806_x86_64.html
Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The patch passed the tcrypt tests and automated filesystem tests.
This rewrite resulted in a nice performance increase over my last patch.
Short summary of the tcrypt benchmarks:
Twofish Assembler vs. Twofish C (256bit 8kb block CBC)
encrypt: -33% Cycles
decrypt: -45% Cycles
Twofish Assembler vs. AES Assembler (128bit 8kb block CBC)
encrypt: +3% Cycles
decrypt: -22% Cycles
Twofish Assembler vs. AES Assembler (256bit 8kb block CBC)
encrypt: -20% Cycles
decrypt: -36% Cycles
Full Output:
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-asm-i586.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-c-i586.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-aes-asm-i586.txt
Here is another bonnie++ benchmark with encrypted filesystems. All runs with
the twofish assembler modules max out the drive speed. It should give some
idea of what the module can do for encrypted filesystem performance even though
you can't see the full numbers.
http://homepages.tu-darmstadt.de/~fritschi/twofish/output_20060611_205432_x86.html
Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds a proper driver name and priority to the generic C
implementation to allow coexistence of the C and assembler modules.
Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch splits the twofish crypto routine into a common part (key
setup), which will be used by all twofish crypto modules (generic C, i586
assembler and x86_64 assembler), and a generic C part. It also creates a new
header file which will be used by all three modules.
This eliminates all code duplication.
Correctness was verified with the tcrypt module and automated test scripts.
Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
It makes no sense to build tcrypt into the kernel. In fact, now that
the driver init function's return status is being checked, it is
harmful to do so.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds speed tests (benchmarks) for digest algorithms.
Tests are run with different buffer sizes (16 bytes up to 8 kB), and for
each buffer size multiple tests are run with different update()
sizes (e.g. hashing a 64-byte buffer in four 16-byte updates).
There is no correctness checking of the result, and all tests and
algorithms use the same input buffer.
Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Intentionally return -EAGAIN from module_init() to ensure
it doesn't stay loaded in the kernel. The module does all
its work from init() and doesn't offer any runtime
functionality => we don't need it in memory, do we?
Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
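[Editor's note: a minimal sketch of the idea, assuming a hypothetical test routine do_test(); the module runs everything from its init function and then deliberately fails the load so nothing stays resident.]
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/module.h>

static void do_test(void)
{
	/* run the self-tests / benchmarks here */
}

static int __init example_tcrypt_init(void)
{
	do_test();		/* all the work happens at load time */
	return -EAGAIN;		/* deliberately fail so the module is unloaded again */
}

module_init(example_tcrypt_init);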
|
|
We already allow asynchronous removal of existing algorithm modules. By
allowing the replacement of existing algorithms, we can replace algorithms
without having to wait for all existing users to complete.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
We do need to change these names now and even more so in future with
instantiated algorithms. So let's stop lying to the compiler and get
rid of the const modifiers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds the hooks cra_init/cra_exit which are called during a tfm's
construction and destruction respectively. This will be used by the instances
to allocate child tfms.
For now this lets us get rid of the coa_init/coa_exit functions which are
used for exactly that purpose (unlike the dia_init function which is called
for each transaction).
In fact the coa_exit path is currently buggy as it may get called twice
when an error is encountered during initialisation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
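[Editor's note: a hedged sketch of how the new hooks could be used; the cra_init/cra_exit fields are the ones this patch adds, while the example algorithm, its context and the child allocation are purely illustrative.]
#include <linux/crypto.h>
#include <linux/errno.h>

struct example_ctx {
	struct crypto_tfm *child;		/* child tfm owned by this tfm */
};

static int example_init_tfm(struct crypto_tfm *tfm)
{
	struct example_ctx *ctx = crypto_tfm_ctx(tfm);

	ctx->child = crypto_alloc_tfm("aes", 0);	/* construct-time allocation */
	return ctx->child ? 0 : -ENOMEM;
}

static void example_exit_tfm(struct crypto_tfm *tfm)
{
	struct example_ctx *ctx = crypto_tfm_ctx(tfm);

	crypto_free_tfm(ctx->child);			/* runs exactly once on destruction */
}

static struct crypto_alg example_alg = {
	.cra_name	= "example",
	.cra_ctxsize	= sizeof(struct example_ctx),
	.cra_init	= example_init_tfm,		/* called during tfm construction */
	.cra_exit	= example_exit_tfm,		/* called during tfm destruction */
	/* remaining fields elided */
};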
|
|
Fix a few omissions in passing TFM instead of CTX to algorithms.
Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Up until now algorithms have been happy to get a context pointer since
they know everything that's in the tfm already (e.g., alignment, block
size).
However, once we have parameterised algorithms, such information will
be specific to each tfm. So the algorithm API needs to be changed to
pass the tfm structure instead of the context pointer.
This patch is basically a text substitution. The only tricky bit is
the assembly routines that need to get the context pointer offset
through asm-offsets.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
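[Editor's note: roughly, the mechanical change looks like the sketch below (function and context names are illustrative): the first argument becomes the tfm, and the context is recovered with crypto_tfm_ctx().]
#include <linux/crypto.h>
#include <linux/errno.h>

struct example_ctx {
	u32 round_keys[60];			/* whatever the algorithm keeps per tfm */
};

/* old style: the algorithm was handed the raw context pointer
 *   static int example_setkey(void *ctx_arg, const u8 *key,
 *                             unsigned int keylen, u32 *flags);
 */

/* new style: the algorithm is handed the tfm and derives the context itself */
static int example_setkey(struct crypto_tfm *tfm, const u8 *key,
			  unsigned int keylen, u32 *flags)
{
	struct example_ctx *ctx = crypto_tfm_ctx(tfm);

	if (keylen > sizeof(ctx->round_keys))
		return -EINVAL;
	/* ... expand key into ctx->round_keys; per-tfm properties such as
	 * alignment or block size are now reachable via tfm as well ... */
	return 0;
}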
|
|
Various digest algorithms operate one block at a time and therefore
keep a temporary buffer of partial blocks. This buffer does not need
to be initialised since there is a counter which indicates what is and
isn't valid in it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some hash modules load/store data words directly. The digest layer
should pass a properly aligned buffer to the update()/final() methods. This
patch also adds cra_alignmask to some hash modules.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
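[Editor's note: a brief, hedged example of the field in question: a digest that dereferences u32 words directly can declare a 4-byte requirement and the digest layer will then hand update()/final() aligned data. The algorithm shown is hypothetical.]
#include <linux/crypto.h>

static struct crypto_alg example_digest_alg = {
	.cra_name	= "example-digest",
	.cra_flags	= CRYPTO_ALG_TYPE_DIGEST,
	.cra_blocksize	= 64,
	.cra_alignmask	= 3,	/* buffers passed to update()/final() are 4-byte aligned */
	/* remaining fields elided */
};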
|
|
On 64-bit platforms, reading 64-bit keys (which are only guaranteed to be
32-bit aligned) one 64-bit word at a time will result in unaligned accesses.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
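[Editor's note: one way to avoid the problem, sketched under the assumption that the key really is 32-bit aligned: assemble each 64-bit word from two aligned 32-bit loads instead of doing a single 64-bit load.]
#include <asm/byteorder.h>
#include <linux/types.h>

static inline u64 load_key64(const __be32 *key)
{
	/* two aligned 32-bit loads instead of one possibly unaligned 64-bit load */
	return ((u64)be32_to_cpu(key[0]) << 32) | be32_to_cpu(key[1]);
}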
|
|
The AES setkey routine writes 64 bytes to the E_KEY area even though
there are only 60 bytes there. It is in fact safe since E_KEY is
immediately followed by D_KEY which is initialised afterwards. However,
doing this may trigger undefined behaviour and makes Coverity unhappy.
So by combining E_KEY and D_KEY into one array we sidestep this issue
altogether.
This problem was reported by Adrian Bunk.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Force 32-bit alignment on keys in tcrypt test vectors. Also rearrange the
structure to prevent unnecessary padding.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The "des3_ede" and "serpent" lack cra_alignmask.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch converts crypto/ to kzalloc usage.
Compile tested with allyesconfig.
Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since tfm contexts can contain arbitrary types we should provide at least
natural alignment (__attribute__ ((__aligned__))) for them. In particular,
this is needed on the Xscale which is a 32-bit architecture with a u64 type
that requires 64-bit alignment. This problem was reported by Ronen Shitrit.
The crypto_tfm structure's size was 44 bytes on 32-bit architectures and
80 bytes on 64-bit architectures. So adding this requirement only means
that we have to add an extra 4 bytes on 32-bit architectures.
On i386 the natural alignment is 16 bytes which also benefits the VIA
Padlock as it no longer has to manually align its context structure to
128 bits.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
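[Editor's note: a hedged sketch of the general technique (the struct name is illustrative, not the kernel's): the context storage at the end of the tfm is given natural alignment with the attribute, so any member type placed in it is safe.]
struct example_tfm {
	/* ... fixed bookkeeping fields ... */
	void *ctx[] __attribute__ ((__aligned__));	/* naturally aligned context area */
};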
|
|
Convert open coded rotations to rol32/ror32.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
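[Editor's note: for illustration, an open-coded 32-bit rotation and the rol32() equivalent it gets converted to.]
#include <linux/bitops.h>
#include <linux/types.h>

static inline u32 rotl5_open_coded(u32 x)
{
	return (x << 5) | (x >> 27);	/* hand-written rotate left by 5 */
}

static inline u32 rotl5_helper(u32 x)
{
	return rol32(x, 5);		/* same result, clearer intent */
}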
|
|
A bunch of asm/bug.h includes are both not needed (since the header will get
pulled in anyway) and bogus (since they are done too early). Removed.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Many cipher implementations use 4-byte/8-byte loads/stores which require
alignment on some architectures. This patch explicitly sets the alignment
requirements for them.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The cipher code path may allocate up to two blocks of data on the stack.
Therefore we need to place limits on the maximum block size.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
After a partial update, the done pointer is off to the right by 64 bytes.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since the temporary buffer is used as an argument to cia_decrypt, it must be
aligned by cra_alignmask. This bug was found by linux@horizon.com.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch avoids shifting the count left and right needlessly for each
call to sha1_update(). It instead can be done only once at the end in
sha1_final().
Keeping the previous test example (sha1_update() successively called with
len=64), a 1.3% performance increase can be observed on i386, or 0.2% on
ARM. The generated code is also smaller on ARM.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
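[Editor's note: a hedged sketch of the idea (context layout and names are illustrative): update() keeps a plain byte count, and the single shift to a bit count happens once when final() appends the length.]
#include <linux/types.h>
#include <asm/byteorder.h>

struct example_sha1_ctx {
	u64 count;			/* total bytes hashed, kept in bytes */
	u8  buffer[64];
};

static unsigned int example_buffer_offset(const struct example_sha1_ctx *sctx)
{
	return sctx->count & 0x3f;	/* partial-block offset, no ">> 3" per call */
}

static __be64 example_final_bitcount(const struct example_sha1_ctx *sctx)
{
	return cpu_to_be64(sctx->count << 3);	/* bytes -> bits, done only once */
}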
|
|
This patch gives more descriptive names to the variables i and j.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The current code unconditionally copies the first block for every call to
sha1_update(). This can be avoided if there is no pending partial block.
This is always the case on the first call to sha1_update() (if the length
is >= 64, of course).
Furthermore, temp is not needed at all if sha_transform is never invoked.
Also consolidate the sha_transform calls into one to reduce code size.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As the Crypto API now allows multiple implementations to be registered
for the same algorithm, we no longer have to play tricks with Kconfig
to select the right AES implementation.
This patch sets the driver name and priority for all the AES
implementations and removes the Kconfig conditions on the C implementation
for AES.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This is the first step on the road towards asynchronous support in
the Crypto API. It adds support for having multiple crypto_alg objects
for the same algorithm registered in the system.
For example, each device driver would register a crypto_alg object
for each algorithm that it supports, while at the same time the
user may load software implementations of those same algorithms.
Users of the Crypto API may then select a specific implementation
by name, or let the API pick the implementation with the highest
priority for a given algorithm.
The priority field is a 32-bit signed integer. In future it will be
possible to modify it from user-space.
This also provides a solution to the problem of selecting amongst
various AES implementations, that is, aes vs. aes-i586 vs. aes-padlock.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
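[Editor's note: a hedged illustration of how two registrations for the same algorithm can now coexist; the driver names and priority values below are made up.]
#include <linux/crypto.h>

static struct crypto_alg aes_generic_example = {
	.cra_name		= "aes",		/* name users request */
	.cra_driver_name	= "aes-generic",	/* unique per implementation */
	.cra_priority		= 100,
	/* remaining fields elided */
};

static struct crypto_alg aes_accel_example = {
	.cra_name		= "aes",
	.cra_driver_name	= "aes-accel",
	.cra_priority		= 300,	/* preferred when the user just asks for "aes" */
	/* remaining fields elided */
};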
|
|
A lot of crypto code needs to read/write 32-bit or 64-bit words in a
specific byte order. Many implementations open-code this by reading/writing
one byte at a time. This patch converts all the applicable usages over
to use the standard byte order macros.
This is based on a previous patch by Denis Vlasenko.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
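[Editor's note: for illustration, the byte-at-a-time load being replaced and its macro equivalent; the helper names are mine.]
#include <asm/byteorder.h>
#include <linux/types.h>

static inline u32 load_be32_open_coded(const u8 *p)
{
	return ((u32)p[0] << 24) | ((u32)p[1] << 16) | ((u32)p[2] << 8) | p[3];
}

static inline u32 load_be32_macro(const __be32 *p)
{
	return be32_to_cpu(*p);		/* assumes the pointer is suitably aligned */
}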
|
|
Sanitize some s390 Kconfig options. We have ARCH_S390, ARCH_S390X,
ARCH_S390_31, 64BIT, S390_SUPPORT and COMPAT. Replace these six options with
S390, 64BIT and COMPAT.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Add new test vectors to the AES test suite for AES CBC and for AES with
plaintext larger than the AES block size.
Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Add support for the hardware accelerated AES crypto algorithm.
Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Add support for the hardware accelerated sha256 crypto algorithm.
Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Replace all references to z990 with s390 in the in-kernel crypto files in
arch/s390/crypto. The code is not specific to a particular machine (z990) but
to the s390 platform. Big diff, but no functional change.
Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The cipher code relies on the fact that the block size is a multiple
of the required alignment. So we should check this at the time of
algorithm registration. We also ensure that the block size is bounded
by the page size.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
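[Editor's note: a hedged reconstruction of what such registration-time checks could look like; the exact form in the patch may differ.]
#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/mm.h>

static int example_check_alg(const struct crypto_alg *alg)
{
	/* block size must be a multiple of the required alignment ... */
	if (alg->cra_blocksize & alg->cra_alignmask)
		return -EINVAL;
	/* ... and bounded by the page size */
	if (alg->cra_blocksize > PAGE_SIZE)
		return -EINVAL;
	return 0;
}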
|
|
This patch rewrites various occurrences of &sg[0], where sg is an array
of length one, to simply sg.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch uses sg_set_buf/sg_init_one in some places where their
functionality was duplicated by hand.
Signed-off-by: David Hardeman <david@2gen.com>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Greg KH <greg@kroah.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
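[Editor's note: a small sketch of the simplification, under the scatterlist layout of that era (page/offset/length fields): the helper replaces the hand-rolled one-entry setup.]
#include <linux/scatterlist.h>
#include <linux/mm.h>

static void example_map_buffer(struct scatterlist *sg, void *buf, unsigned int len)
{
	/* open-coded form being replaced:
	 *   sg->page   = virt_to_page(buf);
	 *   sg->offset = offset_in_page(buf);
	 *   sg->length = len;
	 */
	sg_init_one(sg, buf, len);	/* equivalent single call */
}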
|
|
The boundary check in the standard multi-block cipher processors is
broken when nbytes is not a multiple of bsize. In those cases it will
always process an extra block.
This patch corrects the check so that it processes at most nbytes of
data.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
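[Editor's note: a hedged sketch of the corrected walk (names are illustrative, not the kernel's): whole blocks only, so at most nbytes of data is ever touched.]
#include <linux/crypto.h>
#include <linux/types.h>

static unsigned int example_process_blocks(struct crypto_tfm *tfm,
		void (*fn)(struct crypto_tfm *, u8 *, const u8 *),
		u8 *dst, const u8 *src,
		unsigned int nbytes, unsigned int bsize)
{
	/* previously this was effectively a do/while, overrunning by one block
	 * whenever nbytes was not a multiple of bsize */
	while (nbytes >= bsize) {
		fn(tfm, dst, src);
		src += bsize;
		dst += bsize;
		nbytes -= bsize;
	}
	return nbytes;			/* remainder smaller than one block */
}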
|
|
The crypto layer currently uses in_atomic() to determine whether it is
allowed to sleep. This is incorrect since spin locks don't always cause
in_atomic() to return true.
Instead of that, this patch returns to an earlier idea of a per-tfm flag
which determines whether sleeping is allowed. Unlike the earlier version,
the default is to not allow sleeping. This ensures that no existing code
can break.
As usual, this flag may either be set through crypto_alloc_tfm(), or
just before a specific crypto operation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
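[Editor's note: for illustration, the two ways of opting in described above; the flag name CRYPTO_TFM_REQ_MAY_SLEEP follows the patch, the surrounding code is a sketch.]
#include <linux/crypto.h>

static struct crypto_tfm *example_alloc_sleeping_cipher(void)
{
	/* request it at allocation time ... */
	struct crypto_tfm *tfm = crypto_alloc_tfm("aes", CRYPTO_TFM_REQ_MAY_SLEEP);

	if (tfm)
		/* ... or set it just before a specific crypto operation */
		tfm->crt_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
	return tfm;
}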
|
|
The XTEA implementation was incorrect due to a misinterpretation of
operator precedence. Because of the widespread nature of this
error, the erroneous implementation will be kept, albeit under the
new name of XETA.
Signed-off-by: Aaron Grothe <ajgrothe@yahoo.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
`gcc -W' likes to complain if the static keyword is not at the beginning of
the declaration. This patch replaces all remaining occurrences of "inline
static" with "static inline" in the entire kernel tree (140 occurrences in
47 files).
While making this change I came across a few lines with trailing whitespace
that I also fixed up; I have also added or removed a blank line or two here
and there, but there are no functional changes in the patch.
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Noticed by Ken-ichirou MATSUZAWA.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
I've made a new implementation of DES to replace the old one in the kernel.
It provides faster encryption on all tested processors apart from the original
Pentium, and key setup is many times faster.
Speed relative to old kernel implementation:
Processor              des_setkey  des_encrypt  des3_ede_setkey  des3_ede_encrypt
Pentium 120MHz            6.8         0.82           7.2              0.86
Pentium III 1.266GHz      5.6         1.19           5.8              1.34
Pentium M 1.3GHz          5.7         1.15           6.0              1.31
Pentium 4 2.266GHz        5.8         1.24           6.0              1.40
Pentium 4E 3GHz           5.4         1.27           5.5              1.48
StrongARM 1110 206MHz     4.3         1.03           4.4              1.14
Athlon XP 2GHz            7.8         1.44           8.1              1.61
Athlon 64 2GHz            7.8         1.34           8.3              1.49
Signed-off-by: Dag Arne Osvik <da@osvik.no>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The iv field in des_ctx/des3_ede_ctx/serpent_ctx has never been used.
This was noticed by Dag Arne Osvik.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|