<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/drivers/crypto, branch v2.6.32.56</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/drivers/crypto?h=v2.6.32.56</id>
<link rel='self' href='https://git.amat.us/linux/atom/drivers/crypto?h=v2.6.32.56'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2010-12-09T21:27:10Z</updated>
<entry>
<title>crypto: padlock - Fix AES-CBC handling on odd-block-sized input</title>
<updated>2010-12-09T21:27:10Z</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2010-11-04T18:38:39Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=86a48e105f8558d15df47d3a71144406337e7790'/>
<id>urn:sha1:86a48e105f8558d15df47d3a71144406337e7790</id>
<content type='text'>
commit c054a076a1bd4731820a9c4d638b13d5c9bf5935 upstream.

On certain VIA chipsets AES-CBC requires the input/output to be
a multiple of 64 bytes.  We had a workaround for this but it was
buggy as it sent the whole input for processing when it is meant
to only send the initial number of blocks which makes the rest
a multiple of 64 bytes.

As expected this causes memory corruption whenever the workaround
kicks in.

Reported-by: Phil Sutter &lt;phil@nwl.cc&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>crypto: padlock-sha - Add import/export support</title>
<updated>2010-02-23T15:37:54Z</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2010-01-31T22:17:56Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=fa56c7eaeca469055d79d1a2fe66a67829e44bf0'/>
<id>urn:sha1:fa56c7eaeca469055d79d1a2fe66a67829e44bf0</id>
<content type='text'>
commit a8d7ac279743077965afeca0c9ed748507b68e89 upstream.

As the padlock driver for SHA uses a software fallback to perform
partial hashing, it must implement custom import/export functions.
Otherwise hmac which depends on import/export for prehashing will
not work with padlock-sha.

Reported-by: Wolfgang Walter &lt;wolfgang.walter@stwm.de&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>crypto: padlock-aes - Use the correct mask when checking whether copying is required</title>
<updated>2009-11-03T15:32:03Z</updated>
<author>
<name>Chuck Ebbert</name>
<email>cebbert@redhat.com</email>
</author>
<published>2009-11-03T15:32:03Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=e8edb3cbd7dd8acf6c748a02d06ec1d82c4124ea'/>
<id>urn:sha1:e8edb3cbd7dd8acf6c748a02d06ec1d82c4124ea</id>
<content type='text'>
Masking with PAGE_SIZE is just wrong...

Signed-off-by: Chuck Ebbert &lt;cebbert@redhat.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: padlock-sha - Fix stack alignment</title>
<updated>2009-09-22T06:21:53Z</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2009-09-22T06:21:53Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4c6ab3ee4cdb86cbd4e9400dd22fad7701cbe795'/>
<id>urn:sha1:4c6ab3ee4cdb86cbd4e9400dd22fad7701cbe795</id>
<content type='text'>
The PadLock hardware requires the output buffer for SHA to be
128-bit aligned.  We currentply place the buffer on the stack,
and ask gcc to align it to 128 bits.  That doesn't work on i386
because the kernel stack is only aligned to 32 bits.  This patch
changes the code to align the buffer by hand so that the hardware
doesn't fault on unaligned buffers.

Reported-by: Séguier Régis &lt;rguier@e-teleport.net&gt;
Tested-by: Séguier Régis &lt;rguier@e-teleport.net&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: talitos - add support for 36 bit addressing</title>
<updated>2009-08-13T01:51:51Z</updated>
<author>
<name>Kim Phillips</name>
<email>kim.phillips@freescale.com</email>
</author>
<published>2009-08-13T01:51:51Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=81eb024c7e63f53b871797f6e2defccfd008dcd4'/>
<id>urn:sha1:81eb024c7e63f53b871797f6e2defccfd008dcd4</id>
<content type='text'>
Enabling extended addressing in the h/w requires we always assign the
extended address component (eptr) of the talitos h/w pointer.  This is
for e500 based platforms with large memories.

Signed-off-by: Kim Phillips &lt;kim.phillips@freescale.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: talitos - align locks on cache lines</title>
<updated>2009-08-13T01:50:38Z</updated>
<author>
<name>Kim Phillips</name>
<email>kim.phillips@freescale.com</email>
</author>
<published>2009-08-13T01:50:38Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4b992628812137e30cda3510510cf3c052345b30'/>
<id>urn:sha1:4b992628812137e30cda3510510cf3c052345b30</id>
<content type='text'>
align channel access locks onto separate cache lines (for performance
reasons).  This is done by placing per-channel variables into their own
private struct, and using the cacheline_aligned attribute within that
struct.

Signed-off-by: Kim Phillips &lt;kim.phillips@freescale.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: talitos - simplify hmac data size calculation</title>
<updated>2009-08-13T01:49:06Z</updated>
<author>
<name>Kim Phillips</name>
<email>kim.phillips@freescale.com</email>
</author>
<published>2009-08-13T01:49:06Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=e41256f139b9148cfa12d2d057fec39e3d181ff0'/>
<id>urn:sha1:e41256f139b9148cfa12d2d057fec39e3d181ff0</id>
<content type='text'>
don't do request-&gt;src vs. assoc pointer math - it's the same as adding
assoclen and ivsize (just with more effort).

Signed-off-by: Kim Phillips &lt;kim.phillips@freescale.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: mv_cesa - Add support for Orion5X crypto engine</title>
<updated>2009-08-10T02:50:03Z</updated>
<author>
<name>Sebastian Andrzej Siewior</name>
<email>sebastian@breakpoint.cc</email>
</author>
<published>2009-08-10T02:50:03Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=85a7f0ac5370901916a21935e1fafbe397b70f80'/>
<id>urn:sha1:85a7f0ac5370901916a21935e1fafbe397b70f80</id>
<content type='text'>
This adds support for Marvell's Cryptographic Engines and Security
Accelerator (CESA) which can be found on a few SoC.
Tested with dm-crypt.

Acked-by: Nicolas Pitre &lt;nico@marvell.com&gt;
Signed-off-by: Sebastian Andrzej Siewior &lt;sebastian@breakpoint.cc&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: padlock - Fix hashing of partial blocks</title>
<updated>2009-07-16T02:33:27Z</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2009-07-16T02:33:27Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=e9b25f16cda88b33fe15b30c009912e6c471edda'/>
<id>urn:sha1:e9b25f16cda88b33fe15b30c009912e6c471edda</id>
<content type='text'>
When we encounter partial blocks in finup, we'll invoke the xsha
instruction with a bogus count that is not a multiple of the block
size.  This patch fixes it.

Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>crypto: padlock - Fix compile error on i386</title>
<updated>2009-07-15T10:37:48Z</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2009-07-15T10:37:48Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=faae890883624e14a328863eafabf54a36698774'/>
<id>urn:sha1:faae890883624e14a328863eafabf54a36698774</id>
<content type='text'>
The previous change to allow hashing from states other than the
initial broke compilation on i386 because the inline assembly
tried to squeeze a u64 into a 32-bit register.  As we've already
checked for 32-bit overflows we can simply truncate it to u32,
or unsigned long so that we don't truncate at all on x86-64.

Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
</feed>
