<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/arch/arm/lib, branch v3.0.43</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/arch/arm/lib?h=v3.0.43</id>
<link rel='self' href='https://git.amat.us/linux/atom/arch/arm/lib?h=v3.0.43'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2011-05-27T21:56:53Z</updated>
<entry>
<title>ARM: 6945/1: Add unwinding support for division functions</title>
<updated>2011-05-27T21:56:53Z</updated>
<author>
<name>Laura Abbott</name>
<email>lauraa@codeaurora.org</email>
</author>
<published>2011-05-27T16:23:16Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=81479c246c07b703aeb4bf54933b7d928cb5b717'/>
<id>urn:sha1:81479c246c07b703aeb4bf54933b7d928cb5b717</id>
<content type='text'>
The software division functions never had unwinding annotations
added. Currently, when a division by zero occurs the backtrace shown
will stop at Ldiv0 or some completely unrelated function. Add
unwinding annotations in hopes of getting a more useful backtrace
when a division by zero occurs.

Signed-off-by: Laura Abbott &lt;lauraa@codeaurora.org&gt;
Acked-by: Dave Martin &lt;dave.martin@linaro.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>Merge branches 'fixes', 'pgt-next' and 'versatile' into devel</title>
<updated>2011-03-20T09:32:12Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@arm.linux.org.uk</email>
</author>
<published>2011-03-20T09:32:12Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=196f020fbbb83d246960548e73a40fd08f3e7866'/>
<id>urn:sha1:196f020fbbb83d246960548e73a40fd08f3e7866</id>
<content type='text'>
</content>
</entry>
<entry>
<title>ARM: pgtable: add pud-level code</title>
<updated>2011-02-21T19:24:14Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@arm.linux.org.uk</email>
</author>
<published>2010-11-21T16:27:49Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=516295e5ab4bf986865cfff856d484ec678e3b0b'/>
<id>urn:sha1:516295e5ab4bf986865cfff856d484ec678e3b0b</id>
<content type='text'>
Add pud_offset() et al. between the pgd and pmd code in preparation for
using pgtable-nopud.h rather than 4level-fixup.h.

This incorporates a fix from Jamie Iles &lt;jamie@jamieiles.com&gt; for
uaccess_with_memcpy.c.

Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: 6653/1: bitops: Use BX instead of MOV PC,LR</title>
<updated>2011-02-19T16:07:21Z</updated>
<author>
<name>Dave Martin</name>
<email>dave.martin@linaro.org</email>
</author>
<published>2011-02-08T11:09:52Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=3ba6e69ad887f8a814267ed36fd4bfbddf8855a9'/>
<id>urn:sha1:3ba6e69ad887f8a814267ed36fd4bfbddf8855a9</id>
<content type='text'>
The kernel doesn't officially need to interwork, but using BX
wherever appropriate will help instil good assembler coding
habits.

BX is appropriate here because this code is predicated on
__LINUX_ARM_ARCH__ &gt;= 6

Signed-off-by: Dave Martin &lt;dave.martin@linaro.org&gt;
Acked-by: Nicolas Pitre &lt;nicolas.pitre@linaro.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: bitops: switch set/clear/change bitops to use ldrex/strex</title>
<updated>2011-02-02T21:23:25Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@arm.linux.org.uk</email>
</author>
<published>2011-01-16T18:02:17Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=6323f0ccedf756dfe5f46549cec69a2d6d97937b'/>
<id>urn:sha1:6323f0ccedf756dfe5f46549cec69a2d6d97937b</id>
<content type='text'>
Switch the set/clear/change bitops to use the word-based exclusive
operations, which are present in a wider range of ARM architectures
than the byte-based exclusive operations.
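
The ldrex/strex retry structure can be sketched as follows (a hedged
Python model only: atomic_modify, set_bit, clear_bit and change_bit are
illustrative names, not the kernel API, and Python has no real exclusive
monitor; the actual implementation is ARM assembly in arch/arm/lib):

```python
def atomic_modify(cell, fn):
    # Sketch of an ldrex/strex loop: load the word, compute the new
    # value, and the "store-conditional" takes effect only if no other
    # writer intervened; otherwise retry.  Purely illustrative and not
    # actually race-free in Python.
    while True:
        old = cell[0]            # ldrex: load-exclusive
        new = fn(old)
        if cell[0] == old:       # strex reports success...
            cell[0] = new        # ...and the store takes effect
            return old
        # strex failed: another writer intervened, try again

def set_bit(cell, bit):
    return atomic_modify(cell, lambda v: v | 2**bit)

def clear_bit(cell, bit):
    # (v | mask) - mask clears the bit whether or not it was set
    return atomic_modify(cell, lambda v: (v | 2**bit) - 2**bit)

def change_bit(cell, bit):
    return atomic_modify(cell, lambda v: v ^ 2**bit)
```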

Tested record:
- Nicolas Pitre: ext3,rw,le
- Sourav Poddar: nfs,le
- Will Deacon: ext3,rw,le
- Tony Lindgren: ext3+nfs,le

Reviewed-by: Nicolas Pitre &lt;nicolas.pitre@linaro.org&gt;
Tested-by: Sourav Poddar &lt;sourav.poddar@ti.com&gt;
Tested-by: Will Deacon &lt;will.deacon@arm.com&gt;
Tested-by: Tony Lindgren &lt;tony@atomide.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: bitops: ensure set/clear/change bitops take a word-aligned pointer</title>
<updated>2011-02-02T21:21:53Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@arm.linux.org.uk</email>
</author>
<published>2011-01-16T17:59:44Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=a16ede35a2659170c855c5d267776666c0630f1f'/>
<id>urn:sha1:a16ede35a2659170c855c5d267776666c0630f1f</id>
<content type='text'>
Add additional instructions to our assembly bitops functions to ensure
that they only operate on word-aligned pointers.  This will be necessary
when we switch these operations to use the word-based exclusive
operations.
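
The intent of the added instructions can be sketched in Python (an
illustrative model only; the kernel code is assembly and tests the low
two address bits, while check_word_aligned is a hypothetical name):

```python
def check_word_aligned(addr):
    # The assembly checks the low two bits of the pointer and traps
    # if either is set; modelled here with an exception.
    if addr % 4 != 0:
        raise ValueError("bitops pointer not word-aligned: 0x%x" % addr)
    return addr
```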

Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: udelay: prevent math rounding resulting in short udelays</title>
<updated>2011-01-10T23:55:59Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@arm.linux.org.uk</email>
</author>
<published>2011-01-10T23:55:59Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=56949d414acd30353fdba4b64876a0a7953a7b77'/>
<id>urn:sha1:56949d414acd30353fdba4b64876a0a7953a7b77</id>
<content type='text'>
We perform the microseconds to loops calculation using a number of
multiplies and shift rights.  Each shift right rounds down the
resulting value, which can result in delays shorter than requested.
Ensure that we always round up.
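
The round-up arithmetic can be sketched in Python (an illustrative
model; the real code is ARM assembly in arch/arm/lib, and the shift
amount and scaled loops-per-jiffy value here are assumptions):

```python
def loops_floor(usecs, lpj_scaled, shift):
    # Floor behaviour: the right shift discards low-order bits,
    # so the computed loop count can undershoot the request.
    return (usecs * lpj_scaled) >> shift

def loops_round_up(usecs, lpj_scaled, shift):
    # Add (2**shift - 1) before shifting so the result rounds up
    # and the delay is never shorter than requested.
    return (usecs * lpj_scaled + 2**shift - 1) >> shift
```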

Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>Merge branch 'smp' into misc</title>
<updated>2011-01-06T22:32:03Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@arm.linux.org.uk</email>
</author>
<published>2011-01-06T22:31:35Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4ec3eb13634529c0bc7466658d84d0bbe3244aea'/>
<id>urn:sha1:4ec3eb13634529c0bc7466658d84d0bbe3244aea</id>
<content type='text'>
Conflicts:
	arch/arm/kernel/entry-armv.S
	arch/arm/mm/ioremap.c
</content>
</entry>
<entry>
<title>ARM: 6482/2: Fix find_next_zero_bit and related assembly</title>
<updated>2010-11-24T20:17:46Z</updated>
<author>
<name>James Jones</name>
<email>jajones@nvidia.com</email>
</author>
<published>2010-11-23T23:21:37Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=0e91ec0c06d2cd15071a6021c94840a50e6671aa'/>
<id>urn:sha1:0e91ec0c06d2cd15071a6021c94840a50e6671aa</id>
<content type='text'>
The find_next_bit, find_first_bit, find_next_zero_bit
and find_first_zero_bit functions were not properly
clamping to the maxbit argument at the bit level. They
were instead only checking maxbit at the byte level.
To fix this, add a compare and a conditional move
instruction at the end of the common bit-within-the-byte
code shared by all the functions, taking care not to
clobber the maxbit argument before it is used.
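
The bit-level clamp can be sketched in Python (an illustrative model of
the fix, not the assembly itself; the function name and byte-array
representation of the bitmap are assumptions):

```python
def find_first_zero_bit(bitmap_bytes, maxbit):
    # Scan byte by byte, then bit within the byte; clamp the final
    # answer to maxbit (the role of the added conditional move), so
    # a zero bit found past maxbit is reported as "not found".
    for byte_index, byte in enumerate(bitmap_bytes):
        if byte != 0xFF:                    # byte holds a zero bit
            for bit in range(8):
                if (byte >> bit) % 2 == 0:
                    found = byte_index * 8 + bit
                    return min(found, maxbit)   # bit-level clamp
    return maxbit                           # nothing found below maxbit
```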

Cc: &lt;stable@kernel.org&gt;
Reviewed-by: Nicolas Pitre &lt;nicolas.pitre@linaro.org&gt;
Tested-by: Stephen Warren &lt;swarren@nvidia.com&gt;
Signed-off-by: James Jones &lt;jajones@nvidia.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: 6384/1: Remove the domain switching on ARMv6k/v7 CPUs</title>
<updated>2010-11-04T15:44:31Z</updated>
<author>
<name>Catalin Marinas</name>
<email>catalin.marinas@arm.com</email>
</author>
<published>2010-09-13T15:03:21Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=247055aa21ffef1c49dd64710d5e94c2aee19b58'/>
<id>urn:sha1:247055aa21ffef1c49dd64710d5e94c2aee19b58</id>
<content type='text'>
This patch removes the domain switching functionality via the set_fs and
__switch_to functions on cores that have a TLS register.

Currently, the ioremap and vmalloc areas share the same level 1 page
tables and therefore have the same domain (DOMAIN_KERNEL). When the
kernel domain is modified from Client to Manager (in the __set_fs or
__switch_to functions), the XN (eXecute Never) bit is overridden and
newer CPUs can speculatively prefetch the ioremap'ed memory.

Linux performs the kernel domain switching to allow user-specific
functions (copy_to/from_user, get/put_user etc.) to access kernel
memory. In order for these functions to work with the kernel domain set
to Client, the patch modifies the LDRT/STRT and related instructions to
the LDR/STR ones.

The access rights of user pages are also modified to kernel read-only
rather than read/write so that the copy-on-write mechanism still
works. CPU_USE_DOMAINS is disabled only if the hardware has a TLS register
(CPU_32v6K is defined), since writing the TLS value to the high vectors page
isn't possible otherwise.

The user addresses passed to the kernel are checked by the access_ok()
function so that they do not point to the kernel space.

Tested-by: Anton Vorontsov &lt;cbouatmailru@gmail.com&gt;
Cc: Tony Lindgren &lt;tony@atomide.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
</feed>
