<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/arch/sh/include/asm/processor_32.h, branch v3.0.82</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/arch/sh/include/asm/processor_32.h?h=v3.0.82</id>
<link rel='self' href='https://git.amat.us/linux/atom/arch/sh/include/asm/processor_32.h?h=v3.0.82'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2011-01-11T05:39:35Z</updated>
<entry>
<title>sh: constify prefetch pointers.</title>
<updated>2011-01-11T05:39:35Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2011-01-11T05:39:35Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c81fc389255d287dbfb17a70cd172663f962341a'/>
<id>urn:sha1:c81fc389255d287dbfb17a70cd172663f962341a</id>
<content type='text'>
prefetch()/prefetchw() are supposed to take a const void * instead of a
straight void *, which the build recently started complaining about;
fix them up.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: Use GCC __builtin_prefetch() to implement prefetch().</title>
<updated>2010-11-18T05:53:18Z</updated>
<author>
<name>Giuseppe CAVALLARO</name>
<email>peppe.cavallaro@st.com</email>
</author>
<published>2010-11-17T06:50:17Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=d53e4307c2f3856167407a1d9b8f8fa001286066'/>
<id>urn:sha1:d53e4307c2f3856167407a1d9b8f8fa001286066</id>
<content type='text'>
GCC's __builtin_prefetch() was introduced a long time ago, and all
supported GCC versions have it, so this patch uses it to implement
prefetch() on SH2A and SH4.

The current prefetch implementation is almost equivalent to
__builtin_prefetch().
The third parameter of __builtin_prefetch() is the locality hint,
which is not supported on SH architectures. It has been set to
three; whether that value is also suitable for SH2A should be
verified, as I did not test on that architecture.

The builtin should be more efficient than an __asm__ because it
needs fewer barriers, and because the compiler no longer sees the
instruction as a "black box", allowing better code generation.

This has been already done on other architectures (see the commit:
0453fb3c528c5eb3483441a466b24a4cb409eec5).

Many thanks to Christian Bruel &lt;christain.bruel@st.com&gt; for his
support in evaluating the impact of the GCC built-in on the SH4
architecture.

No regressions found while testing with LMbench on STLinux targets.

Signed-off-by: Giuseppe Cavallaro &lt;peppe.cavallaro@st.com&gt;
Signed-off-by: Stuart Menefy &lt;stuart.menefy@st.com&gt;
Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: Add kprobe-based event tracer.</title>
<updated>2010-06-14T06:16:53Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-06-14T06:16:53Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=eaaaeef392cb245e415c31d480ed2d5a466fd88f'/>
<id>urn:sha1:eaaaeef392cb245e415c31d480ed2d5a466fd88f</id>
<content type='text'>
This follows the x86/ppc changes for kprobe-based event tracing on sh.
While kprobes is only supported on 32-bit sh, we provide the API for
HAVE_REGS_AND_STACK_ACCESS_API for both 32 and 64-bit.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: __cpuinit annotate the CPU init path.</title>
<updated>2010-04-21T03:20:42Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-04-21T03:20:42Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4a6feab0ee5240c4bd5378d9f8a46b85718c68a7'/>
<id>urn:sha1:4a6feab0ee5240c4bd5378d9f8a46b85718c68a7</id>
<content type='text'>
All of the regular CPU init path needs to be __cpuinit annotated for CPU
hotplug.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: wire up SET/GET_UNALIGN_CTL.</title>
<updated>2010-02-23T03:56:30Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-02-23T03:56:30Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=94ea5e449ae834af058ef005d16a8ad44fcf13d6'/>
<id>urn:sha1:94ea5e449ae834af058ef005d16a8ad44fcf13d6</id>
<content type='text'>
This hooks up the SET/GET_UNALIGN_CTL knobs cribbing the bulk of it from
the PPC and ia64 implementations. The thread flags happen to be the
logical inverse of what the global fault mode is set to, so this works
out pretty cleanly. By default the global fault mode is used, with tasks
now being able to override their own settings via prctl().

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh64: Fix up the build for the thread_xstate changes.</title>
<updated>2010-01-19T06:40:03Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-01-19T06:40:03Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=3ef2932b8c1fc89408ef1fd4b1e1c2caabc7f07d'/>
<id>urn:sha1:3ef2932b8c1fc89408ef1fd4b1e1c2caabc7f07d</id>
<content type='text'>
This updates the sh64 processor info with the sh32 changes in order to
tie in to the generic task_xstate management code.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>Merge branches 'sh/xstate', 'sh/hw-breakpoints' and 'sh/stable-updates'</title>
<updated>2010-01-13T04:02:55Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-01-13T04:02:55Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=644755e7867710a23e6243dcc69cfc071985f560'/>
<id>urn:sha1:644755e7867710a23e6243dcc69cfc071985f560</id>
<content type='text'>
</content>
</entry>
<entry>
<title>sh: Move over to dynamically allocated FPU context.</title>
<updated>2010-01-13T03:51:40Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-01-13T03:51:40Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=0ea820cf9bf58f735ed40ec67947159c4f170012'/>
<id>urn:sha1:0ea820cf9bf58f735ed40ec67947159c4f170012</id>
<content type='text'>
This follows the x86 xstate changes and implements a task_xstate slab
cache that is dynamically sized to match one of hard FP/soft FP/FPU-less.

This also tidies up and consolidates some of the SH-2A/SH-4 FPU
fragmentation. Now fpu state restorers are commonly defined, with the
init_fpu()/fpu_init() mess reworked to follow the x86 convention.
The fpu_init() register initialization has been replaced by xstate setup
followed by writing out to hardware via the standard restore path.

As init_fpu() now performs a slab allocation, a secondary
lighter-weight restorer is also introduced for the context switch.

In the future the DSP state will be rolled in here, too.

More work remains for math emulation and the SH-5 FPU, which presently
uses its own special (UP-only) interfaces.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: Move start_thread() out of line.</title>
<updated>2010-01-12T09:52:00Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-01-12T09:52:00Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=70e068eef97d05c97c3512f82352f39fdadfa8cb'/>
<id>urn:sha1:70e068eef97d05c97c3512f82352f39fdadfa8cb</id>
<content type='text'>
start_thread() will become a bit heavier with the xstate freeing to be
added in, so move it out-of-line in preparation.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: Abstracted SH-4A UBC support on hw-breakpoint core.</title>
<updated>2010-01-05T10:06:45Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-01-05T10:06:45Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4352fc1b12fae4c753a063a2f162ddf9277af774'/>
<id>urn:sha1:4352fc1b12fae4c753a063a2f162ddf9277af774</id>
<content type='text'>
This is the next big chunk of hw_breakpoint support. This decouples
the SH-4A support from the core and moves it out in to its own stub,
following many of the conventions established with the perf events
layering.

In addition to extending SH-4A support to encapsulate the remainder
of the UBC channels, clock framework support for handling the UBC
interface clock is added as well, allowing for dynamic clock gating.

This also fixes up a regression introduced by the SIGTRAP handling
that broke the ksym_tracer; the current support now works well with
ksym_tracer, ptrace, and kgdb. The kprobes singlestep code will
follow in turn.

With this in place, the remaining UBC variants (SH-2A and SH-4) can now
be trivially plugged in.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
</feed>
