<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/arch/arc/include, branch v3.12-rc2</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/arch/arc/include?h=v3.12-rc2</id>
<link rel='self' href='https://git.amat.us/linux/atom/arch/arc/include?h=v3.12-rc2'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2013-09-12T14:40:08Z</updated>
<entry>
<title>ARC: SMP failed to boot due to missing IVT setup</title>
<updated>2013-09-12T14:40:08Z</updated>
<author>
<name>Noam Camus</name>
<email>noamc@ezchip.com</email>
</author>
<published>2013-09-12T07:37:39Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c3567f8a359b7917dcffa442301f88ed0a75211f'/>
<id>urn:sha1:c3567f8a359b7917dcffa442301f88ed0a75211f</id>
<content type='text'>
Commit 05b016ecf5e7a "ARC: Setup Vector Table Base in early boot" moved
the Interrupt Vector Table setup out of arc_init_IRQ(), which is called
for all CPUs, into the entry point of the boot CPU only, breaking the
boot of the other CPUs.

Fix this by adding the same setup to the entry point of the non-boot
CPUs too.

read_arc_build_cfg_regs() printing the IVT Base Register didn't help
diagnose the cause, since it prints a synthetic value when the register
reads zero, which is totally bogus; so fix that to print the exact
register value.

[vgupta: Remove the now stale comment from header of arc_init_IRQ and
also added the commentary for halt-on-reset]

Cc: Gilad Ben-Yossef &lt;gilad@benyossef.com&gt;
Cc: &lt;stable@vger.kernel.org&gt; #3.11
Signed-off-by: Noam Camus &lt;noamc@ezchip.com&gt;
Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>ARC: fix new Section mismatches in build (post __cpuinit cleanup)</title>
<updated>2013-09-05T13:49:06Z</updated>
<author>
<name>Vineet Gupta</name>
<email>vgupta@synopsys.com</email>
</author>
<published>2013-09-05T13:49:06Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=07b9b65147d1d7cc03b9ff1e1f3b1c163ba4d067'/>
<id>urn:sha1:07b9b65147d1d7cc03b9ff1e1f3b1c163ba4d067</id>
<content type='text'>
---------------&gt;8--------------------
WARNING: vmlinux.o(.text+0x708): Section mismatch in reference from the
function read_arc_build_cfg_regs() to the function
.init.text:read_decode_cache_bcr()

WARNING: vmlinux.o(.text+0x702): Section mismatch in reference from the
function read_arc_build_cfg_regs() to the function
.init.text:read_decode_mmu_bcr()
---------------&gt;8--------------------

Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
<entry>
<title>ARC: Fix __udelay calculation</title>
<updated>2013-09-05T05:01:12Z</updated>
<author>
<name>Mischa Jonker</name>
<email>mjonker@synopsys.com</email>
</author>
<published>2013-08-30T09:56:25Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=7efd0da2d17360e1cef91507dbe619db0ee2c691'/>
<id>urn:sha1:7efd0da2d17360e1cef91507dbe619db0ee2c691</id>
<content type='text'>
Cast usecs to u64, to ensure that the (usecs * 4295 * HZ)
multiplication is 64 bit.

Initially, the (usecs * 4295 * HZ) part was done as a 32-bit
multiplication, with the result cast to 64 bit afterwards. This
truncated the upper bits of the product, causing a premature timeout
and hence a "DMA initialization error" in the stmmac Ethernet driver.
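A minimal userspace sketch (illustrative values, not the kernel source)
of why the operand must be widened before the multiply:

```c
/* Illustrative sketch, not the kernel code: 4295 approximates
 * 2^32/10^6 as in the kernel's delay math; values are made up. */

unsigned long long scale_32bit(unsigned int usecs, unsigned int hz)
{
    /* multiply runs entirely in 32 bits; the result is widened only
     * afterwards, so any high bits of the product are already gone */
    return (unsigned long long)(usecs * 4295 * hz);
}

unsigned long long scale_64bit(unsigned int usecs, unsigned int hz)
{
    /* widening one operand first makes the whole chain 64-bit */
    return (unsigned long long)usecs * 4295 * hz;
}
```

With usecs = 20000 and hz = 100 the 32-bit variant wraps around to
65408, while the 64-bit variant yields the full 8590000000.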

Signed-off-by: Mischa Jonker &lt;mjonker@synopsys.com&gt;
Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
<entry>
<title>ARC: Add read*_relaxed to asm/io.h</title>
<updated>2013-09-05T05:01:11Z</updated>
<author>
<name>Mischa Jonker</name>
<email>mjonker@synopsys.com</email>
</author>
<published>2013-08-28T18:32:50Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=6532b02fe5affb962b267e3c12e87ec16311aebf'/>
<id>urn:sha1:6532b02fe5affb962b267e3c12e87ec16311aebf</id>
<content type='text'>
Some drivers require these, and ARC didn't have them yet.

Signed-off-by: Mischa Jonker &lt;mjonker@synopsys.com&gt;
Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
<entry>
<title>ARC: [ASID] Track ASID allocation cycles/generations</title>
<updated>2013-08-30T16:12:19Z</updated>
<author>
<name>Vineet Gupta</name>
<email>vgupta@synopsys.com</email>
</author>
<published>2013-07-25T22:45:50Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=947bf103fcd2defa3bc4b7ebc6b05d0427bcde2d'/>
<id>urn:sha1:947bf103fcd2defa3bc4b7ebc6b05d0427bcde2d</id>
<content type='text'>
This helps remove the asid-to-mm reverse map.

While mm-&gt;context.id contains the ASID assigned to a process, our ASID
allocator also maintained the asid_mm_map[] reverse map. In a new
allocation cycle (mm-&gt;ASID &gt;= @asid_cache), the Round Robin ASID
allocator used this to check whether the new @asid_cache belonged to
some mm2 (from the previous cycle). If so, it could locate that mm via
the reverse map and mark its ASID as unallocated, forcing it to be
refreshed at the next switch_mm().

However, for SMP the reverse map would have to be maintained per CPU,
making it two-dimensional; hence it is removed.

With the reverse map gone, it is NOT possible to reach the current
assignee of an ASID. So we track the ASID allocation generation/cycle
and, on every switch_mm(), check whether the mm's ASID belongs to the
current generation; if not, it is refreshed.

(Based loosely on arch/sh implementation)
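A simplified single-CPU sketch of the generation-tagged scheme (names
are illustrative, not the actual arch/arc code):

```c
/* Illustrative sketch, not the arch/arc source: tag each mm's ASID
 * with the generation it was allocated in, so staleness can be
 * detected lazily at switch_mm() time. */

enum { MAX_ASID = 255 };

struct mm_ctx {
    unsigned int asid;   /* 0 means never allocated */
    unsigned int cycle;  /* generation the ASID was handed out in */
};

static unsigned int cur_cycle = 1;  /* bumped when ASIDs wrap */
static unsigned int next_asid = 1;  /* round-robin allocator state */

/* Called on every switch_mm(): keep the ASID if it is from the current
 * generation, otherwise allocate a fresh one. */
unsigned int get_mmu_context(struct mm_ctx *mm)
{
    if (mm->cycle == cur_cycle) {
        if (mm->asid != 0)
            return mm->asid;    /* still valid this generation */
    }

    /* wrapping starts a new generation, implicitly invalidating every
     * mm tagged with an older cycle (real code would flush the TLB) */
    if (next_asid > MAX_ASID) {
        next_asid = 1;
        cur_cycle = cur_cycle + 1;
    }

    mm->asid = next_asid;
    next_asid = next_asid + 1;
    mm->cycle = cur_cycle;
    return mm->asid;
}
```

The comparison against cur_cycle replaces the reverse-map walk: a stale
mm is simply one whose stored cycle no longer matches.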

Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
<entry>
<title>ARC: [ASID] activate_mm() == switch_mm()</title>
<updated>2013-08-30T16:12:19Z</updated>
<author>
<name>Vineet Gupta</name>
<email>vgupta@synopsys.com</email>
</author>
<published>2013-07-25T00:31:08Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c60115537c96d78a884d2a4bd78839a57266d48b'/>
<id>urn:sha1:c60115537c96d78a884d2a4bd78839a57266d48b</id>
<content type='text'>
ASID allocation changes/2

Use the fact that switch_mm() and activate_mm() are now exactly the
same code, while acknowledging the semantic difference in a comment.

Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
<entry>
<title>ARC: [ASID] get_new_mmu_context() to conditionally allocate new ASID</title>
<updated>2013-08-30T16:12:18Z</updated>
<author>
<name>Vineet Gupta</name>
<email>vgupta@synopsys.com</email>
</author>
<published>2013-07-24T20:53:45Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=3daa48d1d9bc44baa079d65e72ef2e3f1139ac03'/>
<id>urn:sha1:3daa48d1d9bc44baa079d65e72ef2e3f1139ac03</id>
<content type='text'>
ASID allocation changes/1

This patch does 2 things:

(1) get_new_mmu_context() NOW moves mm-&gt;ASID to a new value ONLY if it
    was from a prev allocation cycle/generation OR if mm had no ASID
    allocated (vs. before, when it would unconditionally move to a new
    ASID)

    Callers desiring an unconditional ASID update, e.g. local_flush_tlb_mm()
    (for the parent's address space invalidation at fork), need to first
    force the parent to an unallocated ASID.

(2) get_new_mmu_context() always sets the MMU PID reg with unchanged/new
    ASID value.

The gains are:
- consolidation of all ASID alloc logic into get_new_mmu_context()
- avoidance of code duplication in switch_mm() for PID reg setting
- enabling a future change to fold activate_mm() into switch_mm()

Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
<entry>
<title>ARC: [ASID] Refactor the TLB paranoid debug code</title>
<updated>2013-08-30T16:12:18Z</updated>
<author>
<name>Vineet Gupta</name>
<email>vgupta@synopsys.com</email>
</author>
<published>2013-08-23T12:07:18Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=5bd87adf9b2ae5fa1bb469c68029b4eec06d6e03'/>
<id>urn:sha1:5bd87adf9b2ae5fa1bb469c68029b4eec06d6e03</id>
<content type='text'>
- Asm code already has the SW and HW ASID values, so they can be
  passed directly to the printing routine.

Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
<entry>
<title>ARC: [ASID] Remove legacy/unused debug code</title>
<updated>2013-08-30T16:12:17Z</updated>
<author>
<name>Vineet Gupta</name>
<email>vgupta@synopsys.com</email>
</author>
<published>2013-07-26T01:11:50Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ade922f8e269115252d199bf6c524a10379cf716'/>
<id>urn:sha1:ade922f8e269115252d199bf6c524a10379cf716</id>
<content type='text'>
Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
<entry>
<title>ARC: MMUv4 preps/3 - Abstract out TLB Insert/Delete</title>
<updated>2013-08-30T04:52:48Z</updated>
<author>
<name>Vineet Gupta</name>
<email>vgupta@synopsys.com</email>
</author>
<published>2013-07-01T12:42:28Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=483e9bcb01432ce66448c214bd0afc231da48b4b'/>
<id>urn:sha1:483e9bcb01432ce66448c214bd0afc231da48b4b</id>
<content type='text'>
This reorganizes the current TLB operations into pseudo-ops to better
pair with MMUv4's native Insert/Delete operations.

Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
</content>
</entry>
</feed>
