|
If we can be sure that elevating the page_count on a pagecache page will
pin it, we can speculatively run this operation, and subsequently check to
see if we hit the right page rather than relying on holding a lock or
otherwise pinning a reference to the page.
This can be done if get_page/put_page behaves consistently throughout the
whole tree (ie. if we "get" the page after it has been used for something
else, we must be able to free it with a put_page).
Actually, there is a period where the count behaves differently: when the
page is free or if it is a constituent page of a compound page. We need
an atomic_inc_not_zero operation to ensure we don't try to grab the page
in either case.
This patch introduces the core locking protocol to the pagecache (ie.
adds page_cache_get_speculative, and tweaks some update-side code to make
it work).
Thanks to Hugh for pointing out an improvement to the algorithm: set
page_count to zero when we have control of all references, in order to
hold off speculative getters.
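As a rough sketch of the idea (not the exact code this patch adds), the
lockless lookup takes its reference with atomic_inc_not_zero() and then
re-checks that the pagecache slot still holds the same page, retrying if it
raced with removal or reuse:

	rcu_read_lock();
	page = radix_tree_lookup(&mapping->page_tree, index);
	/* Only succeeds if the count was non-zero, i.e. the page is not
	 * free and not a constituent page of a compound page. */
	if (page && !atomic_inc_not_zero(&page->_count))
		page = NULL;
	if (page && page != radix_tree_lookup(&mapping->page_tree, index)) {
		/* The slot changed under us: we pinned the wrong page.
		 * Drop the speculative reference and retry the lookup. */
		put_page(page);
		page = NULL;
	}
	rcu_read_unlock();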
[kamezawa.hiroyu@jp.fujitsu.com: fix migration_entry_wait()]
[hugh@veritas.com: fix add_to_page_cache]
[akpm@linux-foundation.org: repair a comment]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER
architecture does.
This enables us to cleanly fix the Calgary IOMMU issue that some devices
are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).
I think that per-device dma_mapping_ops support would also be helpful for
KVM people to support PCI passthrough, but Andi thinks that this makes it
difficult to support PCI passthrough (see the above thread). So I CC'ed
this to the KVM camp. Comments are appreciated.
A pointer to dma_mapping_ops is added to struct dev_archdata. If the
pointer is non-NULL, the DMA operations in asm/dma-mapping.h use it. If
it's NULL, the system-wide dma_ops pointer is used as before.
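In other words, the ops selection in the x86 dma-mapping helpers ends up
looking roughly like this (a simplified sketch, not the exact hunk):

	struct dev_archdata {
		/* existing fields omitted */
		struct dma_mapping_ops *dma_ops; /* NULL => use global dma_ops */
	};

	static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
	{
		/* Per-device ops take precedence; otherwise fall back to the
		 * system-wide dma_ops pointer, as before. */
		if (dev && dev->archdata.dma_ops)
			return dev->archdata.dma_ops;
		return dma_ops;
	}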
If it's useful for KVM people, I plan to implement a mechanism to register
a hook that is called when a new PCI (or DMA-capable) device is created (it
works with hot plugging). It enables IOMMUs to set up an appropriate
dma_mapping_ops per device.
The major obstacle is that dma_mapping_error doesn't take a pointer to the
device, unlike the other DMA operations, so x86 can't have dma_mapping_ops
per device. Note that all the POWER IOMMUs use the same dma_mapping_error
function, so this is not a problem for POWER, but x86 IOMMUs use different
dma_mapping_error functions.
The first patch adds the device argument to dma_mapping_error. The patch
is trivial but large since it touches lots of drivers and dma-mapping.h in
all the architectures.
This patch:
dma_mapping_error() doesn't take a pointer to the device, unlike the other
DMA operations, so we can't have dma_mapping_ops per device.
Note that POWER already has dma_mapping_ops per device, but all the POWER
IOMMUs use the same dma_mapping_error function. x86 IOMMUs use different
dma_mapping_error functions, so they need the device argument.
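Concretely, the interface change is roughly:

	/* before: no way to tell whose dma_mapping_ops produced the handle */
	int dma_mapping_error(dma_addr_t dma_addr);

	/* after: the device argument lets per-device dma_mapping_ops supply
	 * their own error check */
	int dma_mapping_error(struct device *dev, dma_addr_t dma_addr);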
[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (34 commits)
powerpc: Wireup new syscalls
Move update_mmu_cache() declaration from tlbflush.h to pgtable.h
powerpc/pseries: Remove kmalloc call in handling writes to lparcfg
powerpc/pseries: Update arch vector to indicate support for CMO
ibmvfc: Add support for collaborative memory overcommit
ibmvscsi: driver enablement for CMO
ibmveth: enable driver for CMO
ibmveth: Automatically enable larger rx buffer pools for larger mtu
powerpc/pseries: Verify CMO memory entitlement updates with virtual I/O
powerpc/pseries: vio bus support for CMO
powerpc/pseries: iommu enablement for CMO
powerpc/pseries: Add CMO paging statistics
powerpc/pseries: Add collaborative memory manager
powerpc/pseries: Utilities to set firmware page state
powerpc/pseries: Enable CMO feature during platform setup
powerpc/pseries: Split retrieval of processor entitlement data into a helper routine
powerpc/pseries: Add memory entitlement capabilities to /proc/ppc64/lparcfg
powerpc/pseries: Split processor entitlement retrieval and gathering to helper routines
powerpc/pseries: Remove extraneous error reporting for hcall failures in lparcfg
powerpc: Fix compile error with binutils 2.15
...
Fixed up conflict in arch/powerpc/platforms/52xx/Kconfig manually.
|
|
Enable ibmveth for Cooperative Memory Overcommitment (CMO). For this driver
it means calculating a desired amount of IO memory based on the current MTU
and updating this value with the bus when MTU changes occur. Because DMA
mappings can fail, we have added a bounce buffer for temporary cases where
the driver can not map IO memory for the buffer pool.
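The graceful-failure handling amounts to something like the following
sketch (shown with the older single-argument dma_mapping_error(); the flow
is illustrative, not the driver's exact code):

	dma_addr_t addr;

	addr = dma_map_single(dev, skb->data, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(addr)) {
		/* Entitlement exceeded or no resources available: give the
		 * buffer back quietly (no error message), or fall back to the
		 * preallocated bounce buffer for cases that must succeed. */
		dev_kfree_skb_any(skb);
		return;
	}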
The following changes are made to enable the driver for CMO:
* DMA mapping errors will not result in error messages if entitlement has
been exceeded and resources were not available.
* DMA mapping errors are handled gracefully, ibmveth_replenish_buffer_pool()
is corrected to check the return from dma_map_single and fail gracefully.
* The driver will have a get_desired_dma function defined to function
in a CMO environment.
* When the MTU is changed, the driver will update the device IO entitlement.
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Santiago Leon <santil@us.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Activates larger rx buffer pools when the MTU is changed to a larger
value. This patch de-activates the large rx buffer pools when the MTU
changes to a smaller value.
Signed-off-by: Santiago Leon <santil@us.ibm.com>
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
If we hack the virtio_net driver to always allocate full-sized (64k+)
skbuffs, the driver slows down (lguest numbers):
Time to receive 1GB (small buffers): 10.85 seconds
Time to receive 1GB (64k+ buffers): 24.75 seconds
Of course, large buffers use up more space in the ring, so we increase
that from 128 to 2048:
Time to receive 1GB (64k+ buffers, 2k ring): 16.61 seconds
If we recycle pages rather than using alloc_page/free_page:
Time to receive 1GB (64k+ buffers, 2k ring, recycle pages): 10.81 seconds
This demonstrates that with efficient allocation, we don't need to
have a separate "small buffer" queue.
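The recycling trick is essentially a per-device free list of pages, chained
through page->private (a sketch assuming a 'pages' list head in the device
state; names are illustrative):

	static void give_a_page(struct virtnet_info *vi, struct page *page)
	{
		/* Stash the page on the device's free list instead of
		 * returning it to the page allocator. */
		page->private = (unsigned long)vi->pages;
		vi->pages = page;
	}

	static struct page *get_a_page(struct virtnet_info *vi, gfp_t gfp_mask)
	{
		struct page *p = vi->pages;

		if (p)
			vi->pages = (struct page *)p->private;
		else
			p = alloc_page(gfp_mask);
		return p;
	}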
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Finally this patch lets virtio_net receive GSO packets in addition
to sending them. This can definitely be optimised for the non-GSO
case. For comparison, the Xen approach stores one page in each skb
and uses subsequent skbs' pages to construct an SG skb instead of
preallocating the maximum number of pages per skb.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (added feature bits)
|
|
This patch adds some basic ethtool operations to virtio_net so
I could test SG without GSO (which was really useful because TSO
turned out to be buggy :)
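A sketch of the sort of hookup this implies, using the generic
ethtool_op_* helpers of the time (struct and registration shown here are
illustrative, not necessarily the driver's exact code):

	static struct ethtool_ops virtnet_ethtool_ops = {
		.set_tx_csum	= ethtool_op_set_tx_csum,
		.set_sg		= ethtool_op_set_sg,
		.get_link	= ethtool_op_get_link,
	};

	/* in the probe path: */
	SET_ETHTOOL_OPS(dev, &virtnet_ethtool_ops);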
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (remove MTU setting)
|
|
On Mon, 2008-05-26 at 17:42 +1000, Rusty Russell wrote:
> If we fail to transmit a packet, we assume the queue is full and put
> the skb into last_xmit_skb. However, if more space frees up before we
> xmit it, we loop, and the result can be transmitting the same skb twice.
>
> Fix is simple: set skb to NULL if we've used it in some way, and check
> before sending.
...
> diff -r 564237b31993 drivers/net/virtio_net.c
> --- a/drivers/net/virtio_net.c Mon May 19 12:22:00 2008 +1000
> +++ b/drivers/net/virtio_net.c Mon May 19 12:24:58 2008 +1000
> @@ -287,21 +287,25 @@ again:
> free_old_xmit_skbs(vi);
>
> /* If we has a buffer left over from last time, send it now. */
> - if (vi->last_xmit_skb) {
> + if (unlikely(vi->last_xmit_skb)) {
> if (xmit_skb(vi, vi->last_xmit_skb) != 0) {
> /* Drop this skb: we only queue one. */
> vi->dev->stats.tx_dropped++;
> kfree_skb(skb);
> + skb = NULL;
> goto stop_queue;
> }
> vi->last_xmit_skb = NULL;
With this, we may drop an skb and then later in the function discover that
we could have sent it after all. Poor wee skb :)
How about the incremental patch below?
Cheers,
Mark.
Subject: [PATCH] virtio_net: Delay dropping tx skbs
Currently we drop the skb in start_xmit() if we have a
queued buffer and fail to transmit it.
However, if we delay dropping it until we've stopped the
queue and enabled the tx notification callback, then there
is a chance space might become available for it.
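Roughly, the reordered start_xmit flow becomes (a sketch;
tx_space_available() is a hypothetical stand-in for the real
ring-space/callback check):

	if (vi->last_xmit_skb && xmit_skb(vi, vi->last_xmit_skb) != 0)
		goto stop_queue;	/* keep last_xmit_skb, don't drop it yet */
	vi->last_xmit_skb = NULL;
	/* ... try to queue the new skb, or stash it in last_xmit_skb ... */
	return NETDEV_TX_OK;

stop_queue:
	netif_stop_queue(dev);
	if (tx_space_available(vi)) {
		/* Completions raced with us: restart and retry. */
		netif_start_queue(dev);
		goto again;
	}
	/* The ring really is full: only now drop the leftover skb. */
	if (vi->last_xmit_skb) {
		dev->stats.tx_dropped++;
		kfree_skb(vi->last_xmit_skb);
		vi->last_xmit_skb = NULL;
	}
	return NETDEV_TX_OK;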
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband:
MAINTAINERS: Remove Glenn Streiff from NetEffect entry
mlx4_core: Improve error message when not enough UAR pages are available
IB/mlx4: Add support for memory management extensions and local DMA L_Key
IB/mthca: Keep free count for MTT buddy allocator
mlx4_core: Keep free count for MTT buddy allocator
mlx4_code: Add missing FW status return code
IB/mlx4: Rename struct mlx4_lso_seg to mlx4_wqe_lso_seg
mlx4_core: Add module parameter to enable QoS support
RDMA/iwcm: Remove IB_ACCESS_LOCAL_WRITE from remote QP attributes
IPoIB: Include err code in trace message for ib_sa_path_rec_get() failures
IB/sa_query: Check if sm_ah is NULL in ib_sa_remove_one()
IB/ehca: Release mutex in error path of alloc_small_queue_page()
IB/ehca: Use default value for Local CA ACK Delay if FW returns 0
IB/ehca: Filter PATH_MIG events if QP was never armed
IB/iser: Add support for RDMA_CM_EVENT_ADDR_CHANGE event
RDMA/cma: Add RDMA_CM_EVENT_TIMEWAIT_EXIT event
RDMA/cma: Add RDMA_CM_EVENT_ADDR_CHANGE event
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu:
m68knommu: put ColdFire head code into .text.head section
m68knommu: remove last use of CONFIG_FADS and CONFIG_RPXCLASSIC
m68knommu: remove RPXCLASSIC from the m68k tree
m68knommu: fec: remove FADS
m68knommu: MCF5307 PIT GENERIC_CLOCKEVENTS support
m68knommu: add read_barrier_depends() and irqs_disabled_flags()
m68knommu: add byteswap assembly opcode for ISA A+
m68knommu: add ffs and __ffs plattform which support ISA A+ or ISA C
m68knommu: add sched_clock() for the DMA timer
m68knommu: complete generic time
m68knommu: move code within time.c
m68knommu: m68knommu: add old stack trace method
m68knommu: Add Coldfire DMA Timer support
m68knommu: defconfig for M5407C3 board
m68knommu: defconfig for M5307C3 board
m68knommu: defconfig for M5275EVB board
m68knommu: defconfig for M5249EVB board
m68knommu: change to a configs directory for board configurations
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6:
pkt_sched: sch_sfq: dump a real number of flows
atm: [fore200e] use MODULE_FIRMWARE() and other suggested cleanups
netfilter: make security table depend on NETFILTER_ADVANCED
tcp: Clear probes_out more aggressively in tcp_ack().
e1000e: fix e1000_netpoll(), remove extraneous e1000_clean_tx_irq() call
net: Update entry in af_family_clock_key_strings
netdev: Remove warning from __netif_schedule().
sky2: don't stop queue on shutdown
|
|
On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit
boundary. For example:
u64 val = PAGE_ALIGN(size);
always returns a value < 4GB even if size is greater than 4GB.
The problem resides in PAGE_MASK definition (from include/asm-x86/page.h for
example):
#define PAGE_SHIFT 12
#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE-1))
...
#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
The "~" is performed on a 32-bit value, so anything ANDed with PAGE_MASK
that is greater than 4GB will be truncated to the 32-bit boundary.
Using the ALIGN() macro seems to be the right way, because it uses
typeof(addr) for the mask.
Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h and into
include/linux/mm.h.
See also lkml discussion: http://lkml.org/lkml/2008/6/11/237
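For reference, the type-safe ALIGN() that PAGE_ALIGN() is switched to looks
roughly like (as in include/linux/kernel.h):

#define __ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))
#define ALIGN(x, a)		__ALIGN_MASK(x, (typeof(x))(a) - 1)

#define PAGE_ALIGN(addr)	ALIGN(addr, PAGE_SIZE)

With a u64 addr, (typeof(x))(a) - 1 is a 64-bit mask, so the "~" keeps the
upper 32 bits and values above 4GB are no longer truncated.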
[akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
[akpm@linux-foundation.org: fix v850]
[akpm@linux-foundation.org: fix powerpc]
[akpm@linux-foundation.org: fix arm]
[akpm@linux-foundation.org: fix mips]
[akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
[akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
[akpm@linux-foundation.org: fix powerpc]
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (85 commits)
[ARM] pxa: add base support for PXA930 Handheld Platform (aka SAAR)
[ARM] pxa: add base support for PXA930 Evaluation Board (aka TavorEVB)
[ARM] pxa: add base support for PXA930 (aka Tavor-P)
[ARM] Update mach-types
[ARM] pxa: make littleton to use the new smc91x platform data
[ARM] pxa: make zylonite to use the new smc91x platform data
[ARM] pxa: make mainstone to use the new smc91x platform data
[ARM] pxa: make lubbock to use new smc91x platform data
[NET] smc91x: prepare SMC_USE_PXA_DMA to be specified in platform data
[NET] smc91x: prepare for SMC_IO_SHIFT to be a platform configurable variable
[NET] smc91x: add SMC91X_NOWAIT flag to platform data
[NET] smc91x: favor the use of SMC91X_USE_* instead of SMC_CAN_USE_*
[NET] smc91x: remove "irq_flags" from "struct smc91x_platdata"
[ARM] 5146/1: pxa2xx: convert all boards to call pxa2xx_transceiver_mode helper
Support for LCD on e740 e750 e400 and e800 e-series PDAs
E-series UDC support
PXA UDC - allow use of inverted GPIO for pullup
Add e350 support
Fix broken e-series build
E-series GPIO / IRQ definitions.
...
|
|
Evgeniy Polyakov noticed that drivers/net/e1000e/netdev.c:e1000_netpoll()
was calling e1000_clean_tx_irq() without taking the TX lock.
David Miller suggested removing the call altogether, since in this
callpath there are periodic calls to ->poll() anyway, which will do
e1000_clean_tx_irq() and garbage-collect any finished TX ring
descriptors.
This fix solved the e1000e+netconsole crashes I've been seeing:
=============================================================================
BUG skbuff_head_cache: Poison overwritten
-----------------------------------------------------------------------------
INFO: 0xf658ae9c-0xf658ae9c. First byte 0x6a instead of 0x6b
INFO: Allocated in __alloc_skb+0x2c/0x110 age=0 cpu=0 pid=5098
INFO: Freed in __kfree_skb+0x31/0x80 age=0 cpu=1 pid=4440
INFO: Slab 0xc16cc140 objects=16 used=1 fp=0xf658ae00 flags=0x400000c3
INFO: Object 0xf658ae00 @offset=3584 fp=0xf658af00
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If an mlx4 device with default FW (which gives a UAR BAR size of 8 MB)
is used in a system with 64 KB pages, then there are only 8192/64==128
UAR pages available. However, the first 128 UAR pages are reserved
for use with event queue doorbells, so no UAR pages are available to
do anything else with, which means that the driver cannot work.
The current driver fails with a fairly cryptic "Failed to allocate
driver access region, aborting" message in this situation. Fix the
driver to detect the problem earlier and print out a clearer
description of the problem and a suggestion of how to fix it (use a
new firmware image).
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
Add support for the following operations to mlx4 when device firmware
supports them:
- Send with invalidate and local invalidate send queue work requests;
- Allocate/free fast register MRs;
- Allocate/free fast register MR page lists;
- Fast register MR send queue work requests;
- Local DMA L_Key.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
They have never been used in this port of the driver; it has only
ever been used on the ColdFire SoC ethernet core.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
|
|
These ifdefs are leftovers from the time when the driver was running
on a ppc.
Signed-off-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
|
|
I found config FADS only in ppc/Kconfig. Bye bye relic.
Signed-off-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
|
|
It is unnecessary to stop the queue and turn off the carrier in the
shutdown routine. With the new netdev_queue this causes warnings.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (82 commits)
ipw2200: Call netif_*_queue() interfaces properly.
netxen: Needs to include linux/vmalloc.h
[netdrvr] atl1d: fix !CONFIG_PM build
r6040: rework init_one error handling
r6040: bump release number to 0.18
r6040: handle RX fifo full and no descriptor interrupts
r6040: change the default waiting time
r6040: use definitions for magic values in descriptor status
r6040: completely rework the RX path
r6040: call napi_disable when puting down the interface and set lp->dev accordingly.
mv643xx_eth: fix NETPOLL build
r6040: rework the RX buffers allocation routine
r6040: fix scheduling while atomic in r6040_tx_timeout
r6040: fix null pointer access and tx timeouts
r6040: prefix all functions with r6040
rndis_host: support WM6 devices as modems
at91_ether: use netstats in net_device structure
sfc: Create one RX queue and interrupt per CPU package by default
sfc: Use a separate workqueue for resets
sfc: I2C adapter initialisation fixes
...
|
|
netif_carrier_{on,off}() handles starting and stopping packet
flow into the driver. So there is no reason to invoke netif_stop_queue()
and netif_wake_queue() in response to link status events.
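In other words, a link status handler only needs something like (sketch):

	static void example_link_change(struct net_device *dev, int link_up)
	{
		if (link_up)
			netif_carrier_on(dev);	/* packet flow resumes */
		else
			netif_carrier_off(dev);	/* packet flow stops */
		/* No netif_stop_queue()/netif_wake_queue() here: queue state
		 * is for ring-full flow control, not for link state. */
	}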
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6
|
|
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6
|
|
This patch reworks the error handling in r6040_init_one
in order not to leak resources and to correctly unmap and release
the PCI regions of the MAC. Also prefix printks with the driver name
for clarity.
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This patch bumps the release number of the r6040 driver. There have been
quite a few versions of it out there, but this one is the one
people should report bugs against.
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This patch allows the MAC to handle the RX FIFO full
and no-descriptor-available interrupts. While we are at it,
replace the TX interrupt with its corresponding definition.
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This patch changes the default waiting time for a packet, which,
along with our previous r6040_rx path, was causing huge delays
with another host (160 to 230 ms).
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Define all the descriptor status bits the MAC can set.
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This patch completely reworks the RX path in order to be
more accurate about what is going on with the MAC.
We no longer read the error from the MLSR register; instead we read
the descriptor status register, which reflects the error per descriptor.
We now allocate skbs on the fly in r6040_rx, and we handle allocation
failure instead of simply dropping the packet. Remove the
rx_free_desc counter of the r6040_private structure since we
allocate skbs in the RX path.
r6040_rx_buf_alloc is now unused and is removed.
Signed-Off-By: Joerg Albert <jal2@gmx.de>
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
We did not call napi_disable when putting down the interface,
which should be done. Finally, initialize lp->dev when everything
is set.
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Joseph Fannin <jfannin@gmail.com> and Takashi Iwai <tiwai@suse.de>
noticed that commit 073a345c04b01da0cc5b79ac7be0c7c8b1691ef5
("mv643xx_eth: clarify irq masking and unmasking") broke the
mv643xx_eth build when NETPOLL is enabled, due to it not renaming
one instance of INT_CAUSE_EXT in mv643xx_eth_netpoll(). This patch
takes care of that instance as well.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Cc: Dale Farnsworth <dale@farnsworth.org>
Cc: Joseph Fannin <jfannin@gmail.com>
Cc: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Rework the RX buffer allocation function so that we do not
leak memory in the case where we could not allocate skbs for the
RX path. Propagate the errors to the r6040_up function,
where we call the RX buffer allocation function.
Also rename the r6040_alloc_txbufs function to
r6040_init_txbufs, to reflect what it really does.
Signed-Off-By: Joerg Albert <jal2@gmx.de>
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Add a helper function which only modifies the R6040 MAC registers;
use it when we time out and on adapter initialization. Fix
the scheduling-while-atomic bug in the timeout routine due
to the reallocation of rx/tx buffers.
Signed-Off-By: Joerg Albert <jal2@gmx.de>
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This patch fixes a null pointer access in r6040_rx due
to lp->dev not being initialized.
Fix the TX timeouts: the TX irq was not re-enabled when handling an RX irq.
Signed-Off-By: Joerg Albert <jal2@gmx.de>
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Prefix all functions inside the r6040 driver with r6040 to
avoid namespace clashing.
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This patch allows Windows Mobile 6 devices to be used for
tethering -- that is, used as modems. It was requested by
AdamW in kernel bugzilla:
http://bugzilla.kernel.org/show_bug.cgi?id=11119
and Mandriva kernel-discuss list. It is tested and confirmed
to work by Peterl:
http://forum.eeeuser.com/viewtopic.php?pid=323543#p323543
This patch is based on the patch in the above kernel bugzilla,
which is from the usb-rndis-lite tree.
[ dbrownell@users.sourceforge.net: misc fixes ]
Signed-off-by: Thomas Backlund <tmb@mandriva.org>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Use net_device_stats from the net_device structure instead of a local copy.
Signed-off-by: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
Tested-by: Marc Pignat <marc.pignat@hevs.ch>
Acked-by: Andrew Victor <linux@maxim.org.za>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Using multiple cores in the same package to handle received traffic
does not appear to provide a performance benefit. Therefore use CPU
topology information to count CPU packages and use that as the default
number of RX queues and interrupts. We rely on interrupt balancing to
spread the interrupts across packages.
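A sketch of the package-counting idea, assuming the generic topology
helpers (not necessarily the driver's exact code):

	static unsigned int count_cpu_packages(void)
	{
		unsigned int cpu, other, count = 0;

		for_each_online_cpu(cpu) {
			/* Count this CPU only if no lower-numbered online CPU
			 * shares its physical package. */
			for_each_online_cpu(other) {
				if (other >= cpu) {
					count++;
					break;
				}
				if (topology_physical_package_id(other) ==
				    topology_physical_package_id(cpu))
					break;
			}
		}
		return count;
	}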
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This avoids deadlock in case a reset is triggered during self-test.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
As recommended by Jean Delvare:
- Increase timeout to 50 ms
- Leave adapter class clear so that unwanted drivers do not probe our bus
- Use strlcpy() for name initialisation
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This patch makes the e1000 driver ioport-free.
It corrects the behavior of the probe function so as not to request ioport
resources as long as they are not really needed. This is based on the
ioport-free patch for the e1000 driver from Auke Kok and Tomohiro Kusumi.
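A sketch of the ioport-free probe pattern with the generic PCI helpers
(whether e1000 ends up with exactly this sequence is not shown here):

	int bars, err;

	/* Request and enable only the memory BARs up front. */
	bars = pci_select_bars(pdev, IORESOURCE_MEM);
	err = pci_enable_device_mem(pdev);
	if (err)
		return err;
	err = pci_request_selected_regions(pdev, bars, "e1000");
	if (err) {
		pci_disable_device(pdev);
		return err;
	}
	/* I/O BARs would be requested later, only if the hardware
	 * actually needs port I/O. */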
Signed-off-by: Tomohiro Kusumi <kusumi.tomohiro@jp.fujitsu.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Compile-tested only.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
The email linux-nics@intel.com is no longer available, remove all
references.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Redefine the DPRINTK macro using do {} while (0)
Change __FUNCTION__ to __func__
Put struct braces {} on separate lines
Surround negative constants with ()
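The do {} while (0) shape referred to above is the usual one: it makes the
macro expand to a single statement so it nests safely under if/else
(illustrative, not the driver's exact macro):

#define DPRINTK(fmt, args...)					\
do {								\
	printk(KERN_DEBUG "%s: " fmt, __func__, ##args);	\
} while (0)

	/* e.g. this now compiles and behaves as expected: */
	if (err)
		DPRINTK("reset failed: %d\n", err);
	else
		DPRINTK("reset ok\n");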
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|