Commit 93e6f119c0ce ("ipc/mqueue: cleanup definition names and
locations") added global hardcoded limits to the amount of message
queues that can be created. While these limits are per-namespace,
reality is that it ends up breaking userspace applications.
Historically users have, at least in theory, been able to create up to
INT_MAX queues, and limiting it to just 1024 is way too low and dramatic
for some workloads and use cases. For instance, Madars reports:
"This update imposes bad limits on our multi-process application. As
our app uses approaches that each process opens its own set of queues
(usually something about 3-5 queues per process). In some scenarios
we might run up to 3000 processes or more (which of-course for linux
is not a problem). Thus we might need up to 9000 queues or more. All
processes run under one user."
Other affected users can be found in launchpad bug #1155695:
https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695
Instead of increasing this limit, revert it entirely and fall back to the
original way of dealing with queue limits -- where once a user's resource
limit is reached, and all memory is used, new queues cannot be created.
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Reported-by: Madars Vitolins <m@silodev.com>
Acked-by: Doug Ledford <dledford@redhat.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: <stable@vger.kernel.org> [3.5+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit c02cecb92ed4 ("ARM: orion: move platform_data definitions")
moved the file to the current location but forgot to remove the pointer
to its previous location. Clean it up. While at it, also change the header
file protection macros appropriately.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Chris Ball <chris@printf.net>
|
|
Commit 1ef21f6343ff ("ARM: msm: move platform_data definitions")
moved the file to the current location but forgot to remove the pointer
to its previous location. Clean it up. While at it, also change the header
file protection macros appropriately.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Chris Ball <chris@printf.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci into next/drivers
A patch to break the DaVinci NAND driver's
dependency on mach-davinci. Required for reuse
of the driver on other platforms (keystone).
* tag 'davinci-for-v3.15/nand' of git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci:
ARM: davinci: aemif: get rid of davinci-nand driver dependency on aemif
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Send the Channel Availability Check (CAC) time as a parameter
of the start_radar_detection() callback.
Get the CAC time from the regulatory database.
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Introduce DFS CAC time as a regd param, configured per REG_RULE and
set per channel in cfg80211. DFS CAC time is closely connected with the
regulatory database configuration. Instead of using hardcoded values,
get the DFS CAC time from the regulatory database. Pass the DFS CAC time
to user mode (mainly for iw reg get, iw list, iw info). Allow setting the
DFS CAC time via CRDA. Add support for the internal regulatory database.
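As a rough sketch (the field name here is an assumption based on the
description, not necessarily the exact upstream layout), a regulatory
rule carrying a per-rule CAC time could look like:

    struct ieee80211_reg_rule {
            struct ieee80211_freq_range freq_range;
            struct ieee80211_power_rule power_rule;
            u32 flags;
            u32 dfs_cac_ms;     /* per-rule CAC time in ms, 0 = default */
    };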
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
[rewrap commit log]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
As mount() and kill_sb() are not a one-to-one match, we shouldn't get
the ns refcnt unconditionally in sysfs_mount(); instead we should get
the refcnt only when kernfs_mount() has allocated a new superblock.
v2:
- Changed the name of the new argument, suggested by Tejun.
- Made the argument optional, suggested by Tejun.
v3:
- Make the new argument as second-to-last arg, suggested by Tejun.
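A minimal sketch of the resulting sysfs_mount() flow, assuming a boolean
out-parameter as the second-to-last argument (the exact kernfs_mount()
signature may differ):

    static struct dentry *sysfs_mount(struct file_system_type *fs_type,
                                      int flags, const char *dev_name,
                                      void *data)
    {
            struct dentry *root;
            void *ns;
            bool new_sb;

            ns = kobj_ns_grab_current(KOBJ_NS_TYPE_NET);
            root = kernfs_mount(fs_type, flags, sysfs_root, &new_sb, ns);
            /* only keep the ns refcnt if a new superblock was allocated */
            if (IS_ERR(root) || !new_sb)
                    kobj_ns_drop(KOBJ_NS_TYPE_NET, ns);
            return root;
    }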
Signed-off-by: Li Zefan <lizefan@huawei.com>
Acked-by: Tejun Heo <tj@kernel.org>
---
fs/kernfs/mount.c | 8 +++++++-
fs/sysfs/mount.c | 5 +++--
include/linux/kernfs.h | 9 +++++----
3 files changed, 15 insertions(+), 7 deletions(-)
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
For optimization, task_lock() is additionally used to protect
task->cgroups. The optimization is pretty dubious as either
css_set_rwsem is grabbed anyway or PF_EXITING already protects
task->cgroups. It adds only overhead and confusion at this point.
Let's drop task_[un]lock() and update comments accordingly.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
|
|
Currently, process / task migration is a single operation which may
fail depending on memory pressure or the involved controllers'
->can_attach() callbacks. One problem with this approach is migration
of multiple targets. It's impossible to tell beforehand whether a given
target will be successfully migrated, and cgroup core can't keep
track of enough state to roll back after an intermediate failure.
This is already an issue with cgroup_transfer_tasks(). Also, we're
gonna need multiple target migration for unified hierarchy.
This patch splits migration into four stages -
cgroup_migrate_add_src(), cgroup_migrate_prepare_dst(),
cgroup_migrate() and cgroup_migrate_finish(), where
cgroup_migrate_prepare_dst() performs all the operations which may
fail due to allocation failure without actually migrating the target.
The four separate stages mean that, disregarding ->can_attach()
failures, the success or failure of multi target migration can be
determined before performing any actual migration. If preparations of
all targets succeed, the whole thing will succeed. If not, the whole
operation can fail without any side-effect.
Since the previous patch switched to css_set->mg_tasks for keeping
track of migration targets, the only thing which may need memory
allocation during migration is the target css_sets.
cgroup_migrate_prepare_dst() pins all source and target css_sets and
links them up. Note that this
can be performed without holding threadgroup_lock even if the target
is a process. As long as cgroup_mutex is held, no new css_set can be
put into play.
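A sketch of the resulting call sequence (signatures simplified; the
real functions take additional arguments):

    LIST_HEAD(preloaded_csets);
    int ret;

    /* 1. pin the source css_sets of all target tasks */
    cgroup_migrate_add_src(task_css_set(task), dst_cgrp, &preloaded_csets);

    /* 2. the only stage that may fail on memory allocation:
     *    allocate and pin the matching destination css_sets */
    ret = cgroup_migrate_prepare_dst(dst_cgrp, &preloaded_csets);

    /* 3. commit the migration; no allocations happen past this point */
    if (!ret)
            ret = cgroup_migrate(dst_cgrp, task, threadgroup);

    /* 4. unpin everything that was preloaded, success or not */
    cgroup_migrate_finish(&preloaded_csets);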
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
|
|
Currently, while migrating tasks from one cgroup to another,
cgroup_attach_task() builds a flex array of all target tasks;
unfortunately, this has a couple of issues.
* Flex array has a size limit. On 64bit, struct task_and_cgroup is
24 bytes, making the flex element limit around 87k. It is a high
number but not impossible to hit. This means that the current
cgroup implementation can't migrate a process with more than 87k
threads.
* Process migration involves memory allocation whose size is dependent
on the number of threads the process has. This means that cgroup
core can't guarantee success or failure of multi-process migrations
as memory allocation failure can happen in the middle. This is in
part because cgroup can't grab threadgroup locks of multiple
processes at the same time, so when there are multiple processes to
migrate, it is impossible to tell how many tasks are to be migrated
beforehand.
Note that this already affects cgroup_transfer_tasks(). cgroup
currently cannot guarantee atomic success or failure of the
operation. It may fail in the middle and after such failure cgroup
doesn't have enough information to roll back properly. It just
aborts with some tasks migrated and others not.
To resolve the situation, this patch updates the migration path to use
task->cg_list to track target tasks. The previous patch already added
css_set->mg_tasks and updated iterations in non-migration paths to
include them during task migration. This patch updates migration path
to actually make use of it.
Instead of being put onto a flex_array, each target task is moved from
its css_set->tasks list to css_set->mg_tasks, and the migration path
keeps track of all the source css_sets and the associated cgroups.
Once all source css_sets are determined, the destination css_set for
each is determined, linked to the matching source css_set and put on a
separate list.
To iterate the target tasks, the migration path just needs to iterate
through either the source or target css_sets, depending on whether the
migration has been committed or not, and the tasks on their ->mg_tasks
lists. cgroup_taskset is updated to contain the list_heads for source
and target css_sets and the iteration cursor. cgroup_taskset_*() are
accordingly updated to walk through css_sets and their ->mg_tasks.
This resolves the above listed issues with moderate additional
complexity.
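In rough outline (field names here are assumptions based on the
description above), staging a single task looks like:

    /* stage @task: park it on its css_set's ->mg_tasks list and
     * remember the source css_set on the taskset's list */
    struct css_set *cset = task_css_set(task);

    list_move_tail(&task->cg_list, &cset->mg_tasks);
    if (list_empty(&cset->mg_node))
            list_add_tail(&cset->mg_node, &tset->src_csets);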
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
|
|
Currently, while migrating tasks from one cgroup to another,
cgroup_attach_task() builds a flex array of all target tasks;
unfortunately, this has a couple of issues.
* Flex array has a size limit. On 64bit, struct task_and_cgroup is
24 bytes, making the flex element limit around 87k. It is a high
number but not impossible to hit. This means that the current
cgroup implementation can't migrate a process with more than 87k
threads.
* Process migration involves memory allocation whose size is dependent
on the number of threads the process has. This means that cgroup
core can't guarantee success or failure of multi-process migrations
as memory allocation failure can happen in the middle. This is in
part because cgroup can't grab threadgroup locks of multiple
processes at the same time, so when there are multiple processes to
migrate, it is impossible to tell how many tasks are to be migrated
beforehand.
Note that this already affects cgroup_transfer_tasks(). cgroup
currently cannot guarantee atomic success or failure of the
operation. It may fail in the middle and after such failure cgroup
doesn't have enough information to roll back properly. It just
aborts with some tasks migrated and others not.
To resolve the situation, we're going to use task->cg_list during
migration too. Instead of building a separate array, target tasks
will be linked into a dedicated migration list_head on the owning
css_set. Tasks on the migration list are treated the same as tasks on
the usual tasks list; however, being on a separate list allows the
cgroup migration code path to keep track of the target tasks by simply
keeping the list of css_sets with tasks being migrated, making
unpredictable dynamic allocation unnecessary.
In preparation for such a migration path update, this patch introduces
the css_set->mg_tasks list and updates css_set task iterations so that
they walk both css_set->tasks and ->mg_tasks. Note that ->mg_tasks
isn't used yet.
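The iteration update amounts to chaining the two lists; a sketch of the
advance step (assuming iterator fields named along these lines):

    /* advance to the next task: fall through from ->tasks into
     * ->mg_tasks, and move to the next css_set when both are done */
    l = it->task_pos->next;
    if (l == it->tasks_head)
            l = it->mg_tasks_head->next;
    if (l == it->mg_tasks_head)
            css_advance_task_iter(it);      /* this cset is exhausted */
    else
            it->task_pos = l;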
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
|
|
Remove file references rendered invalid due to relocation.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
Remove file references rendered invalid due to relocation.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
A few code cleanups and optimizations. In addition, drop
snd_device_disconnect(), which isn't used at all, and drop the return
values from snd_device_free*().
Another slight difference from this change is that the device state now
always becomes SNDRV_DEV_REGISTERED, no matter whether the dev_register
op is present or not. It's for better consistency. There should be
no impact on the current tree, as the state isn't checked.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Basically, the device type specifies the priority of the device to be
registered / freed, too. However, the priority value isn't well
utilized; it's only checked as a group. This results in an
inconsistent register and free order (where each of them should run in
the reverse direction of the other).
This patch simplifies the device list management code by simply
inserting a list entry at creation time in incremental order of the
priority value. Since we can just follow the links for the register,
disconnect and free calls, we don't have to specify the group, so the
enum definitions are simplified as well.
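A sketch of the sorted insertion (assuming the usual snd_device list
fields); keeping the list sorted by type lets registration walk it
forward and freeing walk it backward:

    /* insert the new entry so the list stays sorted by type */
    struct list_head *p;

    list_for_each_prev(p, &card->devices) {
            struct snd_device *pdev =
                    list_entry(p, struct snd_device, list);
            if ((unsigned int)pdev->type <= (unsigned int)dev->type)
                    break;
    }
    list_add(&dev->list, p);    /* after the last lower-or-equal type */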
The externally visible change is that the priorities of some object
types are revisited. For example, now the SNDRV_DEV_LOWLEVEL object
is registered before the others (control, PCM, etc.) and, conversely,
released after them. Similarly, SNDRV_DEV_CODEC now has a lower
priority than SNDRV_DEV_BUS to ensure the dependency.
Also, the unused SNDRV_DEV_TOPLEVEL, SNDRV_DEV_LOWLEVEL_PRE and
SNDRV_DEV_LOWLEVEL_NORMAL are removed as a cleanup.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Just like PCM, allow hwdep to be assigned to a different parent device
than the card. It'll be used for the HD-audio codec device in the
later patches.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Instead of calling device_create_file() each time, create the groups
of sysfs attribute files at once in the normal way. Add a new helper
function, snd_get_device(), to return the associated device object,
so that we can handle the sysfs addition locally.
Since the sysfs file addition is now done differently, the
snd_add_device_sysfs_file() helper function is removed.
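For illustration, the usual group-based pattern looks like this (the
attribute name is made up, not one of the actual ALSA attributes):

    static ssize_t id_show(struct device *dev,
                           struct device_attribute *attr, char *buf)
    {
            return sprintf(buf, "example\n");
    }
    static DEVICE_ATTR_RO(id);

    static struct attribute *card_attrs[] = {
            &dev_attr_id.attr,
            NULL,
    };
    ATTRIBUTE_GROUPS(card);     /* generates card_groups */

    /* assign before the device is registered, instead of calling
     * device_create_file() per attribute afterwards: */
    dev->groups = card_groups;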
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
|
|
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Add a lockdep_nfnl_is_held() function and an nfnl_dereference() macro for
RCU dereferences protected by an NFNL subsystem mutex.
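A sketch of what such a macro amounts to (the exact definition may
differ):

    int lockdep_nfnl_is_held(__u8 subsys_id);

    /* rcu_dereference() for pointers protected by an nfnl mutex */
    #define nfnl_dereference(p, ss) \
            rcu_dereference_protected(p, lockdep_nfnl_is_held(ss))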
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Commit 7053aee26a35 ("fsnotify: do not share events between notification
groups") used an overflow event statically allocated in a group with the
size of the generic notification event. This causes problems because
some code looks at type-specific parts of the event structure, gets
confused by the random data it sees there, and crashes.
Fix the problem by allocating the overflow event with a type
corresponding to the group type so the code cannot get confused.
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
|
|
into clk-fixes
|
|
This was used by vti and is replaced by the IPsec protocol
multiplexer hooks. It is now unused, so remove it.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
|
|
IPsec vti_rcv needs to remember the tunnel pointer so that it
can be checked later in the vti_rcv_cb callback. So add
this pointer to the IPsec common buffer, initialize
it, and check it to avoid transport state matching of
a tunneled packet.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
|
|
This patch adds an IPsec protocol multiplexer. With this
it is possible to add alternative protocol handlers, as
needed for IPsec virtual tunnel interfaces.
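Roughly (struct and function names here are assumptions, not
necessarily the exact ones introduced), a handler registers itself per
IP protocol like so:

    struct xfrm4_protocol {
            int (*handler)(struct sk_buff *skb);
            int (*cb_handler)(struct sk_buff *skb, int err);
            int (*err_handler)(struct sk_buff *skb, u32 info);
            int priority;       /* dispatch order among handlers */
    };

    /* e.g. vti registers its own ESP handler this way */
    int xfrm4_protocol_register(struct xfrm4_protocol *handler,
                                unsigned char protocol);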
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
|
|
Both the pr_err and the additional kerneldoc aim to help when debugging
errors thrown from within a clock rate-change notifier callback.
Reported-by: Sören Brinkmann <soren.brinkmann@xilinx.com>
Acked-by: Sören Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
|
|
This patch makes it possible to put the chipidea udc into full-speed-only
mode. It is set by the devicetree property "maximum-speed = full-speed".
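A sketch of how the driver side might parse it (the flag name is an
assumption):

    const char *speed;

    if (!of_property_read_string(pdev->dev.of_node,
                                 "maximum-speed", &speed) &&
        !strcmp(speed, "full-speed"))
            ci_pdata->flags |= CI_HDRC_FORCE_FULLSPEED;  /* assumed flag */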
Signed-off-by: Peter Chen <peter.chen@freescale.com>
Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Use a resource for the hyperv mmio region instead of start/size
variables. Register the region properly so it shows up in
/proc/iomem.
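In outline, along these lines (exact names aside):

    static struct resource hyperv_mmio = {
            .name  = "hyperv mmio",
            .flags = IORESOURCE_MEM,
    };

    /* fill in the range and register it; after this the region
     * is visible in /proc/iomem */
    hyperv_mmio.start = mmio_start;
    hyperv_mmio.end   = mmio_start + mmio_size - 1;
    request_resource(&iomem_resource, &hyperv_mmio);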
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
We want the tty revert here as well.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next
Steffen Klassert says:
====================
1) Introduce the skb_to_sgvec_nomark function to add further data to the sg
list without calling sg_unmark_end first. Needed to add extended sequence
number information. From Fan Du.
2) Add IPsec extended sequence numbers support to the Authentication Header
protocol for ipv4 and ipv6. From Fan Du.
3) Make the IPsec flowcache namespace aware, from Fan Du.
4) Avoid creating temporary SA for every packet when no key manager is
registered. From Horia Geanta.
5) Support filtering of SA dumps to show only the SAs that match a
given filter. From Nicolas Dichtel.
6) Remove caching of xfrm_policy_sk_bundles. The cached socket policy bundles
are never used, instead we create a new cache entry whenever xfrm_lookup()
is called on a socket policy. Most protocols cache the used routes to the
socket, so this caching is not needed.
7) Fix a forgotten SADB_X_EXT_FILTER length check in pfkey, from Nicolas
Dichtel.
8) Cleanup error handling of xfrm_state_clone.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The name __smp_call_function_single() doesn't tell much about the
properties of this function, especially when compared to
smp_call_function_single().
The comments above the implementation are also misleading. The main
point of this function is actually not the ability to embed the csd
in an object. That is actually a requirement that results from the
purpose of this function, which is to raise an IPI asynchronously.
As such it can be called with interrupts disabled. And this feature
comes at the cost of the caller, who then needs to serialize the
IPIs on this csd.
Let's rename the function and enhance the comments so that they reflect
these properties.
Suggested-by: Christoph Hellwig <hch@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
The main point of calling __smp_call_function_single() is to send
an IPI in a pure asynchronous way. By embedding a csd in an object,
a caller can send the IPI without waiting for a previous one to complete
as is required by smp_call_function_single() for example. As such,
sending this kind of IPI can be safe even when irqs are disabled.
This flexibility comes at the expense of the caller, who then needs to
synchronize the csd lifecycle themselves and make sure that IPIs on a
single csd are serialized.
This is how __smp_call_function_single() works when wait = 0, and this
use case is relevant.
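For illustration, the wait = 0 pattern with a csd embedded in a
(hypothetical) object:

    struct my_work {
            struct call_single_data csd;    /* embedded, caller-managed */
            int payload;
    };

    static void my_work_handler(void *info);  /* runs on the target cpu */

    static void fire_async_ipi(int cpu, struct my_work *w)
    {
            w->csd.func = my_work_handler;
            w->csd.info = w;
            /* safe with irqs disabled, but the caller must ensure the
             * previous IPI on this csd has completed */
            __smp_call_function_single(cpu, &w->csd, 0);
    }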
Now there doesn't seem to be any use case with wait = 1 that can't be
covered by smp_call_function_single() instead, which is safer. Let's look
at the two possible scenarios:
1) The user calls __smp_call_function_single(wait = 1) on a csd embedded
in an object. It looks like a nice and convenient pattern at first
sight because we can then retrieve the object from the IPI handler easily.
But actually it is a waste of memory space in the object since the csd
can be allocated from the stack by smp_call_function_single(wait = 1)
and the object can be passed as the IPI argument.
Besides that, embedding the csd in an object is more error prone
because the caller must take care of the serialization of the IPIs
for this csd.
2) The user calls __smp_call_function_single(wait = 1) on a csd that
is allocated on the stack. It's ok, but smp_call_function_single()
can do it as well and it already takes care of the allocation on the
stack. Again it's simpler and less error prone.
Therefore, using the underscore-prefixed API version with wait = 1
is a bad pattern and a sign that the caller can do something safer and
simpler.
There was a single user of that, which has just been converted.
So let's remove this option to discourage further users.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
Align __smp_call_function_single() with smp_call_function_single() so
that it also checks whether the requested cpu is still online.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
Now that we got rid of all the remaining code which fiddled with csd.list,
let's remove it.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
rq_fifo_clear() resets csd.list through INIT_LIST_HEAD for no clear
purpose. The csd.list doesn't need to be initialized as a list head
because it's only ever used as a list node.
Let's remove this useless initialization.
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
The block layer currently abuses rq->csd.list.next for storing fifo_time.
That is a terrible hack and completely unnecessary as well: a union
achieves the same space saving in a cleaner way.
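The cleanup boils down to something like this in struct request
(sketch):

    union {
            struct call_single_data csd;  /* completion IPI, in flight */
            unsigned long fifo_time;      /* io scheduler fifo time */
    };

The two members are never live at the same time: fifo_time is used
while the request sits in an io scheduler, the csd once it completes.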
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
We want these fixes here as well.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
We want those fixes here as well.
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next
|
|
Once mgmt_set_powered(off) starts doing disconnections we'll need to
care about any disconnections in mgmt.c and not just those with the
MGMT_CONNECTED flag set. Therefore, move the check into mgmt.c from
hci_event.c.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
We'll soon need to make decisions on toggling the HCI_ADVERTISING flag
based on pending mgmt_set_powered commands. Therefore, move the handling
from hci_event.c into mgmt.c.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
This patch adds a convenience function to return the number of
connections in the conn_hash list. This will be useful once we update
the power off procedure to disconnect any open connections.
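A sketch of such a helper, assuming the conn_hash keeps per-link-type
counters (the field names may differ):

    static inline unsigned int hci_conn_count(struct hci_dev *hdev)
    {
            struct hci_conn_hash *h = &hdev->conn_hash;

            /* sum of all link types currently in the hash */
            return h->acl_num + h->amp_num + h->sco_num + h->le_num;
    }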
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
Add code for device tree support of clocks in the BCM281xx family of
SoCs. Machines in this family use peripheral clocks implemented by
"Kona" clock control units (CCUs). (Other Broadcom SoC families use
Kona style CCUs as well, but support for them is not yet upstream.)
A BCM281xx SoC has multiple CCUs, each of which manages a set of
clocks on the SoC. A Kona peripheral clock is a composite clock that
may include a gate, a parent clock multiplexor, and zero, one,
or two dividers. There is a variety of gate types, and many gates
implement hardware-managed gating (often called "auto-gating").
Most dividers divide their input clock signal by an integer value
(one or more). There are also "fractional" dividers which allow
division by non-integer values. To accommodate such dividers,
clock rates and dividers are generally maintained by the code in
"scaled" form, which allows integer and fractional dividers to
be handled in a uniform way.
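For example (a sketch, not the driver's actual helpers), a divider N
with f fractional bits can be kept as scaled_div = N * 2^f, so integer
dividers are just the f = 0 case of the same formula:

    static u64 scaled_rate(u64 parent_rate, u64 scaled_div, u32 frac_bits)
    {
            /* rate = parent_rate / (scaled_div / 2^frac_bits) */
            return div64_u64(parent_rate << frac_bits, scaled_div);
    }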
If present, the gate for a Kona peripheral clock must be enabled
when a change is made to its multiplexor or one of its dividers.
Additionally, dividers and multiplexors have trigger registers which
must be used whenever the divider value or selected parent clock is
changed. The same trigger is often used for a divider and
multiplexor, and a BCM281xx peripheral clock occasionally has two
triggers.
The gate, dividers, and parent clock selector are treated in this
code as "components" of a peripheral clock. Their functionality is
implemented directly--e.g. the common clock framework gate
implementation is not used for a Kona peripheral clock gate. (This
has been considered though, and the intention is to evolve this
code to leverage common code as much as possible.)
The source code is divided into three general portions:
drivers/clk/bcm/clk-kona.h
drivers/clk/bcm/clk-kona.c
These implement the basic Kona clock functionality,
including the clk_ops methods and various routines to
manipulate registers and interpret their values. This
includes some functions used to set clocks to a desired
initial state (though this feature is only partially
implemented here).
drivers/clk/bcm/clk-kona-setup.c
This contains generic run-time initialization code for
data structures representing Kona CCUs and clocks. This
encapsulates the clock structure initialization that can't
be done statically. Note that there is a great deal of
validity-checking code here, making explicit certain
assumptions in the code. This is mostly useful for adding
new clock definitions and could possibly be disabled for
production use.
drivers/clk/bcm/clk-bcm281xx.c
This file defines the specific CCUs used by BCM281XX family
SoCs, as well as the specific clocks implemented by each.
It declares a device tree clock match entry for each CCU
defined.
include/dt-bindings/clock/bcm281xx.h
This file defines the selector (index) values used to
identify a particular clock provided by a CCU. It consists
entirely of C preprocessor constants, to be used by both the
C source and device tree source files.
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Tim Kryger <tim.kryger@linaro.org>
Reviewed-by: Matt Porter <mporter@linaro.org>
Acked-by: Mike Turquette <mturquette@linaro.org>
Signed-off-by: Matt Porter <mporter@linaro.org>
|
|
SET_REPORT and GET_REPORT are mandatory in the HID specification.
Make the corresponding API in hid-core mandatory too, which removes the
need to test for it in various places.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
Add support for 32-bit ioctls with v4l-subdev device nodes.
Rather than keep adding new ioctls to the list in v4l2-compat-ioctl32.c, just check
if the ioctl is a non-private V4L2 ioctl and if so, call the conversion code.
We keep forgetting to add new ioctls, so this is a more robust solution.
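The check boils down to something like this (a sketch; the dispatch
function name is illustrative):

    /* all public V4L2 ioctls use type 'V' with a number below the
     * private range, so one test covers present and future ioctls */
    if (_IOC_TYPE(cmd) == 'V' && _IOC_NR(cmd) < BASE_VIDIOC_PRIVATE)
            return do_video_ioctl(file, cmd, arg);  /* generic conversion */

    return -ENOIOCTLCMD;    /* custom ioctls: try the subdev compat32 op */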
In addition extend the subdev API with support for a compat32 function to
convert custom v4l-subdev ioctls.
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Tested-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
|
|
Add the sched_setattr and sched_getattr syscalls to the generic syscall
list, which is used by the following architectures: arc, arm64, c6x,
hexagon, metag, openrisc, score, tile, unicore32.
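The asm-generic side of the change amounts to two new entries in
include/uapi/asm-generic/unistd.h (the syscall numbers shown are
assumptions and may be off):

    #define __NR_sched_setattr 274
    __SYSCALL(__NR_sched_setattr, sys_sched_setattr)
    #define __NR_sched_getattr 275
    __SYSCALL(__NR_sched_getattr, sys_sched_getattr)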
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arch@vger.kernel.org
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: linux-c6x-dev@linux-c6x.org
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: linux-hexagon@vger.kernel.org
Cc: linux-metag@vger.kernel.org
Cc: Jonas Bonn <jonas@southpole.se>
Cc: linux@lists.openrisc.net
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
|
|
This cleanup series gets rid of <mach/timex.h> for platforms not using
ARCH_MULTIPLATFORM. (For multi-platform code it's already unused since
387798b (ARM: initial multiplatform support).)
To make this work, some code outside arch/arm needed to be adapted. The
respective changes got acks from their maintainers to be taken via arm-soc
(with Andrew Morton substituting for Alessandro Zummo as rtc maintainer).
Compared to the previous pull request, another patch was added that
fixes a (non-critical) regression on ixp4xx. Olof Johansson asked not to
squash this fix into the original commit, to save him from having to
reverify the series.
Conflicts:
arch/arm/mach-at91/at91sam9260.c
arch/arm/mach-at91/at91sam9261.c
arch/arm/mach-at91/at91sam9263.c
arch/arm/mach-at91/at91sam9rl.c
arch/arm/mach-mmp/time.c
arch/arm/mach-sa1100/time.c
|
|
The RPA needs to be stored so that we know which one is current. Otherwise
it is impossible to ensure that the correct RPA can always be programmed
into the controller when it is needed.
The current code checks whether the address in the controller is an RPA,
but that can potentially lead to using an RPA that cannot be resolved with
the IRK that has been distributed.
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
|
|
When running active scanning during LE discovery, do not reveal our own
identity to the peer devices. In case LE privacy has been enabled,
a resolvable private address is used. If the LE privacy option is off,
then use an unresolvable private address.
The public address or static random address is never used in active
scanning anymore. This ensures that scan requests are sent using a
random address.
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
|