format. For the single-planar API, applications must set <structfield> plane
</structfield> to zero. Additional flags may be posted in the <structfield>
flags </structfield> field. Refer to the manual of open() for details.
-Currently only O_CLOEXEC is supported. All other fields must be set to zero.
+Currently only O_CLOEXEC, O_RDONLY, O_WRONLY, and O_RDWR are supported. All
+other fields must be set to zero.
In the case of the multi-planar API, every plane is exported separately using
multiple <constant> VIDIOC_EXPBUF </constant> calls. </para>
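
A minimal sketch of exporting a buffer with these flags (error handling
abbreviated; the buffer is assumed to have been allocated with
VIDIOC_REQBUFS beforehand):

	struct v4l2_exportbuffer expbuf;

	memset(&expbuf, 0, sizeof(expbuf));
	expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	expbuf.index = 0;			/* buffer to export */
	expbuf.plane = 0;			/* single-planar API */
	expbuf.flags = O_CLOEXEC | O_RDWR;	/* read/write DMABUF fd */

	if (ioctl(fd, VIDIOC_EXPBUF, &expbuf) == -1)
		perror("VIDIOC_EXPBUF");
	/* on success, expbuf.fd holds the DMABUF file descriptor */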
<entry>__u32</entry>
<entry><structfield>flags</structfield></entry>
<entry>Flags for the newly created file, currently only <constant>
-O_CLOEXEC </constant> is supported, refer to the manual of open() for more
-details.</entry>
+O_CLOEXEC </constant>, <constant>O_RDONLY</constant>, <constant>O_WRONLY
+</constant>, and <constant>O_RDWR</constant> are supported, refer to the manual
+of open() for more details.</entry>
</row>
<row>
<entry>__s32</entry>
(4) Diff the index keys of two objects.
- int (*diff_objects)(const void *a, const void *b);
+ int (*diff_objects)(const void *object, const void *index_key);
- Return the bit position at which the index keys of two objects differ or
- -1 if they are the same.
+ Return the bit position at which the index key of the specified object
+ differs from the given index key, or -1 if they are the same (see the
+ sketch below).
(5) Free an object.
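
A minimal sketch of operation (4) for an object keyed by a 32-bit
integer (the structure and function names are illustrative, not part of
the kernel API):

	struct my_object {
		u32 key;
		/* ... payload ... */
	};

	static int my_diff_objects(const void *object, const void *index_key)
	{
		u32 a = ((const struct my_object *)object)->key;
		u32 b = *(const u32 *)index_key;
		u32 diff = a ^ b;

		if (!diff)
			return -1;	/* index keys are identical */
		return __ffs(diff);	/* position of first differing bit */
	}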
Invalidation is removing an entry from the cache without writing it
back. Cache blocks can be invalidated via the invalidate_cblocks
message, which takes an arbitrary number of cblock ranges. Each cblock
-must be expressed as a decimal value, in the future a variant message
-that takes cblock ranges expressed in hexidecimal may be needed to
-better support efficient invalidation of larger caches. The cache must
-be in passthrough mode when invalidate_cblocks is used.
+range's end value is "one past the end", meaning 5-10 expresses a range
+of values from 5 to 9. Each cblock must be expressed as a decimal
+value; in the future, a variant message that takes cblock ranges
+expressed in hexadecimal may be needed to better support efficient
+invalidation of larger caches. The cache must be in passthrough mode
+when invalidate_cblocks is used.
invalidate_cblocks [<cblock>|<cblock begin>-<cblock end>]*
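
For example, assuming the cache target's usual dmsetup message interface
(device name illustrative):

   dmsetup message my-cache 0 invalidate_cblocks 2345 3456-4567 5678-6789

This invalidates cblock 2345 plus the ranges 3456..4566 and 5678..6788,
following the "one past the end" convention described above.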
for the davinci_emac interface contains.
Required properties:
-- compatible: "ti,davinci-dm6467-emac";
+- compatible: "ti,davinci-dm6467-emac" or "ti,am3517-emac"
- reg: Offset and length of the register set for the device
- ti,davinci-ctrl-reg-offset: offset to control register
- ti,davinci-ctrl-mod-reg-offset: offset to control module register
Optional properties:
- phy-device : phandle to Ethernet phy
- local-mac-address : Ethernet mac address to use
+- reg-io-width : Mask of sizes (in bytes) of the IO accesses that
+  are supported on the device. Valid values for the SMSC LAN91c111 are
+  1, 2 or 4. If omitted or invalid, the size defaults to 2, meaning
+  16-bit access only.
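+
+Example (a sketch; the unit address, register offsets and sizes are
+illustrative):
+
+eth0: ethernet@5c000000 {
+	compatible = "ti,am3517-emac";
+	reg = <0x5c000000 0x30000>;
+	ti,davinci-ctrl-reg-offset = <0x10000>;
+	ti,davinci-ctrl-mod-reg-offset = <0>;
+	local-mac-address = [ 00 00 00 00 00 00 ];
+};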
--- /dev/null
+ ==============================
+ KERNEL MODULE SIGNING FACILITY
+ ==============================
+
+CONTENTS
+
+ - Overview.
+ - Configuring module signing.
+ - Generating signing keys.
+ - Public keys in the kernel.
+ - Manually signing modules.
+ - Signed modules and stripping.
+ - Loading signed modules.
+ - Non-valid signatures and unsigned modules.
+ - Administering/protecting the private key.
+
+
+========
+OVERVIEW
+========
+
+The kernel module signing facility cryptographically signs modules during
+installation and then checks the signature upon loading the module. This
+allows increased kernel security by disallowing the loading of unsigned modules
+or modules signed with an invalid key, making it harder to load a malicious
+module into the kernel. The module signature checking is done by the kernel so
+that it is not necessary to have trusted userspace bits.
+
+This facility uses X.509 ITU-T standard certificates to encode the public keys
+involved. The signatures are not themselves encoded in any industry-standard
+format. The facility currently only supports the RSA public key encryption
+standard (though it is pluggable and permits others to be used). The possible
+hash algorithms that can be used are SHA-1, SHA-224, SHA-256, SHA-384, and
+SHA-512 (the algorithm is selected by data in the signature).
+
+
+==========================
+CONFIGURING MODULE SIGNING
+==========================
+
+The module signing facility is enabled by going to the "Enable Loadable Module
+Support" section of the kernel configuration and turning on
+
+ CONFIG_MODULE_SIG "Module signature verification"
+
+This has a number of options available:
+
+ (1) "Require modules to be validly signed" (CONFIG_MODULE_SIG_FORCE)
+
+ This specifies how the kernel should deal with a module that has a
+ signature for which the key is not known or a module that is unsigned.
+
+ If this is off (ie. "permissive"), then modules for which the key is not
+ available and modules that are unsigned are permitted, but the kernel will
+ be marked as being tainted.
+
+ If this is on (ie. "restrictive"), only modules that have a valid
+ signature that can be verified by a public key in the kernel's possession
+ will be loaded. All other modules will generate an error.
+
+ Irrespective of the setting here, if the module has a signature block that
+ cannot be parsed, it will be rejected out of hand.
+
+
+ (2) "Automatically sign all modules" (CONFIG_MODULE_SIG_ALL)
+
+ If this is on then modules will be automatically signed during the
+ modules_install phase of a build. If this is off, then the modules must
+ be signed manually using:
+
+ scripts/sign-file
+
+
+ (3) "Which hash algorithm should modules be signed with?"
+
+ This presents a choice of which hash algorithm the installation phase will
+ sign the modules with:
+
+ CONFIG_SIG_SHA1 "Sign modules with SHA-1"
+ CONFIG_SIG_SHA224 "Sign modules with SHA-224"
+ CONFIG_SIG_SHA256 "Sign modules with SHA-256"
+ CONFIG_SIG_SHA384 "Sign modules with SHA-384"
+ CONFIG_SIG_SHA512 "Sign modules with SHA-512"
+
+ The algorithm selected here will also be built into the kernel (rather
+ than being a module) so that modules signed with that algorithm can have
+ their signatures checked without causing a dependency loop.
+
+
+=======================
+GENERATING SIGNING KEYS
+=======================
+
+Cryptographic keypairs are required to generate and check signatures. A
+private key is used to generate a signature and the corresponding public key is
+used to check it. The private key is only needed during the build, after which
+it can be deleted or stored securely. The public key gets built into the
+kernel so that it can be used to check the signatures as the modules are
+loaded.
+
+Under normal conditions, the kernel build will automatically generate a new
+keypair using openssl if one does not exist in the files:
+
+ signing_key.priv
+ signing_key.x509
+
+during the building of vmlinux (the public part of the key needs to be built
+into vmlinux) using parameters in the:
+
+ x509.genkey
+
+file (which is also generated if it does not already exist).
+
+It is strongly recommended that you provide your own x509.genkey file.
+
+Most notably, in the x509.genkey file, the req_distinguished_name section
+should be altered from the default:
+
+ [ req_distinguished_name ]
+ O = Magrathea
+ CN = Glacier signing key
+ emailAddress = slartibartfast@magrathea.h2g2
+
+The generated RSA key size can also be set with:
+
+ [ req ]
+ default_bits = 4096
+
+
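+For reference, a complete x509.genkey along these lines might look like the
+following (a sketch; the extension section name "myexts" is arbitrary):
+
+	[ req ]
+	default_bits = 4096
+	distinguished_name = req_distinguished_name
+	prompt = no
+	string_mask = utf8only
+	x509_extensions = myexts
+
+	[ req_distinguished_name ]
+	O = Magrathea
+	CN = Glacier signing key
+	emailAddress = slartibartfast@magrathea.h2g2
+
+	[ myexts ]
+	basicConstraints=critical,CA:FALSE
+	keyUsage=digitalSignature
+	subjectKeyIdentifier=hash
+	authorityKeyIdentifier=keyid
+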
+It is also possible to manually generate the private/public key files using the
+x509.genkey key generation configuration file in the root directory of the
+Linux kernel source tree and the openssl command. The following is an example
+to generate the public/private key files:
+
+ openssl req -new -nodes -utf8 -sha256 -days 36500 -batch -x509 \
+ -config x509.genkey -outform DER -out signing_key.x509 \
+ -keyout signing_key.priv
+
+
+=========================
+PUBLIC KEYS IN THE KERNEL
+=========================
+
+The kernel contains a ring of public keys that can be viewed by root. They're
+in a keyring called ".system_keyring" that can be seen by:
+
+ [root@deneb ~]# cat /proc/keys
+ ...
+ 223c7853 I------ 1 perm 1f030000 0 0 keyring .system_keyring: 1
+ 302d2d52 I------ 1 perm 1f010000 0 0 asymmetri Fedora kernel signing key: d69a84e6bce3d216b979e9505b3e3ef9a7118079: X509.RSA a7118079 []
+ ...
+
+Beyond the public key generated specifically for module signing, any file
+placed in the kernel source root directory or the kernel build root directory
+whose name is suffixed with ".x509" will be assumed to be an X.509 public key
+and will be added to the keyring.
+
+Further, the architecture code may take public keys from a hardware store and
+add those as well (e.g. from the UEFI key database).
+
+Finally, it is possible to add additional public keys by doing:
+
+ keyctl padd asymmetric "" [.system_keyring-ID] <[key-file]
+
+e.g.:
+
+ keyctl padd asymmetric "" 0x223c7853 <my_public_key.x509
+
+Note, however, that the kernel will only permit keys to be added to
+.system_keyring _if_ the new key's X.509 wrapper is validly signed by a key
+that is already resident in the .system_keyring at the time the key is added.
+
+
+========================
+MANUALLY SIGNING MODULES
+========================
+
+To manually sign a module, use the scripts/sign-file tool available in
+the Linux kernel source tree. The script requires 4 arguments:
+
+ 1. The hash algorithm (e.g., sha256)
+ 2. The private key filename
+ 3. The public key filename
+ 4. The kernel module to be signed
+
+The following is an example to sign a kernel module:
+
+ scripts/sign-file sha512 kernel-signkey.priv \
+ kernel-signkey.x509 module.ko
+
+The hash algorithm used does not have to match the one configured, but if it
+doesn't, you should make sure that the hash algorithm is either built into the
+kernel or can be loaded without causing a dependency loop (the module carrying
+the algorithm must not itself need that algorithm to be verified).
+
+
+============================
+SIGNED MODULES AND STRIPPING
+============================
+
+A signed module has a digital signature simply appended at the end. The string
+"~Module signature appended~." at the end of the module's file confirms that a
+signature is present but it does not confirm that the signature is valid!
+
+Signed modules are BRITTLE as the signature is outside of the defined ELF
+container. Thus they MAY NOT be stripped once the signature is computed and
+attached. Note the entire module is the signed payload, including any and all
+debug information present at the time of signing.
+
+
+======================
+LOADING SIGNED MODULES
+======================
+
+Modules are loaded with insmod, modprobe, init_module() or finit_module(),
+exactly as for unsigned modules, as no processing is done in userspace. The
+signature checking is all done within the kernel.
+
+
+=========================================
+NON-VALID SIGNATURES AND UNSIGNED MODULES
+=========================================
+
+If CONFIG_MODULE_SIG_FORCE is enabled or enforcemodulesig=1 is supplied on
+the kernel command line, the kernel will only load validly signed modules
+for which it has a public key. Otherwise, it will also load modules that are
+unsigned. Any module for which the kernel has a key, but which proves to have
+a signature mismatch will not be permitted to load.
+
+Any module that has an unparseable signature will be rejected.
+
+
+========================================
+ADMINISTERING/PROTECTING THE PRIVATE KEY
+========================================
+
+Since the private key is used to sign modules, an attacker who obtains it could
+sign malicious modules and compromise the operating system. The private key
+must be either destroyed or moved to a secure location and not kept in the
+root directory of the kernel source tree.
Default: 64 (as recommended by RFC1700)
ip_no_pmtu_disc - BOOLEAN
- Disable Path MTU Discovery.
- default FALSE
+ Disable Path MTU Discovery. If enabled and a
+ fragmentation-required ICMP is received, the PMTU to this
+ destination will be set to min_pmtu (see below). You will need
+ to raise min_pmtu to the smallest interface MTU on your system
+ manually if you want to avoid locally generated fragments.
+ Default: FALSE
min_pmtu - INTEGER
default 552 - minimum discovered Path MTU
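
	For example, to disable PMTU discovery and raise min_pmtu to
	match the smallest interface MTU on the system (values
	illustrative):

		sysctl -w net.ipv4.ip_no_pmtu_disc=1
		sysctl -w net.ipv4.route.min_pmtu=1500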
[shutdown] close() --------> destruction of the transmission socket and
deallocation of all associated resources.
+Socket creation and destruction is also straightforward, and is done
+the same way as for capturing, described in the previous paragraph:
+
+ int fd = socket(PF_PACKET, mode, 0);
+
+The protocol can optionally be 0 in case we only want to transmit
+via this socket, which avoids an expensive call to packet_rcv().
+In this case, you also need to bind(2) the TX_RING with sll_protocol = 0
+set. Otherwise, use htons(ETH_P_ALL) or any other protocol, for example.
+
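+A minimal sketch of such a bind(2) call (the interface name is
+illustrative; if_nametoindex(3) comes from <net/if.h>):
+
+    struct sockaddr_ll ll;
+
+    memset(&ll, 0, sizeof(ll));
+    ll.sll_family   = AF_PACKET;
+    ll.sll_protocol = 0;		/* TX only, skip packet_rcv() */
+    ll.sll_ifindex  = if_nametoindex("eth0");
+
+    if (bind(fd, (struct sockaddr *) &ll, sizeof(ll)) == -1)
+        perror("bind");
+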
Binding the socket to your network interface is mandatory (with zero copy) to
know the header size of frames used in the circular buffer.
F: arch/arm/mach-footbridge/
ARM/FREESCALE IMX / MXC ARM ARCHITECTURE
+M: Shawn Guo <shawn.guo@linaro.org>
M: Sascha Hauer <kernel@pengutronix.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-T: git git://git.pengutronix.de/git/imx/linux-2.6.git
+T: git git://git.linaro.org/people/shawnguo/linux-2.6.git
F: arch/arm/mach-imx/
+F: arch/arm/boot/dts/imx*
F: arch/arm/configs/imx*_defconfig
-ARM/FREESCALE IMX6
-M: Shawn Guo <shawn.guo@linaro.org>
-L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Maintained
-T: git git://git.linaro.org/people/shawnguo/linux-2.6.git
-F: arch/arm/mach-imx/*imx6*
-
ARM/FREESCALE MXS ARM ARCHITECTURE
M: Shawn Guo <shawn.guo@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
GPIO SUBSYSTEM
M: Linus Walleij <linus.walleij@linaro.org>
-S: Maintained
+M: Alexandre Courbot <gnurou@gmail.com>
L: linux-gpio@vger.kernel.org
-F: Documentation/gpio.txt
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
+S: Maintained
+F: Documentation/gpio/
F: drivers/gpio/
F: include/linux/gpio*
F: include/asm-generic/gpio.h
S: Maintained
F: drivers/media/usb/gspca/
+GUID PARTITION TABLE (GPT)
+M: Davidlohr Bueso <davidlohr@hp.com>
+L: linux-efi@vger.kernel.org
+S: Maintained
+F: block/partitions/efi.*
+
STK1160 USB VIDEO CAPTURE DRIVER
M: Ezequiel Garcia <elezegarcia@gmail.com>
L: linux-media@vger.kernel.org
M: Carolyn Wyborny <carolyn.wyborny@intel.com>
M: Don Skidmore <donald.c.skidmore@intel.com>
M: Greg Rose <gregory.v.rose@intel.com>
-M: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
M: Alex Duyck <alexander.h.duyck@intel.com>
M: John Ronciak <john.ronciak@intel.com>
-M: Tushar Dave <tushar.n.dave@intel.com>
L: e1000-devel@lists.sourceforge.net
W: http://www.intel.com/support/feedback.htm
W: http://e1000.sourceforge.net/
M: Herbert Xu <herbert@gondor.apana.org.au>
M: "David S. Miller" <davem@davemloft.net>
L: netdev@vger.kernel.org
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next.git
S: Maintained
F: net/xfrm/
F: net/key/
F: net/ipv4/xfrm*
+F: net/ipv4/esp4.c
+F: net/ipv4/ah4.c
+F: net/ipv4/ipcomp.c
+F: net/ipv4/ip_vti.c
F: net/ipv6/xfrm*
+F: net/ipv6/esp6.c
+F: net/ipv6/ah6.c
+F: net/ipv6/ipcomp6.c
+F: net/ipv6/ip6_vti.c
F: include/uapi/linux/xfrm.h
F: include/net/xfrm.h
F: include/linux/pci*
F: arch/x86/pci/
+PCI DRIVER FOR IMX6
+M: Richard Zhu <r65037@freescale.com>
+M: Shawn Guo <shawn.guo@linaro.org>
+L: linux-pci@vger.kernel.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: drivers/pci/host/*imx6*
+
+PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
+M: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+M: Jason Cooper <jason@lakedaemon.net>
+L: linux-pci@vger.kernel.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: drivers/pci/host/*mvebu*
+
PCI DRIVER FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@gmail.com>
L: linux-tegra@vger.kernel.org
+L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
F: drivers/pci/host/pci-tegra.c
+PCI DRIVER FOR RENESAS R-CAR
+M: Simon Horman <horms@verge.net.au>
+L: linux-pci@vger.kernel.org
+L: linux-sh@vger.kernel.org
+S: Maintained
+F: drivers/pci/host/*rcar*
+
PCI DRIVER FOR SAMSUNG EXYNOS
M: Jingoo Han <jg1.han@samsung.com>
L: linux-pci@vger.kernel.org
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/host/pci-exynos.c
+PCI DRIVER FOR SYNOPSYS DESIGNWARE
+M: Mohit Kumar <mohit.kumar@st.com>
+M: Jingoo Han <jg1.han@samsung.com>
+L: linux-pci@vger.kernel.org
+S: Maintained
+F: drivers/pci/host/*designware*
+
PCMCIA SUBSYSTEM
P: Linux PCMCIA Team
L: linux-pcmcia@lists.infradead.org
VERSION = 3
PATCHLEVEL = 13
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
NAME = One Giant Leap for Frogkind
# *DOCUMENTATION*
# Select initial ramdisk compression format, default is gzip(1).
# This shall be used by the dracut(8) tool while creating an initramfs image.
#
-INITRD_COMPRESS=gzip
-ifeq ($(CONFIG_RD_BZIP2), y)
- INITRD_COMPRESS=bzip2
-else ifeq ($(CONFIG_RD_LZMA), y)
- INITRD_COMPRESS=lzma
-else ifeq ($(CONFIG_RD_XZ), y)
- INITRD_COMPRESS=xz
-else ifeq ($(CONFIG_RD_LZO), y)
- INITRD_COMPRESS=lzo
-else ifeq ($(CONFIG_RD_LZ4), y)
- INITRD_COMPRESS=lz4
-endif
-export INITRD_COMPRESS
+INITRD_COMPRESS-y := gzip
+INITRD_COMPRESS-$(CONFIG_RD_BZIP2) := bzip2
+INITRD_COMPRESS-$(CONFIG_RD_LZMA) := lzma
+INITRD_COMPRESS-$(CONFIG_RD_XZ) := xz
+INITRD_COMPRESS-$(CONFIG_RD_LZO) := lzo
+INITRD_COMPRESS-$(CONFIG_RD_LZ4) := lz4
+export INITRD_COMPRESS := $(INITRD_COMPRESS-y)
ifdef CONFIG_MODULE_SIG_ALL
MODSECKEY = ./signing_key.priv
config ARC
def_bool y
+ select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
# ARC Busybox based initramfs absolutely relies on DEVTMPFS for /dev
select DEVTMPFS if !INITRAMFS_SOURCE=""
/******** no-legacy-syscalls-ABI *******/
+#ifndef _UAPI_ASM_ARC_UNISTD_H
+#define _UAPI_ASM_ARC_UNISTD_H
+
#define __ARCH_WANT_SYS_EXECVE
#define __ARCH_WANT_SYS_CLONE
#define __ARCH_WANT_SYS_VFORK
/* Generic syscall (fs/filesystems.c - lost in asm-generic/unistd.h */
#define __NR_sysfs (__NR_arch_specific_syscall + 3)
__SYSCALL(__NR_sysfs, sys_sysfs)
+
+#endif
cache_result = (config >> 16) & 0xff;
if (cache_type >= PERF_COUNT_HW_CACHE_MAX)
return -EINVAL;
- if (cache_type >= PERF_COUNT_HW_CACHE_OP_MAX)
+ if (cache_op >= PERF_COUNT_HW_CACHE_OP_MAX)
return -EINVAL;
- if (cache_type >= PERF_COUNT_HW_CACHE_RESULT_MAX)
+ if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
return -EINVAL;
ret = arc_pmu_cache_map[cache_type][cache_op][cache_result];
*/
/dts-v1/;
-#include "omap34xx.dtsi"
+#include "am3517.dtsi"
/ {
- model = "TI AM3517 EVM (AM3517/05)";
- compatible = "ti,am3517-evm", "ti,omap3";
+ model = "TI AM3517 EVM (AM3517/05 TMDSEVM3517)";
+ compatible = "ti,am3517-evm", "ti,am3517", "ti,omap3";
memory {
device_type = "memory";
--- /dev/null
+/*
+ * Device Tree Source for am3517 SoC
+ *
+ * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include "omap3.dtsi"
+
+/ {
+ aliases {
+ serial3 = &uart4;
+ };
+
+ ocp {
+ am35x_otg_hs: am35x_otg_hs@5c040000 {
+ compatible = "ti,omap3-musb";
+ ti,hwmods = "am35x_otg_hs";
+ status = "disabled";
+ reg = <0x5c040000 0x1000>;
+ interrupts = <71>;
+ interrupt-names = "mc";
+ };
+
+ davinci_emac: ethernet@0x5c000000 {
+ compatible = "ti,am3517-emac";
+ ti,hwmods = "davinci_emac";
+ status = "disabled";
+ reg = <0x5c000000 0x30000>;
+ interrupts = <67 68 69 70>;
+ ti,davinci-ctrl-reg-offset = <0x10000>;
+ ti,davinci-ctrl-mod-reg-offset = <0>;
+ ti,davinci-ctrl-ram-offset = <0x20000>;
+ ti,davinci-ctrl-ram-size = <0x2000>;
+ ti,davinci-rmii-en = /bits/ 8 <1>;
+ local-mac-address = [ 00 00 00 00 00 00 ];
+ };
+
+ davinci_mdio: ethernet@0x5c030000 {
+ compatible = "ti,davinci_mdio";
+ ti,hwmods = "davinci_mdio";
+ status = "disabled";
+ reg = <0x5c030000 0x1000>;
+ bus_freq = <1000000>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ uart4: serial@4809e000 {
+ compatible = "ti,omap3-uart";
+ ti,hwmods = "uart4";
+ status = "disabled";
+ reg = <0x4809e000 0x400>;
+ interrupts = <84>;
+ dmas = <&sdma 55 &sdma 54>;
+ dma-names = "tx", "rx";
+ clock-frequency = <48000000>;
+ };
+ };
+};
/dts-v1/;
-#include "omap34xx.dtsi"
+#include "omap34xx-hs.dtsi"
/ {
model = "Nokia N900";
* published by the Free Software Foundation.
*/
-#include "omap36xx.dtsi"
+#include "omap36xx-hs.dtsi"
/ {
cpus {
--- /dev/null
+/* Disabled modules for secure omaps */
+
+#include "omap34xx.dtsi"
+
+/* Secure omaps have some devices inaccessible depending on the firmware */
+&aes {
+ status = "disabled";
+};
+
+&sham {
+ status = "disabled";
+};
+
+&timer12 {
+ status = "disabled";
+};
--- /dev/null
+/* Disabled modules for secure omaps */
+
+#include "omap36xx.dtsi"
+
+/* Secure omaps have some devices inaccessible depending on the firmware */
+&aes {
+ status = "disabled";
+};
+
+&sham {
+ status = "disabled";
+};
+
+&timer12 {
+ status = "disabled";
+};
pio: pinctrl@01c20800 {
compatible = "allwinner,sun6i-a31-pinctrl";
reg = <0x01c20800 0x400>;
- interrupts = <0 11 1>, <0 15 1>, <0 16 1>, <0 17 1>;
+ interrupts = <0 11 4>,
+ <0 15 4>,
+ <0 16 4>,
+ <0 17 4>;
clocks = <&apb1_gates 5>;
gpio-controller;
interrupt-controller;
timer@01c20c00 {
compatible = "allwinner,sun4i-timer";
reg = <0x01c20c00 0xa0>;
- interrupts = <0 18 1>,
- <0 19 1>,
- <0 20 1>,
- <0 21 1>,
- <0 22 1>;
+ interrupts = <0 18 4>,
+ <0 19 4>,
+ <0 20 4>,
+ <0 21 4>,
+ <0 22 4>;
clocks = <&osc24M>;
};
uart0: serial@01c28000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28000 0x400>;
- interrupts = <0 0 1>;
+ interrupts = <0 0 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 16>;
uart1: serial@01c28400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28400 0x400>;
- interrupts = <0 1 1>;
+ interrupts = <0 1 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 17>;
uart2: serial@01c28800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28800 0x400>;
- interrupts = <0 2 1>;
+ interrupts = <0 2 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 18>;
uart3: serial@01c28c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28c00 0x400>;
- interrupts = <0 3 1>;
+ interrupts = <0 3 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 19>;
uart4: serial@01c29000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29000 0x400>;
- interrupts = <0 4 1>;
+ interrupts = <0 4 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 20>;
uart5: serial@01c29400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29400 0x400>;
- interrupts = <0 5 1>;
+ interrupts = <0 5 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb2_gates 21>;
emac: ethernet@01c0b000 {
compatible = "allwinner,sun4i-emac";
reg = <0x01c0b000 0x1000>;
- interrupts = <0 55 1>;
+ interrupts = <0 55 4>;
clocks = <&ahb_gates 17>;
status = "disabled";
};
pio: pinctrl@01c20800 {
compatible = "allwinner,sun7i-a20-pinctrl";
reg = <0x01c20800 0x400>;
- interrupts = <0 28 1>;
+ interrupts = <0 28 4>;
clocks = <&apb0_gates 5>;
gpio-controller;
interrupt-controller;
timer@01c20c00 {
compatible = "allwinner,sun4i-timer";
reg = <0x01c20c00 0x90>;
- interrupts = <0 22 1>,
- <0 23 1>,
- <0 24 1>,
- <0 25 1>,
- <0 67 1>,
- <0 68 1>;
+ interrupts = <0 22 4>,
+ <0 23 4>,
+ <0 24 4>,
+ <0 25 4>,
+ <0 67 4>,
+ <0 68 4>;
clocks = <&osc24M>;
};
uart0: serial@01c28000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28000 0x400>;
- interrupts = <0 1 1>;
+ interrupts = <0 1 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 16>;
uart1: serial@01c28400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28400 0x400>;
- interrupts = <0 2 1>;
+ interrupts = <0 2 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 17>;
uart2: serial@01c28800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28800 0x400>;
- interrupts = <0 3 1>;
+ interrupts = <0 3 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 18>;
uart3: serial@01c28c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c28c00 0x400>;
- interrupts = <0 4 1>;
+ interrupts = <0 4 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 19>;
uart4: serial@01c29000 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29000 0x400>;
- interrupts = <0 17 1>;
+ interrupts = <0 17 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 20>;
uart5: serial@01c29400 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29400 0x400>;
- interrupts = <0 18 1>;
+ interrupts = <0 18 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 21>;
uart6: serial@01c29800 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29800 0x400>;
- interrupts = <0 19 1>;
+ interrupts = <0 19 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 22>;
uart7: serial@01c29c00 {
compatible = "snps,dw-apb-uart";
reg = <0x01c29c00 0x400>;
- interrupts = <0 20 1>;
+ interrupts = <0 20 4>;
reg-shift = <2>;
reg-io-width = <4>;
clocks = <&apb1_gates 23>;
i2c0: i2c@01c2ac00 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2ac00 0x400>;
- interrupts = <0 7 1>;
+ interrupts = <0 7 4>;
clocks = <&apb1_gates 0>;
clock-frequency = <100000>;
status = "disabled";
i2c1: i2c@01c2b000 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b000 0x400>;
- interrupts = <0 8 1>;
+ interrupts = <0 8 4>;
clocks = <&apb1_gates 1>;
clock-frequency = <100000>;
status = "disabled";
i2c2: i2c@01c2b400 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b400 0x400>;
- interrupts = <0 9 1>;
+ interrupts = <0 9 4>;
clocks = <&apb1_gates 2>;
clock-frequency = <100000>;
status = "disabled";
i2c3: i2c@01c2b800 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2b800 0x400>;
- interrupts = <0 88 1>;
+ interrupts = <0 88 4>;
clocks = <&apb1_gates 3>;
clock-frequency = <100000>;
status = "disabled";
i2c4: i2c@01c2bc00 {
compatible = "allwinner,sun4i-i2c";
reg = <0x01c2bc00 0x400>;
- interrupts = <0 89 1>;
+ interrupts = <0 89 4>;
clocks = <&apb1_gates 15>;
clock-frequency = <100000>;
status = "disabled";
#define TASK_UNMAPPED_BASE UL(0x00000000)
#endif
-#ifndef PHYS_OFFSET
-#define PHYS_OFFSET UL(CONFIG_DRAM_BASE)
-#endif
-
#ifndef END_MEM
#define END_MEM (UL(CONFIG_DRAM_BASE) + CONFIG_DRAM_SIZE)
#endif
#ifndef PAGE_OFFSET
-#define PAGE_OFFSET (PHYS_OFFSET)
+#define PAGE_OFFSET PLAT_PHYS_OFFSET
#endif
/*
* The module can be at any place in ram in nommu mode.
*/
#define MODULES_END (END_MEM)
-#define MODULES_VADDR (PHYS_OFFSET)
+#define MODULES_VADDR PAGE_OFFSET
#define XIP_VIRT_ADDR(physaddr) (physaddr)
#endif
#define ARCH_PGD_MASK ((1 << ARCH_PGD_SHIFT) - 1)
+/*
+ * PLAT_PHYS_OFFSET is the offset (from zero) of the start of physical
+ * memory. This is used for XIP and NoMMU kernels, or by kernels which
+ * have their own mach/memory.h. Assembly code must always use
+ * PLAT_PHYS_OFFSET and not PHYS_OFFSET.
+ */
+#ifndef PLAT_PHYS_OFFSET
+#define PLAT_PHYS_OFFSET UL(CONFIG_PHYS_OFFSET)
+#endif
+
#ifndef __ASSEMBLY__
/*
#else
+#define PHYS_OFFSET PLAT_PHYS_OFFSET
+
static inline phys_addr_t __virt_to_phys(unsigned long x)
{
return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET;
#endif
#endif
-#endif /* __ASSEMBLY__ */
-
-#ifndef PHYS_OFFSET
-#ifdef PLAT_PHYS_OFFSET
-#define PHYS_OFFSET PLAT_PHYS_OFFSET
-#else
-#define PHYS_OFFSET UL(CONFIG_PHYS_OFFSET)
-#endif
-#endif
-
-#ifndef __ASSEMBLY__
/*
* PFNs are used to describe any physical page; this means
#ifdef CONFIG_ARM_MPU
/* Calculate the size of a region covering just the kernel */
- ldr r5, =PHYS_OFFSET @ Region start: PHYS_OFFSET
+ ldr r5, =PLAT_PHYS_OFFSET @ Region start: PHYS_OFFSET
ldr r6, =(_end) @ Cover whole kernel
sub r6, r6, r5 @ Minimum size of region to map
clz r6, r6 @ Region size must be 2^N...
set_region_nr r0, #MPU_RAM_REGION
isb
/* Full access from PL0, PL1, shared for CONFIG_SMP, cacheable */
- ldr r0, =PHYS_OFFSET @ RAM starts at PHYS_OFFSET
+ ldr r0, =PLAT_PHYS_OFFSET @ RAM starts at PHYS_OFFSET
ldr r5,=(MPU_AP_PL1RW_PL0RW | MPU_RGN_NORMAL)
setup_region r0, r5, r6, MPU_DATA_SIDE @ PHYS_OFFSET, shared, enabled
sub r4, r3, r4 @ (PHYS_OFFSET - PAGE_OFFSET)
add r8, r8, r4 @ PHYS_OFFSET
#else
- ldr r8, =PHYS_OFFSET @ always constant in this case
+ ldr r8, =PLAT_PHYS_OFFSET @ always constant in this case
#endif
/*
unsigned long get_wchan(struct task_struct *p)
{
struct stackframe frame;
+ unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
return 0;
frame.sp = thread_saved_sp(p);
frame.lr = 0; /* recovered from the stack */
frame.pc = thread_saved_pc(p);
+ stack_page = (unsigned long)task_stack_page(p);
do {
- int ret = unwind_frame(&frame);
- if (ret < 0)
+ if (frame.sp < stack_page ||
+ frame.sp >= stack_page + THREAD_SIZE ||
+ unwind_frame(&frame) < 0)
return 0;
if (!in_sched_functions(frame.pc))
return frame.pc;
machine_desc = mdesc;
machine_name = mdesc->name;
- setup_dma_zone(mdesc);
-
if (mdesc->reboot_mode != REBOOT_HARD)
reboot_mode = mdesc->reboot_mode;
sort(&meminfo.bank, meminfo.nr_banks, sizeof(meminfo.bank[0]), meminfo_cmp, NULL);
early_paging_init(mdesc, lookup_processor_type(read_cpuid_id()));
+ setup_dma_zone(mdesc);
sanity_check_meminfo();
arm_memblock_init(&meminfo, mdesc);
high = ALIGN(low, THREAD_SIZE);
/* check current frame pointer is within bounds */
- if (fp < (low + 12) || fp + 4 >= high)
+ if (fp < low + 12 || fp > high - 4)
return -EINVAL;
/* restore the registers from the stack frame */
__do_cache_op(unsigned long start, unsigned long end)
{
int ret;
- unsigned long chunk = PAGE_SIZE;
do {
+ unsigned long chunk = min(PAGE_SIZE, end - start);
+
if (signal_pending(current)) {
struct thread_info *ti = current_thread_info();
static struct resource da830_mcasp1_resources[] = {
{
- .name = "mcasp1",
+ .name = "mpu",
.start = DAVINCI_DA830_MCASP1_REG_BASE,
.end = DAVINCI_DA830_MCASP1_REG_BASE + (SZ_1K * 12) - 1,
.flags = IORESOURCE_MEM,
static struct resource da850_mcasp_resources[] = {
{
- .name = "mcasp",
+ .name = "mpu",
.start = DAVINCI_DA8XX_MCASP0_REG_BASE,
.end = DAVINCI_DA8XX_MCASP0_REG_BASE + (SZ_1K * 12) - 1,
.flags = IORESOURCE_MEM,
static struct resource dm355_asp1_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_ASP1_BASE,
.end = DAVINCI_ASP1_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
int __init dm355_gpio_register(void)
{
return davinci_gpio_register(dm355_gpio_resources,
- sizeof(dm355_gpio_resources),
+ ARRAY_SIZE(dm355_gpio_resources),
&dm355_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
int __init dm365_gpio_register(void)
{
return davinci_gpio_register(dm365_gpio_resources,
- sizeof(dm365_gpio_resources),
+ ARRAY_SIZE(dm365_gpio_resources),
&dm365_gpio_platform_data);
}
static struct resource dm365_asp_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_DM365_ASP0_BASE,
.end = DAVINCI_DM365_ASP0_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
/* DM6446 EVM uses ASP0; line-out is a pair of RCA jacks */
static struct resource dm644x_asp_resources[] = {
{
+ .name = "mpu",
.start = DAVINCI_ASP0_BASE,
.end = DAVINCI_ASP0_BASE + SZ_8K - 1,
.flags = IORESOURCE_MEM,
int __init dm644x_gpio_register(void)
{
return davinci_gpio_register(dm644_gpio_resources,
- sizeof(dm644_gpio_resources),
+ ARRAY_SIZE(dm644_gpio_resources),
&dm644_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
static struct resource dm646x_mcasp0_resources[] = {
{
- .name = "mcasp0",
+ .name = "mpu",
.start = DAVINCI_DM646X_MCASP0_REG_BASE,
.end = DAVINCI_DM646X_MCASP0_REG_BASE + (SZ_1K << 1) - 1,
.flags = IORESOURCE_MEM,
static struct resource dm646x_mcasp1_resources[] = {
{
- .name = "mcasp1",
+ .name = "mpu",
.start = DAVINCI_DM646X_MCASP1_REG_BASE,
.end = DAVINCI_DM646X_MCASP1_REG_BASE + (SZ_1K << 1) - 1,
.flags = IORESOURCE_MEM,
int __init dm646x_gpio_register(void)
{
return davinci_gpio_register(dm646x_gpio_resources,
- sizeof(dm646x_gpio_resources),
+ ARRAY_SIZE(dm646x_gpio_resources),
&dm646x_gpio_platform_data);
}
/*----------------------------------------------------------------------*/
#include <linux/clkdev.h>
#include <linux/clocksource.h>
#include <linux/dma-mapping.h>
+#include <linux/input.h>
#include <linux/io.h>
#include <linux/irqchip.h>
+#include <linux/mailbox.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/of_address.h>
+#include <linux/reboot.h>
#include <linux/amba/bus.h>
#include <linux/platform_device.h>
.name = "cpuidle-calxeda",
};
+static int hb_keys_notifier(struct notifier_block *nb, unsigned long event, void *data)
+{
+ u32 key = *(u32 *)data;
+
+ if (event != 0x1000)
+ return 0;
+
+ if (key == KEY_POWER)
+ orderly_poweroff(false);
+ else if (key == 0xffff)
+ ctrl_alt_del();
+
+ return 0;
+}
+static struct notifier_block hb_keys_nb = {
+ .notifier_call = hb_keys_notifier,
+};
+
static void __init highbank_init(void)
{
struct device_node *np;
bus_register_notifier(&platform_bus_type, &highbank_platform_nb);
bus_register_notifier(&amba_bustype, &highbank_amba_nb);
+ pl320_ipc_register_notifier(&hb_keys_nb);
+
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
if (psci_ops.cpu_suspend)
.dt_compat = omap3_gp_boards_compat,
.restart = omap3xxx_restart,
MACHINE_END
+
+static const char *am3517_boards_compat[] __initdata = {
+ "ti,am3517",
+ NULL,
+};
+
+DT_MACHINE_START(AM3517_DT, "Generic AM3517 (Flattened Device Tree)")
+ .reserve = omap_reserve,
+ .map_io = omap3_map_io,
+ .init_early = am35xx_init_early,
+ .init_irq = omap_intc_of_init,
+ .handle_irq = omap3_intc_handle_irq,
+ .init_machine = omap_generic_init,
+ .init_late = omap3_init_late,
+ .init_time = omap3_gptimer_timer_init,
+ .dt_compat = am3517_boards_compat,
+ .restart = omap3xxx_restart,
+MACHINE_END
#endif
#ifdef CONFIG_SOC_AM33XX
odbfd_exit1:
kfree(hwmods);
odbfd_exit:
+	/* if the data or we are at fault, load up a fail handler */
+ if (ret)
+ pdev->dev.pm_domain = &omap_device_fail_pm_domain;
+
return ret;
}
return pm_generic_runtime_resume(dev);
}
+
+static int _od_fail_runtime_suspend(struct device *dev)
+{
+ dev_warn(dev, "%s: FIXME: missing hwmod/omap_dev info\n", __func__);
+ return -ENODEV;
+}
+
+static int _od_fail_runtime_resume(struct device *dev)
+{
+ dev_warn(dev, "%s: FIXME: missing hwmod/omap_dev info\n", __func__);
+ return -ENODEV;
+}
+
#endif
#ifdef CONFIG_SUSPEND
#define _od_resume_noirq NULL
#endif
+struct dev_pm_domain omap_device_fail_pm_domain = {
+ .ops = {
+ SET_RUNTIME_PM_OPS(_od_fail_runtime_suspend,
+ _od_fail_runtime_resume, NULL)
+ }
+};
+
struct dev_pm_domain omap_device_pm_domain = {
.ops = {
SET_RUNTIME_PM_OPS(_od_runtime_suspend, _od_runtime_resume,
#include "omap_hwmod.h"
extern struct dev_pm_domain omap_device_pm_domain;
+extern struct dev_pm_domain omap_device_fail_pm_domain;
/* omap_device._state values */
#define OMAP_DEVICE_STATE_UNKNOWN 0
}
/**
- * _set_softreset: set OCP_SYSCONFIG.CLOCKACTIVITY bits in @v
+ * _set_softreset: set OCP_SYSCONFIG.SOFTRESET bit in @v
* @oh: struct omap_hwmod *
* @v: pointer to register contents to modify
*
return 0;
}
+/**
+ * _clear_softreset: clear OCP_SYSCONFIG.SOFTRESET bit in @v
+ * @oh: struct omap_hwmod *
+ * @v: pointer to register contents to modify
+ *
+ * Clear the SOFTRESET bit in @v for hwmod @oh. Returns -EINVAL upon
+ * error or 0 upon success.
+ */
+static int _clear_softreset(struct omap_hwmod *oh, u32 *v)
+{
+ u32 softrst_mask;
+
+ if (!oh->class->sysc ||
+ !(oh->class->sysc->sysc_flags & SYSC_HAS_SOFTRESET))
+ return -EINVAL;
+
+ if (!oh->class->sysc->sysc_fields) {
+ WARN(1,
+ "omap_hwmod: %s: sysc_fields absent for sysconfig class\n",
+ oh->name);
+ return -EINVAL;
+ }
+
+ softrst_mask = (0x1 << oh->class->sysc->sysc_fields->srst_shift);
+
+ *v &= ~softrst_mask;
+
+ return 0;
+}
+
/**
* _wait_softreset_complete - wait for an OCP softreset to complete
* @oh: struct omap_hwmod * to wait on
pr_warning("omap_hwmod: %s: cannot clk_get interface_clk %s\n",
oh->name, os->clk);
ret = -EINVAL;
+ continue;
}
os->_clk = c;
/*
pr_warning("omap_hwmod: %s: cannot clk_get opt_clk %s\n",
oh->name, oc->clk);
ret = -EINVAL;
+ continue;
}
oc->_clk = c;
/*
ret = _set_softreset(oh, &v);
if (ret)
goto dis_opt_clks;
+
+ _write_sysconfig(v, oh);
+ ret = _clear_softreset(oh, &v);
+ if (ret)
+ goto dis_opt_clks;
+
_write_sysconfig(v, oh);
if (oh->class->sysc->srst_udelay)
return 0;
}
+static int of_dev_find_hwmod(struct device_node *np,
+ struct omap_hwmod *oh)
+{
+ int count, i, res;
+ const char *p;
+
+ count = of_property_count_strings(np, "ti,hwmods");
+ if (count < 1)
+ return -ENODEV;
+
+ for (i = 0; i < count; i++) {
+ res = of_property_read_string_index(np, "ti,hwmods",
+ i, &p);
+ if (res)
+ continue;
+ if (!strcmp(p, oh->name)) {
+ pr_debug("omap_hwmod: dt %s[%i] uses hwmod %s\n",
+ np->name, i, oh->name);
+ return i;
+ }
+ }
+
+ return -ENODEV;
+}
+
/**
* of_dev_hwmod_lookup - look up needed hwmod from dt blob
* @np: struct device_node *
* @oh: struct omap_hwmod *
+ * @index: index of the entry found
+ * @found: struct device_node * found or NULL
*
* Parse the dt blob and find out needed hwmod. Recursive function is
* implemented to take care hierarchical dt blob parsing.
- * Return: The device node on success or NULL on failure.
+ * Return: Returns 0 on success, -ENODEV when not found.
*/
-static struct device_node *of_dev_hwmod_lookup(struct device_node *np,
- struct omap_hwmod *oh)
+static int of_dev_hwmod_lookup(struct device_node *np,
+ struct omap_hwmod *oh,
+ int *index,
+ struct device_node **found)
{
- struct device_node *np0 = NULL, *np1 = NULL;
- const char *p;
+ struct device_node *np0 = NULL;
+ int res;
+
+ res = of_dev_find_hwmod(np, oh);
+ if (res >= 0) {
+ *found = np;
+ *index = res;
+ return 0;
+ }
for_each_child_of_node(np, np0) {
- if (of_find_property(np0, "ti,hwmods", NULL)) {
- p = of_get_property(np0, "ti,hwmods", NULL);
- if (!strcmp(p, oh->name))
- return np0;
- np1 = of_dev_hwmod_lookup(np0, oh);
- if (np1)
- return np1;
+ struct device_node *fc;
+ int i;
+
+ res = of_dev_hwmod_lookup(np0, oh, &i, &fc);
+ if (res == 0) {
+ *found = fc;
+ *index = i;
+ return 0;
}
}
- return NULL;
+
+ *found = NULL;
+ *index = 0;
+
+ return -ENODEV;
}
/**
* _init_mpu_rt_base - populate the virtual address for a hwmod
* @oh: struct omap_hwmod * to locate the virtual address
* @data: (unused, caller should pass NULL)
+ * @index: index of the reg entry iospace in device tree
* @np: struct device_node * of the IP block's device node in the DT data
*
* Cache the virtual address used by the MPU to access this IP block's
* -ENXIO on absent or invalid register target address space.
*/
static int __init _init_mpu_rt_base(struct omap_hwmod *oh, void *data,
- struct device_node *np)
+ int index, struct device_node *np)
{
struct omap_hwmod_addr_space *mem;
void __iomem *va_start = NULL;
if (!np)
return -ENXIO;
- va_start = of_iomap(np, oh->mpu_rt_idx);
+ va_start = of_iomap(np, index + oh->mpu_rt_idx);
} else {
va_start = ioremap(mem->pa_start, mem->pa_end - mem->pa_start);
}
if (!va_start) {
- pr_err("omap_hwmod: %s: Could not ioremap\n", oh->name);
+ if (mem)
+ pr_err("omap_hwmod: %s: Could not ioremap\n", oh->name);
+ else
+ pr_err("omap_hwmod: %s: Missing dt reg%i for %s\n",
+ oh->name, index, np->full_name);
return -ENXIO;
}
*/
static int __init _init(struct omap_hwmod *oh, void *data)
{
- int r;
+ int r, index;
struct device_node *np = NULL;
if (oh->_state != _HWMOD_STATE_REGISTERED)
return 0;
- if (of_have_populated_dt())
- np = of_dev_hwmod_lookup(of_find_node_by_name(NULL, "ocp"), oh);
+ if (of_have_populated_dt()) {
+ struct device_node *bus;
+
+ bus = of_find_node_by_name(NULL, "ocp");
+ if (!bus)
+ return -ENODEV;
+
+ r = of_dev_hwmod_lookup(bus, oh, &index, &np);
+ if (r)
+ pr_debug("omap_hwmod: %s missing dt data\n", oh->name);
+ else if (np && index)
+ pr_warn("omap_hwmod: %s using broken dt data from %s\n",
+ oh->name, np->name);
+ }
if (oh->class->sysc) {
- r = _init_mpu_rt_base(oh, NULL, np);
+ r = _init_mpu_rt_base(oh, NULL, index, np);
if (r < 0) {
WARN(1, "omap_hwmod: %s: doesn't have mpu register target base\n",
oh->name);
goto error;
_write_sysconfig(v, oh);
+ ret = _clear_softreset(oh, &v);
+ if (ret)
+ goto error;
+ _write_sysconfig(v, oh);
+
error:
return ret;
}
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_CLOCKACTIVITY |
SYSC_HAS_SIDLEMODE | SYSC_HAS_ENAWAKEUP |
- SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE),
+ SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE |
+ SYSS_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART),
.sysc_fields = &omap_hwmod_sysc_type1,
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
};
/*
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_SIDLEMODE |
- SYSC_HAS_SOFTRESET),
+ SYSC_HAS_SOFTRESET | SYSC_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO |
MSTANDBY_SMART | MSTANDBY_SMART_WKUP),
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
};
/*
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_RESET_STATUS |
- SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET),
+ SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
+ SYSC_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO |
MSTANDBY_SMART | MSTANDBY_SMART_WKUP),
* hence HWMOD_SWSUP_MSTANDBY
*/
- /*
- * During system boot; If the hwmod framework resets the module
- * the module will have smart idle settings; which can lead to deadlock
- * (above Errata Id:i660); so, dont reset the module during boot;
- * Use HWMOD_INIT_NO_RESET.
- */
-
- .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY |
- HWMOD_INIT_NO_RESET,
+ .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY,
.main_clk = "l3init_60m_fclk",
.prcm = {
.omap4 = {
#include <mach/regs-ost.h>
#include <mach/reset.h>
+#include <mach/smemc.h>
unsigned int reset_status;
EXPORT_SYMBOL(reset_status);
writel_relaxed(OSSR_M3, OSSR);
/* ... in 100 ms */
writel_relaxed(readl_relaxed(OSCR) + 368640, OSMR3);
+ /*
+ * SDRAM hangs on watchdog reset on Marvell PXA270 (erratum 71)
+ * we put SDRAM into self-refresh to prevent that
+ */
+ while (1)
+ writel_relaxed(MDREFR_SLFRSH, MDREFR);
}
void pxa_restart(enum reboot_mode mode, const char *cmd)
break;
}
}
-
* Tosa Keyboard
*/
static const uint32_t tosakbd_keymap[] = {
- KEY(0, 2, KEY_W),
- KEY(0, 6, KEY_K),
- KEY(0, 7, KEY_BACKSPACE),
- KEY(0, 8, KEY_P),
- KEY(1, 1, KEY_Q),
- KEY(1, 2, KEY_E),
- KEY(1, 3, KEY_T),
- KEY(1, 4, KEY_Y),
- KEY(1, 6, KEY_O),
- KEY(1, 7, KEY_I),
- KEY(1, 8, KEY_COMMA),
- KEY(2, 1, KEY_A),
- KEY(2, 2, KEY_D),
- KEY(2, 3, KEY_G),
- KEY(2, 4, KEY_U),
- KEY(2, 6, KEY_L),
- KEY(2, 7, KEY_ENTER),
- KEY(2, 8, KEY_DOT),
- KEY(3, 1, KEY_Z),
- KEY(3, 2, KEY_C),
- KEY(3, 3, KEY_V),
- KEY(3, 4, KEY_J),
- KEY(3, 5, TOSA_KEY_ADDRESSBOOK),
- KEY(3, 6, TOSA_KEY_CANCEL),
- KEY(3, 7, TOSA_KEY_CENTER),
- KEY(3, 8, TOSA_KEY_OK),
- KEY(3, 9, KEY_LEFTSHIFT),
- KEY(4, 1, KEY_S),
- KEY(4, 2, KEY_R),
- KEY(4, 3, KEY_B),
- KEY(4, 4, KEY_N),
- KEY(4, 5, TOSA_KEY_CALENDAR),
- KEY(4, 6, TOSA_KEY_HOMEPAGE),
- KEY(4, 7, KEY_LEFTCTRL),
- KEY(4, 8, TOSA_KEY_LIGHT),
- KEY(4, 10, KEY_RIGHTSHIFT),
- KEY(5, 1, KEY_TAB),
- KEY(5, 2, KEY_SLASH),
- KEY(5, 3, KEY_H),
- KEY(5, 4, KEY_M),
- KEY(5, 5, TOSA_KEY_MENU),
- KEY(5, 7, KEY_UP),
- KEY(5, 11, TOSA_KEY_FN),
- KEY(6, 1, KEY_X),
- KEY(6, 2, KEY_F),
- KEY(6, 3, KEY_SPACE),
- KEY(6, 4, KEY_APOSTROPHE),
- KEY(6, 5, TOSA_KEY_MAIL),
- KEY(6, 6, KEY_LEFT),
- KEY(6, 7, KEY_DOWN),
- KEY(6, 8, KEY_RIGHT),
+ KEY(0, 1, KEY_W),
+ KEY(0, 5, KEY_K),
+ KEY(0, 6, KEY_BACKSPACE),
+ KEY(0, 7, KEY_P),
+ KEY(1, 0, KEY_Q),
+ KEY(1, 1, KEY_E),
+ KEY(1, 2, KEY_T),
+ KEY(1, 3, KEY_Y),
+ KEY(1, 5, KEY_O),
+ KEY(1, 6, KEY_I),
+ KEY(1, 7, KEY_COMMA),
+ KEY(2, 0, KEY_A),
+ KEY(2, 1, KEY_D),
+ KEY(2, 2, KEY_G),
+ KEY(2, 3, KEY_U),
+ KEY(2, 5, KEY_L),
+ KEY(2, 6, KEY_ENTER),
+ KEY(2, 7, KEY_DOT),
+ KEY(3, 0, KEY_Z),
+ KEY(3, 1, KEY_C),
+ KEY(3, 2, KEY_V),
+ KEY(3, 3, KEY_J),
+ KEY(3, 4, TOSA_KEY_ADDRESSBOOK),
+ KEY(3, 5, TOSA_KEY_CANCEL),
+ KEY(3, 6, TOSA_KEY_CENTER),
+ KEY(3, 7, TOSA_KEY_OK),
+ KEY(3, 8, KEY_LEFTSHIFT),
+ KEY(4, 0, KEY_S),
+ KEY(4, 1, KEY_R),
+ KEY(4, 2, KEY_B),
+ KEY(4, 3, KEY_N),
+ KEY(4, 4, TOSA_KEY_CALENDAR),
+ KEY(4, 5, TOSA_KEY_HOMEPAGE),
+ KEY(4, 6, KEY_LEFTCTRL),
+ KEY(4, 7, TOSA_KEY_LIGHT),
+ KEY(4, 9, KEY_RIGHTSHIFT),
+ KEY(5, 0, KEY_TAB),
+ KEY(5, 1, KEY_SLASH),
+ KEY(5, 2, KEY_H),
+ KEY(5, 3, KEY_M),
+ KEY(5, 4, TOSA_KEY_MENU),
+ KEY(5, 6, KEY_UP),
+ KEY(5, 10, TOSA_KEY_FN),
+ KEY(6, 0, KEY_X),
+ KEY(6, 1, KEY_F),
+ KEY(6, 2, KEY_SPACE),
+ KEY(6, 3, KEY_APOSTROPHE),
+ KEY(6, 4, TOSA_KEY_MAIL),
+ KEY(6, 5, KEY_LEFT),
+ KEY(6, 6, KEY_DOWN),
+ KEY(6, 7, KEY_RIGHT),
};
static struct matrix_keymap_data tosakbd_keymap_data = {
switch (tegra_chip_id) {
case TEGRA20:
tegra20_fuse_init_randomness();
+ break;
case TEGRA30:
case TEGRA114:
default:
tegra30_fuse_init_randomness();
+ break;
}
pr_info("Tegra Revision: %s SKU: %d CPU Process: %d Core Process: %d\n",
};
EXPORT_SYMBOL(arm_coherent_dma_ops);
+static int __dma_supported(struct device *dev, u64 mask, bool warn)
+{
+ unsigned long max_dma_pfn;
+
+ /*
+ * If the mask allows for more memory than we can address,
+ * and we actually have that much memory, then we must
+ * indicate that DMA to this device is not supported.
+ */
+ if (sizeof(mask) != sizeof(dma_addr_t) &&
+ mask > (dma_addr_t)~0 &&
+ dma_to_pfn(dev, ~0) < max_pfn) {
+ if (warn) {
+ dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n",
+ mask);
+ dev_warn(dev, "Driver did not use or check the return value from dma_set_coherent_mask()?\n");
+ }
+ return 0;
+ }
+
+ max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
+
+ /*
+ * Translate the device's DMA mask to a PFN limit. This
+ * PFN number includes the page which we can DMA to.
+ */
+ if (dma_to_pfn(dev, mask) < max_dma_pfn) {
+ if (warn)
+ dev_warn(dev, "Coherent DMA mask %#llx (pfn %#lx-%#lx) covers a smaller range of system memory than the DMA zone pfn 0x0-%#lx\n",
+ mask,
+ dma_to_pfn(dev, 0), dma_to_pfn(dev, mask) + 1,
+ max_dma_pfn + 1);
+ return 0;
+ }
+
+ return 1;
+}
+
static u64 get_coherent_dma_mask(struct device *dev)
{
u64 mask = (u64)DMA_BIT_MASK(32);
if (dev) {
- unsigned long max_dma_pfn;
-
mask = dev->coherent_dma_mask;
/*
return 0;
}
- max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
-
- /*
- * If the mask allows for more memory than we can address,
- * and we actually have that much memory, then fail the
- * allocation.
- */
- if (sizeof(mask) != sizeof(dma_addr_t) &&
- mask > (dma_addr_t)~0 &&
- dma_to_pfn(dev, ~0) > max_dma_pfn) {
- dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n",
- mask);
- dev_warn(dev, "Driver did not use or check the return value from dma_set_coherent_mask()?\n");
- return 0;
- }
-
- /*
- * Now check that the mask, when translated to a PFN,
- * fits within the allowable addresses which we can
- * allocate.
- */
- if (dma_to_pfn(dev, mask) < max_dma_pfn) {
- dev_warn(dev, "Coherent DMA mask %#llx (pfn %#lx-%#lx) covers a smaller range of system memory than the DMA zone pfn 0x0-%#lx\n",
- mask,
- dma_to_pfn(dev, 0), dma_to_pfn(dev, mask) + 1,
- arm_dma_pfn_limit + 1);
+ if (!__dma_supported(dev, mask, true))
return 0;
- }
}
return mask;
*/
int dma_supported(struct device *dev, u64 mask)
{
- unsigned long limit;
-
- /*
- * If the mask allows for more memory than we can address,
- * and we actually have that much memory, then we must
- * indicate that DMA to this device is not supported.
- */
- if (sizeof(mask) != sizeof(dma_addr_t) &&
- mask > (dma_addr_t)~0 &&
- dma_to_pfn(dev, ~0) > arm_dma_pfn_limit)
- return 0;
-
- /*
- * Translate the device's DMA mask to a PFN limit. This
- * PFN number includes the page which we can DMA to.
- */
- limit = dma_to_pfn(dev, mask);
-
- if (limit < arm_dma_pfn_limit)
- return 0;
-
- return 1;
+ return __dma_supported(dev, mask, false);
}
EXPORT_SYMBOL(dma_supported);
#ifdef CONFIG_ZONE_DMA
if (mdesc->dma_zone_size) {
arm_dma_zone_size = mdesc->dma_zone_size;
- arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
+ arm_dma_limit = __pv_phys_offset + arm_dma_zone_size - 1;
} else
arm_dma_limit = 0xffffffff;
arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT;
*/
retval = clk_round_rate(pll1,
CONFIG_BOARD_FAVR32_ABDAC_RATE * 256 * 16);
- if (retval < 0)
+ if (retval <= 0) {
+ retval = -EINVAL;
goto out_abdac;
+ }
retval = clk_set_rate(pll1, retval);
if (retval != 0)
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_CFI=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
static struct irqaction timer_irqaction = {
.handler = timer_interrupt,
/* Oprofile uses the same irq as the timer, so allow it to be shared */
- .flags = IRQF_TIMER | IRQF_DISABLED | IRQF_SHARED,
+ .flags = IRQF_TIMER | IRQF_SHARED,
.name = "avr32_comparator",
};
.enter = avr32_pm_enter,
};
-static unsigned long avr32_pm_offset(void *symbol)
+static unsigned long __init avr32_pm_offset(void *symbol)
{
extern u8 pm_exception[];
int64_t opal_pci_poll(uint64_t phb_id);
int64_t opal_return_cpu(void);
-int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, uint64_t *val);
+int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, __be64 *val);
int64_t opal_xscom_write(uint32_t gcid, uint32_t pcb_addr, uint64_t val);
int64_t opal_lpc_write(uint32_t chip_id, enum OpalLPCAddressType addr_type,
uint32_t addr, uint32_t data, uint32_t sz);
int64_t opal_lpc_read(uint32_t chip_id, enum OpalLPCAddressType addr_type,
- uint32_t addr, uint32_t *data, uint32_t sz);
+ uint32_t addr, __be32 *data, uint32_t sz);
int64_t opal_validate_flash(uint64_t buffer, uint32_t *size, uint32_t *result);
int64_t opal_manage_flash(uint8_t op);
int64_t opal_update_flash(uint64_t blk_list);
void crash_free_reserved_phys_range(unsigned long begin, unsigned long end)
{
unsigned long addr;
- const u32 *basep, *sizep;
+ const __be32 *basep, *sizep;
unsigned int rtas_start = 0, rtas_end = 0;
basep = of_get_property(rtas.dev, "linux,rtas-base", NULL);
sizep = of_get_property(rtas.dev, "rtas-size", NULL);
if (basep && sizep) {
- rtas_start = *basep;
- rtas_end = *basep + *sizep;
+ rtas_start = be32_to_cpup(basep);
+ rtas_end = rtas_start + be32_to_cpup(sizep);
}
for (addr = begin; addr < end; addr += PAGE_SIZE) {
flush_fp_to_thread(child);
if (fpidx < (PT_FPSCR - PT_FPR0))
- memcpy(&tmp, &child->thread.fp_state.fpr,
+ memcpy(&tmp, &child->thread.TS_FPR(fpidx),
sizeof(long));
else
tmp = child->thread.fp_state.fpscr;
flush_fp_to_thread(child);
if (fpidx < (PT_FPSCR - PT_FPR0))
- memcpy(&child->thread.fp_state.fpr, &data,
+ memcpy(&child->thread.TS_FPR(fpidx), &data,
sizeof(long));
else
child->thread.fp_state.fpscr = data;
if (machine_is(pseries) && firmware_has_feature(FW_FEATURE_LPAR) &&
(dn = of_find_node_by_path("/rtas"))) {
int num_addr_cell, num_size_cell, maxcpus;
- const unsigned int *ireg;
+ const __be32 *ireg;
num_addr_cell = of_n_addr_cells(dn);
num_size_cell = of_n_size_cells(dn);
if (!ireg)
goto out;
- maxcpus = ireg[num_addr_cell + num_size_cell];
+ maxcpus = be32_to_cpup(ireg + num_addr_cell + num_size_cell);
/* Double maxcpus for processors which have SMT capability */
if (cpu_has_feature(CPU_FTR_SMT))
int cpu_to_core_id(int cpu)
{
struct device_node *np;
- const int *reg;
+ const __be32 *reg;
int id = -1;
np = of_get_cpu_node(cpu, NULL);
if (!reg)
goto out;
- id = *reg;
+ id = be32_to_cpup(reg);
out:
of_node_put(np);
return id;
static u8 opal_lpc_inb(unsigned long port)
{
int64_t rc;
- uint32_t data;
+ __be32 data;
if (opal_lpc_chip_id < 0 || port > 0xffff)
return 0xff;
rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 1);
- return rc ? 0xff : data;
+ return rc ? 0xff : be32_to_cpu(data);
}
static __le16 __opal_lpc_inw(unsigned long port)
{
int64_t rc;
- uint32_t data;
+ __be32 data;
if (opal_lpc_chip_id < 0 || port > 0xfffe)
return 0xffff;
if (port & 1)
return (__le16)opal_lpc_inb(port) << 8 | opal_lpc_inb(port + 1);
rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 2);
- return rc ? 0xffff : data;
+ return rc ? 0xffff : be32_to_cpu(data);
}
static u16 opal_lpc_inw(unsigned long port)
{
static __le32 __opal_lpc_inl(unsigned long port)
{
int64_t rc;
- uint32_t data;
+ __be32 data;
if (opal_lpc_chip_id < 0 || port > 0xfffc)
return 0xffffffff;
(__le32)opal_lpc_inb(port + 2) << 8 |
opal_lpc_inb(port + 3);
rc = opal_lpc_read(opal_lpc_chip_id, OPAL_LPC_IO, port, &data, 4);
- return rc ? 0xffffffff : data;
+ return rc ? 0xffffffff : be32_to_cpu(data);
}
static u32 opal_lpc_inl(unsigned long port)
{
struct opal_scom_map *m = map;
int64_t rc;
+ __be64 v;
reg = opal_scom_unmangle(reg);
- rc = opal_xscom_read(m->chip, m->addr + reg, (uint64_t *)__pa(value));
+ rc = opal_xscom_read(m->chip, m->addr + reg, (__be64 *)__pa(&v));
+ *value = be64_to_cpu(v);
return opal_xscom_err_xlate(rc);
}
{
struct hvcall_ppp_data ppp_data;
struct device_node *root;
- const int *perf_level;
+ const __be32 *perf_level;
int rc;
rc = h_get_ppp(&ppp_data);
perf_level = of_get_property(root,
"ibm,partition-performance-parameters-level",
NULL);
- if (perf_level && (*perf_level >= 1)) {
+ if (perf_level && (be32_to_cpup(perf_level) >= 1)) {
seq_printf(m,
"physical_procs_allocated_to_virtualization=%d\n",
ppp_data.phys_platform_procs);
int partition_potential_processors;
int partition_active_processors;
struct device_node *rtas_node;
- const int *lrdrp = NULL;
+ const __be32 *lrdrp = NULL;
rtas_node = of_find_node_by_path("/rtas");
if (rtas_node)
if (lrdrp == NULL) {
partition_potential_processors = vdso_data->processorCount;
} else {
- partition_potential_processors = *(lrdrp + 4);
+ partition_potential_processors = be32_to_cpup(lrdrp + 4);
}
of_node_put(rtas_node);
const char *model = "";
const char *system_id = "";
const char *tmp;
- const unsigned int *lp_index_ptr;
+ const __be32 *lp_index_ptr;
unsigned int lp_index = 0;
seq_printf(m, "%s %s\n", MODULE_NAME, MODULE_VERS);
lp_index_ptr = of_get_property(rootdn, "ibm,partition-no",
NULL);
if (lp_index_ptr)
- lp_index = *lp_index_ptr;
+ lp_index = be32_to_cpup(lp_index_ptr);
of_node_put(rootdn);
}
seq_printf(m, "serial_number=%s\n", system_id);
{
struct device_node *dn;
struct pci_dn *pdn;
- const u32 *req_msi;
+ const __be32 *p;
+ u32 req_msi;
pdn = pci_get_pdn(pdev);
if (!pdn)
dn = pdn->node;
- req_msi = of_get_property(dn, prop_name, NULL);
- if (!req_msi) {
+ p = of_get_property(dn, prop_name, NULL);
+ if (!p) {
pr_debug("rtas_msi: No %s on %s\n", prop_name, dn->full_name);
return -ENOENT;
}
- if (*req_msi < nvec) {
+ req_msi = be32_to_cpup(p);
+ if (req_msi < nvec) {
pr_debug("rtas_msi: %s requests < %d MSIs\n", prop_name, nvec);
- if (*req_msi == 0) /* Be paranoid */
+ if (req_msi == 0) /* Be paranoid */
return -ENOSPC;
- return *req_msi;
+ return req_msi;
}
return 0;
static struct device_node *find_pe_total_msi(struct pci_dev *dev, int *total)
{
struct device_node *dn;
- const u32 *p;
+ const __be32 *p;
dn = of_node_get(pci_device_to_OF_node(dev));
while (dn) {
if (p) {
pr_debug("rtas_msi: found prop on dn %s\n",
dn->full_name);
- *total = *p;
+ *total = be32_to_cpup(p);
return dn;
}
static void *count_non_bridge_devices(struct device_node *dn, void *data)
{
struct msi_counts *counts = data;
- const u32 *p;
+ const __be32 *p;
u32 class;
pr_debug("rtas_msi: counting %s\n", dn->full_name);
p = of_get_property(dn, "class-code", NULL);
- class = p ? *p : 0;
+ class = p ? be32_to_cpup(p) : 0;
if ((class >> 8) != PCI_CLASS_BRIDGE_PCI)
counts->num_devices++;
static void *count_spare_msis(struct device_node *dn, void *data)
{
struct msi_counts *counts = data;
- const u32 *p;
+ const __be32 *p;
int req;
if (dn == counts->requestor)
req = 0;
p = of_get_property(dn, "ibm,req#msi", NULL);
if (p)
- req = *p;
+ req = be32_to_cpup(p);
p = of_get_property(dn, "ibm,req#msi-x", NULL);
if (p)
- req = max(req, (int)*p);
+ req = max(req, (int)be32_to_cpup(p));
}
if (req < counts->quota)
static DEFINE_SPINLOCK(nvram_lock);
struct err_log_info {
- int error_type;
- unsigned int seq_num;
+ __be32 error_type;
+ __be32 seq_num;
};
struct nvram_os_partition {
};
struct oops_log_info {
- u16 version;
- u16 report_length;
- u64 timestamp;
+ __be16 version;
+ __be16 report_length;
+ __be64 timestamp;
} __attribute__((packed));
static void oops_to_nvram(struct kmsg_dumper *dumper,
length = part->size;
}
- info.error_type = err_type;
- info.seq_num = error_log_cnt;
+ info.error_type = cpu_to_be32(err_type);
+ info.seq_num = cpu_to_be32(error_log_cnt);
tmp_index = part->index;
}
if (part->os_partition) {
- *error_log_cnt = info.seq_num;
- *err_type = info.error_type;
+ *error_log_cnt = be32_to_cpu(info.seq_num);
+ *err_type = be32_to_cpu(info.error_type);
}
return 0;
pr_err("nvram: logging uncompressed oops/panic report\n");
return -1;
}
- oops_hdr->version = OOPS_HDR_VERSION;
- oops_hdr->report_length = (u16) zipped_len;
- oops_hdr->timestamp = get_seconds();
+ oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION);
+ oops_hdr->report_length = cpu_to_be16(zipped_len);
+ oops_hdr->timestamp = cpu_to_be64(get_seconds());
return 0;
}
clobbering_unread_rtas_event())
return -1;
- oops_hdr->version = OOPS_HDR_VERSION;
- oops_hdr->report_length = (u16) size;
- oops_hdr->timestamp = get_seconds();
+ oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION);
+ oops_hdr->report_length = cpu_to_be16(size);
+ oops_hdr->timestamp = cpu_to_be64(get_seconds());
if (compressed)
err_type = ERR_TYPE_KERNEL_PANIC_GZ;
size_t length, hdr_size;
oops_hdr = (struct oops_log_info *)buff;
- if (oops_hdr->version < OOPS_HDR_VERSION) {
+ if (be16_to_cpu(oops_hdr->version) < OOPS_HDR_VERSION) {
/* Old format oops header had 2-byte record size */
hdr_size = sizeof(u16);
- length = oops_hdr->version;
+ length = be16_to_cpu(oops_hdr->version);
time->tv_sec = 0;
time->tv_nsec = 0;
} else {
hdr_size = sizeof(*oops_hdr);
- length = oops_hdr->report_length;
- time->tv_sec = oops_hdr->timestamp;
+ length = be16_to_cpu(oops_hdr->report_length);
+ time->tv_sec = be64_to_cpu(oops_hdr->timestamp);
time->tv_nsec = 0;
}
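The version test above doubles as format detection: pre-versioned oops records began with a 2-byte record length in the slot the 2-byte version now occupies, and any genuine version is at least OOPS_HDR_VERSION, so a small value there both identifies the old format and is the record length. A toy model, with the version constant assumed for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define OOPS_HDR_VERSION 5   /* assumed value, for illustration only */

    int main(void)
    {
            uint16_t first_field = 3;   /* hypothetical old-format record */

            if (first_field < OOPS_HDR_VERSION)
                    printf("old format, record length %u\n", first_field);
            else
                    printf("new format, version %u\n", first_field);
            return 0;
    }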
*buf = kmalloc(length, GFP_KERNEL);
kmsg_dump_get_buffer(dumper, false,
oops_data, oops_data_sz, &text_len);
err_type = ERR_TYPE_KERNEL_PANIC;
- oops_hdr->version = OOPS_HDR_VERSION;
- oops_hdr->report_length = (u16) text_len;
- oops_hdr->timestamp = get_seconds();
+ oops_hdr->version = cpu_to_be16(OOPS_HDR_VERSION);
+ oops_hdr->report_length = cpu_to_be16(text_len);
+ oops_hdr->timestamp = cpu_to_be64(get_seconds());
}
(void) nvram_write_os_partition(&oops_log_partition, oops_buf,
- (int) (sizeof(*oops_hdr) + oops_hdr->report_length), err_type,
+ (int) (sizeof(*oops_hdr) + text_len), err_type,
++oops_count);
spin_unlock_irqrestore(&lock, flags);
{
struct device_node *dn, *pdn;
struct pci_bus *bus;
- const uint32_t *pcie_link_speed_stats;
+ const __be32 *pcie_link_speed_stats;
bus = bridge->bus;
return 0;
for (pdn = dn; pdn != NULL; pdn = of_get_next_parent(pdn)) {
- pcie_link_speed_stats = (const uint32_t *) of_get_property(pdn,
+ pcie_link_speed_stats = of_get_property(pdn,
"ibm,pcie-link-speed-stats", NULL);
if (pcie_link_speed_stats)
break;
return 0;
}
- switch (pcie_link_speed_stats[0]) {
+ switch (be32_to_cpup(pcie_link_speed_stats)) {
case 0x01:
bus->max_bus_speed = PCIE_SPEED_2_5GT;
break;
break;
}
- switch (pcie_link_speed_stats[1]) {
+ switch (be32_to_cpup(pcie_link_speed_stats + 1)) {
case 0x01:
bus->cur_bus_speed = PCIE_SPEED_2_5GT;
break;
Even if you don't know what to do here, say Y.
config NR_CPUS
- int "Maximum number of CPUs (2-64)"
- range 2 64
+ int "Maximum number of CPUs (2-256)"
+ range 2 256
depends on SMP
default "32" if !64BIT
default "64" if 64BIT
help
This allows you to specify the maximum number of CPUs which this
- kernel will support. The maximum supported value is 64 and the
+ kernel will support. The maximum supported value is 256 and the
minimum value which makes sense is 2.
This is purely to save memory - each supported CPU adds
#include <linux/types.h>
#include <asm/chpid.h>
+#include <asm/cpu.h>
#define SCLP_CHP_INFO_MASK_SIZE 32
unsigned int standby;
unsigned int combined;
int has_cpu_type;
- struct sclp_cpu_entry cpu[255];
+ struct sclp_cpu_entry cpu[MAX_CPU_ADDRESS + 1];
};
int sclp_get_cpu_info(struct sclp_cpu_info *info);
/* constants used by the vdso */
DEFINE(__CLOCK_REALTIME, CLOCK_REALTIME);
DEFINE(__CLOCK_MONOTONIC, CLOCK_MONOTONIC);
+ DEFINE(__CLOCK_THREAD_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID);
DEFINE(__CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
BLANK();
/* idle data offsets */
psal[i] = 0x80000000;
lowcore->paste[4] = (u32)(addr_t) psal;
- psal[0] = 0x20000000;
+ psal[0] = 0x02000000;
psal[2] = (u32)(addr_t) aste;
*(unsigned long *) (aste + 2) = segment_table +
_ASCE_TABLE_LENGTH + _ASCE_USER_BITS + _ASCE_TYPE_SEGMENT;
jnm 3f
a %r0,__VDSO_TK_MULT(%r5)
3: alr %r0,%r2
- al %r0,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */
- al %r1,__VDSO_XTIME_NSEC+4(%r5)
- brc 12,4f
- ahi %r0,1
-4: al %r0,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic.nsec */
+ al %r0,__VDSO_WTOM_NSEC(%r5)
al %r1,__VDSO_WTOM_NSEC+4(%r5)
brc 12,5f
ahi %r0,1
5: l %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
srdl %r0,0(%r2) /* >> tk->shift */
- l %r2,__VDSO_XTIME_SEC+4(%r5)
- al %r2,__VDSO_WTOM_SEC+4(%r5)
+ l %r2,__VDSO_WTOM_SEC+4(%r5)
cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */
jne 1b
basr %r5,0
je 0f
cghi %r2,__CLOCK_MONOTONIC
je 0f
- cghi %r2,-2 /* CLOCK_THREAD_CPUTIME_ID for this thread */
+ cghi %r2,__CLOCK_THREAD_CPUTIME_ID
+ je 0f
+ cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
jne 2f
larl %r5,_vdso_data
icm %r0,15,__LC_ECTG_OK(%r5)
larl %r5,_vdso_data
cghi %r2,__CLOCK_REALTIME
je 4f
- cghi %r2,-2 /* CLOCK_THREAD_CPUTIME_ID for this thread */
+ cghi %r2,__CLOCK_THREAD_CPUTIME_ID
+ je 9f
+ cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */
je 9f
cghi %r2,__CLOCK_MONOTONIC
jne 12f
jnz 0b
stck 48(%r15) /* Store TOD clock */
lgf %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */
- lg %r0,__VDSO_XTIME_SEC(%r5) /* tk->xtime_sec */
- alg %r0,__VDSO_WTOM_SEC(%r5) /* + wall_to_monotonic.sec */
+ lg %r0,__VDSO_WTOM_SEC(%r5)
lg %r1,48(%r15)
sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */
msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */
- alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */
- alg %r1,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic.nsec */
+ alg %r1,__VDSO_WTOM_NSEC(%r5)
srlg %r1,%r1,0(%r2) /* >> tk->shift */
clg %r4,__VDSO_UPD_COUNT(%r5) /* check update counter */
jne 0b
checksum.o strlen.o div64.o div64-generic.o
# Extracted from libgcc
-lib-y += movmem.o ashldi3.o ashrdi3.o lshrdi3.o \
+obj-y += movmem.o ashldi3.o ashrdi3.o lshrdi3.o \
ashlsi3.o ashrsi3.o ashiftrt.o lshrsi3.o \
udiv_qrnnd.o
}
#define pte_accessible pte_accessible
-static inline unsigned long pte_accessible(pte_t a)
+static inline unsigned long pte_accessible(struct mm_struct *mm, pte_t a)
{
return pte_val(a) & _PAGE_VALID;
}
* SUN4V NOTE: _PAGE_VALID is the same value in both the SUN4U
* and SUN4V pte layout, so this inline test is fine.
*/
- if (likely(mm != &init_mm) && pte_accessible(orig))
+ if (likely(mm != &init_mm) && pte_accessible(mm, orig))
tlb_batch_add(mm, addr, ptep, orig, fullmm);
}
KBUILD_CFLAGS += -msoft-float -mregparm=3 -freg-struct-return
- # Don't autogenerate SSE instructions
- KBUILD_CFLAGS += -mno-sse
+ # Don't autogenerate MMX or SSE instructions
+ KBUILD_CFLAGS += -mno-mmx -mno-sse
# Never want PIC in a 32-bit kernel, prevent breakage with GCC built
# with nonstandard options
KBUILD_AFLAGS += -m64
KBUILD_CFLAGS += -m64
- # Don't autogenerate SSE instructions
- KBUILD_CFLAGS += -mno-sse
+ # Don't autogenerate MMX or SSE instructions
+ KBUILD_CFLAGS += -mno-mmx -mno-sse
# Use -mpreferred-stack-boundary=3 if supported.
KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
# How to compile the 16-bit code. Note we always compile for -march=i386,
# that way we can complain to the user if the CPU is insufficient.
-KBUILD_CFLAGS := $(USERINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
+KBUILD_CFLAGS := $(USERINCLUDE) -m32 -g -Os -D_SETUP -D__KERNEL__ \
-DDISABLE_BRANCH_PROFILING \
-Wall -Wstrict-prototypes \
-march=i386 -mregparm=3 \
-include $(srctree)/$(src)/code16gcc.h \
-fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse \
$(call cc-option, -ffreestanding) \
$(call cc-option, -fno-toplevel-reorder,\
- $(call cc-option, -fno-unit-at-a-time)) \
+ $(call cc-option, -fno-unit-at-a-time)) \
$(call cc-option, -fno-stack-protector) \
$(call cc-option, -mpreferred-stack-boundary=2)
-KBUILD_CFLAGS += $(call cc-option, -m32)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
GCOV_PROFILE := n
cflags-$(CONFIG_X86_32) := -march=i386
cflags-$(CONFIG_X86_64) := -mcmodel=small
KBUILD_CFLAGS += $(cflags-y)
+KBUILD_CFLAGS += -mno-mmx -mno-sse
KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
}
#define pte_accessible pte_accessible
-static inline int pte_accessible(pte_t a)
+static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
{
- return pte_flags(a) & _PAGE_PRESENT;
+ if (pte_flags(a) & _PAGE_PRESENT)
+ return true;
+
+ if ((pte_flags(a) & (_PAGE_PROTNONE | _PAGE_NUMA)) &&
+ mm_tlb_flush_pending(mm))
+ return true;
+
+ return false;
}
static inline int pte_hidden(pte_t pte)
__EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK, \
HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST_HSW)
-#define EVENT_CONSTRAINT_END \
- EVENT_CONSTRAINT(0, 0, 0)
+/*
+ * We define the end marker as having a weight of -1
+ * to enable blacklisting of events using a counter bitmask
+ * of zero and thus a weight of zero.
+ * The end marker has a weight that cannot possibly be
+ * obtained from counting the bits in the bitmask.
+ */
+#define EVENT_CONSTRAINT_END { .weight = -1 }
+/*
+ * Check for end marker with weight == -1
+ */
#define for_each_event_constraint(e, c) \
- for ((e) = (c); (e)->weight; (e)++)
+ for ((e) = (c); (e)->weight != -1; (e)++)
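A standalone sketch of the sentinel change: terminating on weight == -1 instead of weight == 0 lets a real table entry carry an empty counter bitmask (and hence weight 0) to blacklist an event without cutting the walk short. Table contents are invented for the example:

    #include <stdio.h>

    struct event_constraint {
            unsigned long idxmsk;   /* allowed-counter bitmask */
            int weight;             /* HWEIGHT(idxmsk), or -1 for the end marker */
    };

    static const struct event_constraint table[] = {
            { 0x3, 2 },
            { 0x0, 0 },    /* blacklisted event: no counters allowed */
            { 0x1, 1 },
            { 0x0, -1 },   /* EVENT_CONSTRAINT_END */
    };

    int main(void)
    {
            const struct event_constraint *c;

            /* The old weight != 0 test would have stopped at the second
             * entry; the new test visits it and stops at the marker. */
            for (c = table; c->weight != -1; c++)
                    printf("idxmsk=%#lx weight=%d\n", c->idxmsk, c->weight);
            return 0;
    }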
/*
* Extra registers for specific events.
return (kvm_apic_get_reg(apic, APIC_ID) >> 24) & 0xff;
}
+#define KVM_X2APIC_CID_BITS 0
+
static void recalculate_apic_map(struct kvm *kvm)
{
struct kvm_apic_map *new, *old = NULL;
if (apic_x2apic_mode(apic)) {
new->ldr_bits = 32;
new->cid_shift = 16;
- new->cid_mask = new->lid_mask = 0xffff;
+ new->cid_mask = (1 << KVM_X2APIC_CID_BITS) - 1;
+ new->lid_mask = 0xffff;
} else if (kvm_apic_sw_enabled(apic) &&
!new->cid_mask /* flat mode */ &&
kvm_apic_get_reg(apic, APIC_DFR) == APIC_DFR_CLUSTER) {
ASSERT(apic != NULL);
/* if initial count is 0, current count should also be 0 */
- if (kvm_apic_get_reg(apic, APIC_TMICT) == 0)
+ if (kvm_apic_get_reg(apic, APIC_TMICT) == 0 ||
+ apic->lapic_timer.period == 0)
return 0;
remaining = hrtimer_get_remaining(&apic->lapic_timer.timer);
void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu)
{
u32 data;
- void *vapic;
if (test_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention))
apic_sync_pv_eoi_from_guest(vcpu, vcpu->arch.apic);
if (!test_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention))
return;
- vapic = kmap_atomic(vcpu->arch.apic->vapic_page);
- data = *(u32 *)(vapic + offset_in_page(vcpu->arch.apic->vapic_addr));
- kunmap_atomic(vapic);
+ kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache, &data,
+ sizeof(u32));
apic_set_tpr(vcpu->arch.apic, data & 0xff);
}
u32 data, tpr;
int max_irr, max_isr;
struct kvm_lapic *apic = vcpu->arch.apic;
- void *vapic;
apic_sync_pv_eoi_to_guest(vcpu, apic);
max_isr = 0;
data = (tpr & 0xff) | ((max_isr & 0xf0) << 8) | (max_irr << 24);
- vapic = kmap_atomic(vcpu->arch.apic->vapic_page);
- *(u32 *)(vapic + offset_in_page(vcpu->arch.apic->vapic_addr)) = data;
- kunmap_atomic(vapic);
+ kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache, &data,
+ sizeof(u32));
}
-void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
+int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr)
{
- vcpu->arch.apic->vapic_addr = vapic_addr;
- if (vapic_addr)
+ if (vapic_addr) {
+ if (kvm_gfn_to_hva_cache_init(vcpu->kvm,
+ &vcpu->arch.apic->vapic_cache,
+ vapic_addr, sizeof(u32)))
+ return -EINVAL;
__set_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);
- else
+ } else {
__clear_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention);
+ }
+
+ vcpu->arch.apic->vapic_addr = vapic_addr;
+ return 0;
}
int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data)
*/
void *regs;
gpa_t vapic_addr;
- struct page *vapic_page;
+ struct gfn_to_hva_cache vapic_cache;
unsigned long pending_events;
unsigned int sipi_vector;
};
void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset);
void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector);
-void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
+int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu);
r = -EFAULT;
if (copy_from_user(&va, argp, sizeof va))
goto out;
- r = 0;
- kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
+ r = kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
break;
}
case KVM_X86_SETUP_MCE: {
!kvm_event_needs_reinjection(vcpu);
}
-static int vapic_enter(struct kvm_vcpu *vcpu)
-{
- struct kvm_lapic *apic = vcpu->arch.apic;
- struct page *page;
-
- if (!apic || !apic->vapic_addr)
- return 0;
-
- page = gfn_to_page(vcpu->kvm, apic->vapic_addr >> PAGE_SHIFT);
- if (is_error_page(page))
- return -EFAULT;
-
- vcpu->arch.apic->vapic_page = page;
- return 0;
-}
-
-static void vapic_exit(struct kvm_vcpu *vcpu)
-{
- struct kvm_lapic *apic = vcpu->arch.apic;
- int idx;
-
- if (!apic || !apic->vapic_addr)
- return;
-
- idx = srcu_read_lock(&vcpu->kvm->srcu);
- kvm_release_page_dirty(apic->vapic_page);
- mark_page_dirty(vcpu->kvm, apic->vapic_addr >> PAGE_SHIFT);
- srcu_read_unlock(&vcpu->kvm->srcu, idx);
-}
-
static void update_cr8_intercept(struct kvm_vcpu *vcpu)
{
int max_irr, tpr;
struct kvm *kvm = vcpu->kvm;
vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
- r = vapic_enter(vcpu);
- if (r) {
- srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
- return r;
- }
r = 1;
while (r > 0) {
srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
- vapic_exit(vcpu);
-
return r;
}
pte_t pte = gup_get_pte(ptep);
struct page *page;
+ /* Similar to the PMD case, NUMA hinting must take slow path */
+ if (pte_numa(pte)) {
+ pte_unmap(ptep);
+ return 0;
+ }
+
if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) {
pte_unmap(ptep);
return 0;
if (pmd_none(pmd) || pmd_trans_splitting(pmd))
return 0;
if (unlikely(pmd_large(pmd))) {
+ /*
+ * NUMA hinting faults need to be handled in the GUP
+ * slowpath for accounting purposes and so that they
+ * can be serialised against THP migration.
+ */
+ if (pmd_numa(pmd))
+ return 0;
if (!gup_huge_pmd(pmd, addr, next, write, pages, nr))
return 0;
} else {
set_bit(EFI_MEMMAP, &x86_efi_facility);
-#ifdef CONFIG_X86_32
- if (efi_is_native()) {
- x86_platform.get_wallclock = efi_get_time;
- x86_platform.set_wallclock = efi_set_rtc_mmss;
- }
-#endif
-
#if EFI_DEBUG
print_efi_memmap();
#endif
unsigned long status;
bcp = &per_cpu(bau_control, cpu);
- stat = bcp->statp;
- stat->s_enters++;
if (bcp->nobau)
return cpumask;
+ stat = bcp->statp;
+ stat->s_enters++;
+
if (bcp->busy) {
descriptor_status =
read_lmmr(UVH_LB_BAU_SB_ACTIVATION_STATUS_0);
-march=i386 -mregparm=3 \
-include $(srctree)/$(src)/../../boot/code16gcc.h \
-fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse \
$(call cc-option, -ffreestanding) \
$(call cc-option, -fno-toplevel-reorder,\
- $(call cc-option, -fno-unit-at-a-time)) \
+ $(call cc-option, -fno-unit-at-a-time)) \
$(call cc-option, -fno-stack-protector) \
$(call cc-option, -mpreferred-stack-boundary=2)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
BUG_ON(reg_size != 4);
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
ret = clk_enable(ctx->clk);
if (ret < 0)
return ret;
offset += ctx->val_bytes;
}
- if (ctx->clk)
+ if (!IS_ERR(ctx->clk))
clk_disable(ctx->clk);
return 0;
BUG_ON(reg_size != 4);
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
ret = clk_enable(ctx->clk);
if (ret < 0)
return ret;
offset += ctx->val_bytes;
}
- if (ctx->clk)
+ if (!IS_ERR(ctx->clk))
clk_disable(ctx->clk);
return 0;
{
struct regmap_mmio_context *ctx = context;
- if (ctx->clk) {
+ if (!IS_ERR(ctx->clk)) {
clk_unprepare(ctx->clk);
clk_put(ctx->clk);
}
ctx->regs = regs;
ctx->val_bytes = config->val_bits / 8;
+ ctx->clk = ERR_PTR(-ENODEV);
if (clk_id == NULL)
return ctx;
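These regmap-mmio hunks move the optional clock from a NULL test to the kernel's ERR_PTR()/IS_ERR() convention, seeding ctx->clk with ERR_PTR(-ENODEV) so that "no clock" is distinguishable from a valid handle. A userspace model of the idiom, with the encoding simplified:

    #include <stdio.h>

    /* Model of ERR_PTR()/IS_ERR(): small negative errnos are encoded at
     * the top of the pointer range, so one pointer field can hold either
     * a valid object or the reason none exists. */
    #define MAX_ERRNO 4095
    #define ERR_PTR(err) ((void *)(long)(err))
    #define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

    struct ctx { void *clk; };

    int main(void)
    {
            struct ctx ctx = { .clk = ERR_PTR(-19) };   /* -ENODEV: no clock */

            /* A plain NULL test would treat the encoded error as a valid
             * clock; the IS_ERR() test the patch switches to does not. */
            if (!IS_ERR(ctx.clk))
                    printf("enable clock\n");
            else
                    printf("no clock, skip\n");
            return 0;
    }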
val + (i * val_bytes),
val_bytes);
if (ret != 0)
- return ret;
+ goto out;
}
} else {
ret = _regmap_raw_write(map, reg, wval, val_bytes * val_count);
/**
* regmap_read(): Read a value from a single register
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: Register to be read from
* @val: Pointer to store read value
*
/**
* regmap_raw_read(): Read raw data from the device
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: First register to be read from
* @val: Pointer to store read value
* @val_len: Size of data to read
/**
* regmap_bulk_read(): Read multiple registers from the device
*
- * @map: Register map to write to
+ * @map: Register map to read from
* @reg: First register to be read from
* @val: Pointer to store read value, in native register size for device
* @val_count: Number of registers to read
spin_lock_init(&nullb->lock);
+ if (queue_mode == NULL_Q_MQ && use_per_node_hctx)
+ submit_queues = nr_online_nodes;
+
if (setup_queues(nullb))
goto err;
if (queue_mode == NULL_Q_MQ) {
null_mq_reg.numa_node = home_node;
null_mq_reg.queue_depth = hw_queue_depth;
+ null_mq_reg.nr_hw_queues = submit_queues;
if (use_per_node_hctx) {
null_mq_reg.ops->alloc_hctx = null_alloc_hctx;
null_mq_reg.ops->free_hctx = null_free_hctx;
-
- null_mq_reg.nr_hw_queues = nr_online_nodes;
} else {
null_mq_reg.ops->alloc_hctx = blk_mq_alloc_single_hw_queue;
null_mq_reg.ops->free_hctx = blk_mq_free_single_hw_queue;
-
- null_mq_reg.nr_hw_queues = submit_queues;
}
nullb->q = blk_mq_init_queue(&null_mq_reg, nullb);
struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw);
int ret;
- ret = regmap_update_bits(s2mps11->iodev->regmap,
+ ret = regmap_update_bits(s2mps11->iodev->regmap_pmic,
S2MPS11_REG_RTC_CTRL,
s2mps11->mask, s2mps11->mask);
if (!ret)
struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw);
int ret;
- ret = regmap_update_bits(s2mps11->iodev->regmap, S2MPS11_REG_RTC_CTRL,
+ ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, S2MPS11_REG_RTC_CTRL,
s2mps11->mask, ~s2mps11->mask);
if (!ret)
s2mps11_clk->hw.init = &s2mps11_clks_init[i];
s2mps11_clk->mask = 1 << i;
- ret = regmap_read(s2mps11_clk->iodev->regmap,
+ ret = regmap_read(s2mps11_clk->iodev->regmap_pmic,
S2MPS11_REG_RTC_CTRL, &val);
if (ret < 0)
goto err_reg;
config CLKSRC_EFM32
bool "Clocksource for Energy Micro's EFM32 SoCs" if !ARCH_EFM32
depends on OF && ARM && (ARCH_EFM32 || COMPILE_TEST)
+ select CLKSRC_MMIO
default ARCH_EFM32
help
Support to use the timers of EFM32 SoCs as clock source and clock
init_func = match->data;
init_func(np);
- of_node_put(np);
}
}
static u64 read_sched_clock(void)
{
- return __raw_readl(sched_io_base);
+ return ~__raw_readl(sched_io_base);
}
static const struct of_device_id sptimer_ids[] __initconst = {
{ .compatible = "picochip,pc3x2-rtc" },
- { .compatible = "snps,dw-apb-timer-sp" },
{ /* Sentinel */ },
};
num_called++;
}
CLOCKSOURCE_OF_DECLARE(pc3x2_timer, "picochip,pc3x2-timer", dw_apb_timer_init);
-CLOCKSOURCE_OF_DECLARE(apb_timer, "snps,dw-apb-timer-osc", dw_apb_timer_init);
+CLOCKSOURCE_OF_DECLARE(apb_timer_osc, "snps,dw-apb-timer-osc", dw_apb_timer_init);
+CLOCKSOURCE_OF_DECLARE(apb_timer_sp, "snps,dw-apb-timer-sp", dw_apb_timer_init);
+CLOCKSOURCE_OF_DECLARE(apb_timer, "snps,dw-apb-timer", dw_apb_timer_init);
writel(TIMER_CTL_CLK_SRC(TIMER_CTL_CLK_SRC_OSC24M),
timer_base + TIMER_CTL_REG(0));
+ /* Make sure timer is stopped before playing with interrupts */
+ sun4i_clkevt_time_stop(0);
+
ret = setup_irq(irq, &sun4i_timer_irq);
if (ret)
pr_warn("failed to setup irq %d\n", irq);
ticks_per_jiffy = (timer_clk + HZ / 2) / HZ;
- /*
- * Set scale and timer for sched_clock.
- */
- sched_clock_register(armada_370_xp_read_sched_clock, 32, timer_clk);
-
/*
* Setup free-running clocksource timer (interrupts
* disabled).
timer_ctrl_clrset(0, TIMER0_EN | TIMER0_RELOAD_EN |
TIMER0_DIV(TIMER_DIVIDER_SHIFT));
+ /*
+ * Set scale and timer for sched_clock.
+ */
+ sched_clock_register(armada_370_xp_read_sched_clock, 32, timer_clk);
+
clocksource_mmio_init(timer_base + TIMER0_VAL_OFF,
"armada_370_xp_clocksource",
timer_clk, 300, 32, clocksource_mmio_readl_down);
return 0;
}
-static int __init at32_cpufreq_driver_init(struct cpufreq_policy *policy)
+static int at32_cpufreq_driver_init(struct cpufreq_policy *policy)
{
unsigned int frequency, rate, min_freq;
int retval, steps, i;
struct pl08x_txd *txd = to_pl08x_txd(&vd->tx);
struct pl08x_dma_chan *plchan = to_pl08x_chan(vd->tx.chan);
- dma_descriptor_unmap(txd);
+ dma_descriptor_unmap(&vd->tx);
if (!txd->done)
pl08x_release_mux(plchan);
}
}
+ platform_set_drvdata(op, pdev);
dev_info(pdev->device.dev, "initialized %d channels\n", dma_channels);
return 0;
}
s3cchan->state = S3C24XX_DMA_CHAN_IDLE;
}
-static void s3c24xx_dma_unmap_buffers(struct s3c24xx_txd *txd)
-{
- struct device *dev = txd->vd.tx.chan->device->dev;
- struct s3c24xx_sg *dsg;
-
- if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_SRC_UNMAP)) {
- if (txd->vd.tx.flags & DMA_COMPL_SRC_UNMAP_SINGLE)
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_single(dev, dsg->src_addr, dsg->len,
- DMA_TO_DEVICE);
- else {
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_page(dev, dsg->src_addr, dsg->len,
- DMA_TO_DEVICE);
- }
- }
-
- if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
- if (txd->vd.tx.flags & DMA_COMPL_DEST_UNMAP_SINGLE)
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_single(dev, dsg->dst_addr, dsg->len,
- DMA_FROM_DEVICE);
- else
- list_for_each_entry(dsg, &txd->dsg_list, node)
- dma_unmap_page(dev, dsg->dst_addr, dsg->len,
- DMA_FROM_DEVICE);
- }
-}
-
static void s3c24xx_dma_desc_free(struct virt_dma_desc *vd)
{
struct s3c24xx_txd *txd = to_s3c24xx_txd(&vd->tx);
struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(vd->tx.chan);
if (!s3cchan->slave)
- s3c24xx_dma_unmap_buffers(txd);
+ dma_descriptor_unmap(&vd->tx);
s3c24xx_dma_free_txd(txd);
}
spin_lock_irqsave(&s3cchan->vc.lock, flags);
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret == DMA_SUCCESS) {
+ if (ret == DMA_COMPLETE) {
spin_unlock_irqrestore(&s3cchan->vc.lock, flags);
return ret;
}
#define HPB_DMAE_DSTPR_DMSTP BIT(0)
/* DMA status register (DSTSR) bits */
+#define HPB_DMAE_DSTSR_DQSTS BIT(2)
#define HPB_DMAE_DSTSR_DMSTS BIT(0)
/* DMA common registers */
ch_reg_write(chan, HPB_DMAE_DCMDR_DQEND, HPB_DMAE_DCMDR);
ch_reg_write(chan, HPB_DMAE_DSTPR_DMSTP, HPB_DMAE_DSTPR);
+
+ chan->plane_idx = 0;
+ chan->first_desc = true;
}
static const struct hpb_dmae_slave_config *
struct hpb_dmae_chan *chan = to_chan(schan);
u32 dstsr = ch_reg_read(chan, HPB_DMAE_DSTSR);
- return (dstsr & HPB_DMAE_DSTSR_DMSTS) == HPB_DMAE_DSTSR_DMSTS;
+ if (chan->xfer_mode == XFER_DOUBLE)
+ return dstsr & HPB_DMAE_DSTSR_DQSTS;
+ else
+ return dstsr & HPB_DMAE_DSTSR_DMSTS;
}
static int
}
schan = &new_hpb_chan->shdma_chan;
+ schan->max_xfer_len = HPB_DMA_TCR_MAX;
+
shdma_chan_probe(sdev, schan, id);
if (pdev->id >= 0)
u32 tad_offset;
u32 rir_way;
u32 mb, kb;
- u64 ch_addr, offset, limit, prv = 0;
+ u64 ch_addr, offset, limit = 0, prv = 0;
/*
* NOTE: we assume for now that only irqs in the first gpio_chip
* can provide direct-mapped IRQs to AINTC (up to 32 GPIOs).
*/
- if (offset < d->irq_base)
+ if (offset < d->gpio_unbanked)
return d->gpio_irq + offset;
else
return -ENODEV;
/* pass "bank 0" GPIO IRQs to AINTC */
chips[0].chip.to_irq = gpio_to_irq_unbanked;
+ chips[0].gpio_irq = bank_irq;
+ chips[0].gpio_unbanked = pdata->gpio_unbanked;
binten = BIT(0);
/* AINTC handles mask/unmask; GPIO handles triggering */
spin_lock_irqsave(&tlmm_lock, irq_flags);
writel(TARGET_PROC_NONE, GPIO_INTR_CFG_SU(gpio));
- clear_gpio_bits(INTR_RAW_STATUS_EN | INTR_ENABLE, GPIO_INTR_CFG(gpio));
+ clear_gpio_bits(BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE), GPIO_INTR_CFG(gpio));
__clear_bit(gpio, msm_gpio.enabled_irqs);
spin_unlock_irqrestore(&tlmm_lock, irq_flags);
}
spin_lock_irqsave(&tlmm_lock, irq_flags);
__set_bit(gpio, msm_gpio.enabled_irqs);
- set_gpio_bits(INTR_RAW_STATUS_EN | INTR_ENABLE, GPIO_INTR_CFG(gpio));
+ set_gpio_bits(BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE), GPIO_INTR_CFG(gpio));
writel(TARGET_PROC_SCORPION, GPIO_INTR_CFG_SU(gpio));
spin_unlock_irqrestore(&tlmm_lock, irq_flags);
}
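The fix above turns on BIT(): INTR_RAW_STATUS_EN and INTR_ENABLE are bit positions, not masks, so OR-ing them directly programmed the wrong register bits. A compact illustration with made-up positions:

    #include <stdio.h>

    #define BIT(n) (1UL << (n))

    /* Hypothetical positions standing in for the driver's constants */
    #define INTR_RAW_STATUS_EN 3
    #define INTR_ENABLE        4

    int main(void)
    {
            /* Wrong: OR of positions, 3 | 4 == 0x7 (bits 0-2) */
            printf("as positions: %#lx\n",
                   (unsigned long)(INTR_RAW_STATUS_EN | INTR_ENABLE));

            /* Right: OR of masks, 0x8 | 0x10 == 0x18 (bits 3 and 4) */
            printf("as masks:     %#lx\n",
                   BIT(INTR_RAW_STATUS_EN) | BIT(INTR_ENABLE));
            return 0;
    }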
u32 pending;
unsigned int offset, irqs_handled = 0;
- while ((pending = gpio_rcar_read(p, INTDT))) {
+ while ((pending = gpio_rcar_read(p, INTDT) &
+ gpio_rcar_read(p, INTMSK))) {
offset = __ffs(pending);
gpio_rcar_write(p, INTCLR, BIT(offset));
generic_handle_irq(irq_find_mapping(p->irq_domain, offset));
if (offset < TWL4030_GPIO_MAX)
ret = twl4030_set_gpio_direction(offset, 1);
else
- ret = -EINVAL;
+ ret = -EINVAL; /* LED outputs can't be set as input */
if (!ret)
priv->direction &= ~BIT(offset);
static int twl_direction_out(struct gpio_chip *chip, unsigned offset, int value)
{
struct gpio_twl4030_priv *priv = to_gpio_twl4030(chip);
- int ret = -EINVAL;
+ int ret = 0;
mutex_lock(&priv->mutex);
- if (offset < TWL4030_GPIO_MAX)
+ if (offset < TWL4030_GPIO_MAX) {
ret = twl4030_set_gpio_direction(offset, 0);
+ if (ret) {
+ mutex_unlock(&priv->mutex);
+ return ret;
+ }
+ }
+
+ /*
+ * LED gpios, i.e. offset >= TWL4030_GPIO_MAX, are always output
+ */
priv->direction |= BIT(offset);
mutex_unlock(&priv->mutex);
extern const struct drm_mode_config_funcs armada_drm_mode_config_funcs;
int armada_fbdev_init(struct drm_device *);
+void armada_fbdev_lastclose(struct drm_device *);
void armada_fbdev_fini(struct drm_device *);
int armada_overlay_plane_create(struct drm_device *, unsigned long);
DRM_UNLOCKED),
};
+static void armada_drm_lastclose(struct drm_device *dev)
+{
+ armada_fbdev_lastclose(dev);
+}
+
static const struct file_operations armada_drm_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.open = NULL,
.preclose = NULL,
.postclose = NULL,
- .lastclose = NULL,
+ .lastclose = armada_drm_lastclose,
.unload = armada_drm_unload,
.get_vblank_counter = drm_vblank_count,
.enable_vblank = armada_drm_enable_vblank,
drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], dfb->fb.depth);
drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height);
- DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08x\n",
- dfb->fb.width, dfb->fb.height,
- dfb->fb.bits_per_pixel, obj->phys_addr);
+ DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08llx\n",
+ dfb->fb.width, dfb->fb.height, dfb->fb.bits_per_pixel,
+ (unsigned long long)obj->phys_addr);
return 0;
return ret;
}
+void armada_fbdev_lastclose(struct drm_device *dev)
+{
+ struct armada_private *priv = dev->dev_private;
+
+ drm_modeset_lock_all(dev);
+ if (priv->fbdev)
+ drm_fb_helper_restore_fbdev_mode(priv->fbdev);
+ drm_modeset_unlock_all(dev);
+}
+
void armada_fbdev_fini(struct drm_device *dev)
{
struct armada_private *priv = dev->dev_private;
framebuffer_release(info);
}
+ drm_fb_helper_fini(fbh);
+
if (fbh->fb)
fbh->fb->funcs->destroy(fbh->fb);
- drm_fb_helper_fini(fbh);
-
priv->fbdev = NULL;
}
}
obj->dev_addr = obj->linear->start;
}
- DRM_DEBUG_DRIVER("obj %p phys %#x dev %#x\n",
- obj, obj->phys_addr, obj->dev_addr);
+ DRM_DEBUG_DRIVER("obj %p phys %#llx dev %#llx\n", obj,
+ (unsigned long long)obj->phys_addr,
+ (unsigned long long)obj->dev_addr);
return 0;
}
* refcount on the gem object itself.
*/
drm_gem_object_reference(obj);
- dma_buf_put(buf);
return obj;
}
}
}
dobj->obj.import_attach = attach;
+ get_dma_buf(buf);
/*
* Don't call dma_buf_map_attachment() here - it maps the
#define EDID_QUIRK_DETAILED_SYNC_PP (1 << 6)
/* Force reduced-blanking timings for detailed modes */
#define EDID_QUIRK_FORCE_REDUCED_BLANKING (1 << 7)
+/* Force 8bpc */
+#define EDID_QUIRK_FORCE_8BPC (1 << 8)
struct detailed_mode_closure {
struct drm_connector *connector;
/* Medion MD 30217 PG */
{ "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 },
+
+ /* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
+ { "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC },
};
/*
drm_add_display_info(edid, &connector->display_info);
+ if (quirks & EDID_QUIRK_FORCE_8BPC)
+ connector->display_info.bpc = 8;
+
return num_modes;
}
EXPORT_SYMBOL(drm_add_edid_modes);
if (dev->driver->unload)
dev->driver->unload(dev);
err_primary_node:
- drm_put_minor(dev->primary);
+ drm_unplug_minor(dev->primary);
err_render_node:
- drm_put_minor(dev->render);
+ drm_unplug_minor(dev->render);
err_control_node:
- drm_put_minor(dev->control);
+ drm_unplug_minor(dev->control);
err_agp:
if (dev->driver->bus->agp_destroy)
dev->driver->bus->agp_destroy(dev);
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_master_private *master_priv;
+ /*
+ * The dri breadcrumb update races against the drm master disappearing.
+ * Instead of trying to fix this (this is by far not the only ums issue)
+ * just don't do the update in kms mode.
+ */
+ if (drm_core_check_feature(dev, DRIVER_MODESET))
+ return;
+
if (dev->primary->master) {
master_priv = dev->primary->master->driver_priv;
if (master_priv->sarea_priv)
spin_lock_init(&dev_priv->uncore.lock);
spin_lock_init(&dev_priv->mm.object_stat_lock);
mutex_init(&dev_priv->dpio_lock);
- mutex_init(&dev_priv->rps.hw_lock);
mutex_init(&dev_priv->modeset_restore_lock);
- mutex_init(&dev_priv->pc8.lock);
- dev_priv->pc8.requirements_met = false;
- dev_priv->pc8.gpu_idle = false;
- dev_priv->pc8.irqs_disabled = false;
- dev_priv->pc8.enabled = false;
- dev_priv->pc8.disable_count = 2; /* requirements_met + gpu_idle */
- INIT_DELAYED_WORK(&dev_priv->pc8.enable_work, hsw_enable_pc8_work);
+ intel_pm_setup(dev);
intel_display_crc_init(dev);
}
intel_irq_init(dev);
- intel_pm_init(dev);
intel_uncore_sanitize(dev);
/* Try to make sure MCHBAR is enabled before poking at it */
void i915_driver_preclose(struct drm_device * dev, struct drm_file *file_priv)
{
+ mutex_lock(&dev->struct_mutex);
i915_gem_context_close(dev, file_priv);
i915_gem_release(dev, file_priv);
+ mutex_unlock(&dev->struct_mutex);
}
void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
intel_modeset_init_hw(dev);
drm_modeset_lock_all(dev);
+ drm_mode_config_reset(dev);
intel_modeset_setup_hw_state(dev, true);
drm_modeset_unlock_all(dev);
#define IS_MOBILE(dev) (INTEL_INFO(dev)->is_mobile)
#define IS_HSW_EARLY_SDV(dev) (IS_HASWELL(dev) && \
((dev)->pdev->device & 0xFF00) == 0x0C00)
-#define IS_ULT(dev) (IS_HASWELL(dev) && \
+#define IS_BDW_ULT(dev) (IS_BROADWELL(dev) && \
+ (((dev)->pdev->device & 0xf) == 0x2 || \
+ ((dev)->pdev->device & 0xf) == 0x6 || \
+ ((dev)->pdev->device & 0xf) == 0xe))
+#define IS_HSW_ULT(dev) (IS_HASWELL(dev) && \
((dev)->pdev->device & 0xFF00) == 0x0A00)
+#define IS_ULT(dev) (IS_HSW_ULT(dev) || IS_BDW_ULT(dev))
#define IS_HSW_GT3(dev) (IS_HASWELL(dev) && \
((dev)->pdev->device & 0x00F0) == 0x0020)
#define IS_PRELIMINARY_HW(intel_info) ((intel_info)->is_preliminary)
void i915_handle_error(struct drm_device *dev, bool wedged);
extern void intel_irq_init(struct drm_device *dev);
-extern void intel_pm_init(struct drm_device *dev);
extern void intel_hpd_init(struct drm_device *dev);
-extern void intel_pm_init(struct drm_device *dev);
extern void intel_uncore_sanitize(struct drm_device *dev);
extern void intel_uncore_early_sanitize(struct drm_device *dev);
{
struct drm_i915_file_private *file_priv = file->driver_priv;
- mutex_lock(&dev->struct_mutex);
idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL);
idr_destroy(&file_priv->context_idr);
- mutex_unlock(&dev->struct_mutex);
}
static struct i915_hw_context *
if (ret)
return ret;
- /* Clear this page out of any CPU caches for coherent swap-in/out. Note
+ /*
+ * Pin can switch back to the default context if we end up calling into
+ * evict_everything - as a last ditch gtt defrag effort that also
+ * switches to the default context. Hence we need to reload from here.
+ */
+ from = ring->last_context;
+
+ /*
+ * Clear this page out of any CPU caches for coherent swap-in/out. Note
* that thanks to write = false in this call and us not setting any gpu
* write domains when putting a context object onto the active list
* (when switching away from it), this won't block.
- * XXX: We need a real interface to do this instead of trickery. */
+ *
+ * XXX: We need a real interface to do this instead of trickery.
+ */
ret = i915_gem_object_set_to_gtt_domain(to->obj, false);
if (ret) {
i915_gem_object_unpin(to->obj);
} else
drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level);
+search_again:
/* First see if there is a large enough contiguous idle region... */
list_for_each_entry(vma, &vm->inactive_list, mm_list) {
if (mark_free(vma, &unwind_list))
list_del_init(&vma->exec_list);
}
- /* We expect the caller to unpin, evict all and try again, or give up.
- * So calling i915_gem_evict_vm() is unnecessary.
+ /* Can we unpin some objects such as idle hw contexts,
+ * or pending flips?
*/
- return -ENOSPC;
+ ret = nonblocking ? -ENOSPC : i915_gpu_idle(dev);
+ if (ret)
+ return ret;
+
+ /* Only idle the GPU and repeat the search once */
+ i915_gem_retire_requests(dev);
+ nonblocking = true;
+ goto search_again;
found:
/* drm_mm doesn't allow any other other operations while
kfree(ppgtt->gen8_pt_dma_addr[i]);
}
- __free_pages(ppgtt->gen8_pt_pages, ppgtt->num_pt_pages << PAGE_SHIFT);
- __free_pages(ppgtt->pd_pages, ppgtt->num_pd_pages << PAGE_SHIFT);
+ __free_pages(ppgtt->gen8_pt_pages, get_order(ppgtt->num_pt_pages << PAGE_SHIFT));
+ __free_pages(ppgtt->pd_pages, get_order(ppgtt->num_pd_pages << PAGE_SHIFT));
}
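This fix relies on __free_pages() taking an allocation order (log2 of the page count) rather than a byte count: num_pt_pages << PAGE_SHIFT is a size in bytes and must go through get_order() first. A userspace model of get_order():

    #include <stdio.h>

    #define PAGE_SHIFT 12   /* 4 KiB pages, as on x86 */

    /* Model of get_order(): smallest order such that 2^order pages
     * cover the requested size. */
    static int get_order_model(unsigned long size)
    {
            int order = 0;
            unsigned long n = (size - 1) >> PAGE_SHIFT;

            while (n) {
                    n >>= 1;
                    order++;
            }
            return order;
    }

    int main(void)
    {
            unsigned long size = 8ul << PAGE_SHIFT;   /* 8 pages = 32 KiB */

            /* Prints 3: 2^3 = 8 pages. Passing the 32768-byte size where
             * the order belongs would request absurdly more memory. */
            printf("order for %lu bytes: %d\n", size, get_order_model(size));
            return 0;
    }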
/**
bdw_gmch_ctl &= BDW_GMCH_GGMS_MASK;
if (bdw_gmch_ctl)
bdw_gmch_ctl = 1 << bdw_gmch_ctl;
+ if (bdw_gmch_ctl > 4) {
+ WARN_ON(!i915_preliminary_hw_support);
+ return 4<<20;
+ }
+
return bdw_gmch_ctl << 20;
}
if (IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5)
PIPE_CONF_CHECK_I(pipe_bpp);
- if (!IS_HASWELL(dev)) {
+ if (!HAS_DDI(dev)) {
PIPE_CONF_CHECK_CLOCK_FUZZY(adjusted_mode.crtc_clock);
PIPE_CONF_CHECK_CLOCK_FUZZY(port_clock);
}
}
intel_modeset_check_state(dev);
-
- drm_mode_config_reset(dev);
}
void intel_modeset_gem_init(struct drm_device *dev)
intel_setup_overlay(dev);
+ drm_modeset_lock_all(dev);
+ drm_mode_config_reset(dev);
intel_modeset_setup_hw_state(dev, false);
+ drm_modeset_unlock_all(dev);
}
void intel_modeset_cleanup(struct drm_device *dev)
uint32_t sprite_width, int pixel_size,
bool enabled, bool scaled);
void intel_init_pm(struct drm_device *dev);
+void intel_pm_setup(struct drm_device *dev);
bool intel_fbc_enabled(struct drm_device *dev);
void intel_update_fbc(struct drm_device *dev);
void intel_gpu_ips_init(struct drm_i915_private *dev_priv);
spin_lock_irqsave(&dev_priv->backlight.lock, flags);
- if (HAS_PCH_SPLIT(dev)) {
+ if (IS_BROADWELL(dev)) {
+ val = I915_READ(BLC_PWM_PCH_CTL2) & BACKLIGHT_DUTY_CYCLE_MASK;
+ } else if (HAS_PCH_SPLIT(dev)) {
val = I915_READ(BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
} else {
if (IS_VALLEYVIEW(dev))
return val;
}
+static void intel_bdw_panel_set_backlight(struct drm_device *dev, u32 level)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 val = I915_READ(BLC_PWM_PCH_CTL2) & ~BACKLIGHT_DUTY_CYCLE_MASK;
+ I915_WRITE(BLC_PWM_PCH_CTL2, val | level);
+}
+
static void intel_pch_panel_set_backlight(struct drm_device *dev, u32 level)
{
struct drm_i915_private *dev_priv = dev->dev_private;
DRM_DEBUG_DRIVER("set backlight PWM = %d\n", level);
level = intel_panel_compute_brightness(dev, pipe, level);
- if (HAS_PCH_SPLIT(dev))
+ if (IS_BROADWELL(dev))
+ return intel_bdw_panel_set_backlight(dev, level);
+ else if (HAS_PCH_SPLIT(dev))
return intel_pch_panel_set_backlight(dev, level);
if (is_backlight_combination_mode(dev)) {
POSTING_READ(reg);
I915_WRITE(reg, tmp | BLM_PWM_ENABLE);
- if (HAS_PCH_SPLIT(dev) &&
+ if (IS_BROADWELL(dev)) {
+ /*
+ * Broadwell requires PCH override to drive the PCH
+ * backlight pin. The above will configure the CPU
+ * backlight pin, which we don't plan to use.
+ */
+ tmp = I915_READ(BLC_PWM_PCH_CTL1);
+ tmp |= BLM_PCH_OVERRIDE_ENABLE | BLM_PCH_PWM_ENABLE;
+ I915_WRITE(BLC_PWM_PCH_CTL1, tmp);
+ } else if (HAS_PCH_SPLIT(dev) &&
!(dev_priv->quirks & QUIRK_NO_PCH_PWM_ENABLE)) {
tmp = I915_READ(BLC_PWM_PCH_CTL1);
tmp |= BLM_PCH_PWM_ENABLE;
{
struct drm_i915_private *dev_priv = dev->dev_private;
bool is_enabled, enable_requested;
+ unsigned long irqflags;
uint32_t tmp;
tmp = I915_READ(HSW_PWR_WELL_DRIVER);
HSW_PWR_WELL_STATE_ENABLED), 20))
DRM_ERROR("Timeout enabling power well\n");
}
+
+ if (IS_BROADWELL(dev)) {
+ spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
+ I915_WRITE(GEN8_DE_PIPE_IMR(PIPE_B),
+ dev_priv->de_irq_mask[PIPE_B]);
+ I915_WRITE(GEN8_DE_PIPE_IER(PIPE_B),
+ ~dev_priv->de_irq_mask[PIPE_B] |
+ GEN8_PIPE_VBLANK);
+ I915_WRITE(GEN8_DE_PIPE_IMR(PIPE_C),
+ dev_priv->de_irq_mask[PIPE_C]);
+ I915_WRITE(GEN8_DE_PIPE_IER(PIPE_C),
+ ~dev_priv->de_irq_mask[PIPE_C] |
+ GEN8_PIPE_VBLANK);
+ POSTING_READ(GEN8_DE_PIPE_IER(PIPE_C));
+ spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+ }
} else {
if (enable_requested) {
- unsigned long irqflags;
enum pipe p;
I915_WRITE(HSW_PWR_WELL_DRIVER, 0);
return val;
}
-void intel_pm_init(struct drm_device *dev)
+void intel_pm_setup(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ mutex_init(&dev_priv->rps.hw_lock);
+
+ mutex_init(&dev_priv->pc8.lock);
+ dev_priv->pc8.requirements_met = false;
+ dev_priv->pc8.gpu_idle = false;
+ dev_priv->pc8.irqs_disabled = false;
+ dev_priv->pc8.enabled = false;
+ dev_priv->pc8.disable_count = 2; /* requirements_met + gpu_idle */
+ INIT_DELAYED_WORK(&dev_priv->pc8.enable_work, hsw_enable_pc8_work);
INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work,
intel_gen6_powersave_work);
}
} else if (IS_GEN6(ring->dev)) {
mmio = RING_HWS_PGA_GEN6(ring->mmio_base);
} else {
+ /* XXX: gen8 returns to sanity */
mmio = RING_HWS_PGA(ring->mmio_base);
}
int intel_gpu_reset(struct drm_device *dev)
{
switch (INTEL_INFO(dev)->gen) {
+ case 8:
case 7:
case 6: return gen6_do_reset(dev);
case 5: return ironlake_do_reset(dev);
if (nouveau_runtime_pm == 0)
return -EINVAL;
+ /* are we optimus enabled? */
+ if (nouveau_runtime_pm == -1 && !nouveau_is_optimus() && !nouveau_is_v1_dsm()) {
+ DRM_DEBUG_DRIVER("failing to power off - not optimus\n");
+ return -EINVAL;
+ }
+
nv_debug_level(SILENT);
drm_kms_helper_poll_disable(drm_dev);
vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
} else if ((rdev->family == CHIP_TAHITI) ||
(rdev->family == CHIP_PITCAIRN))
fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P8_32x32_8x16);
- else if (rdev->family == CHIP_VERDE)
+ else if ((rdev->family == CHIP_VERDE) ||
+ (rdev->family == CHIP_OLAND) ||
+ (rdev->family == CHIP_HAINAN)) /* for completeness. HAINAN has no display hw */
fb_format |= SI_GRPH_PIPE_CONFIG(SI_ADDR_SURF_P4_8x16);
switch (radeon_crtc->crtc_id) {
radeon_ring_write(ring, 0); /* src/dst endian swap */
radeon_ring_write(ring, src_offset & 0xffffffff);
radeon_ring_write(ring, upper_32_bits(src_offset) & 0xffffffff);
- radeon_ring_write(ring, dst_offset & 0xfffffffc);
+ radeon_ring_write(ring, dst_offset & 0xffffffff);
radeon_ring_write(ring, upper_32_bits(dst_offset) & 0xffffffff);
src_offset += cur_size_in_bytes;
dst_offset += cur_size_in_bytes;
.hdmi_setmode = &evergreen_hdmi_setmode,
},
.copy = {
- .blit = NULL,
+ .blit = &cik_copy_cpdma,
.blit_ring_index = RADEON_RING_TYPE_GFX_INDEX,
.dma = &cik_copy_dma,
.dma_ring_index = R600_RING_TYPE_DMA_INDEX,
.hdmi_setmode = &evergreen_hdmi_setmode,
},
.copy = {
- .blit = NULL,
+ .blit = &cik_copy_cpdma,
.blit_ring_index = RADEON_RING_TYPE_GFX_INDEX,
.dma = &cik_copy_dma,
.dma_ring_index = R600_RING_TYPE_DMA_INDEX,
#endif
};
-
-static void
-radeon_pci_shutdown(struct pci_dev *pdev)
-{
- struct drm_device *dev = pci_get_drvdata(pdev);
-
- radeon_driver_unload_kms(dev);
-}
-
static struct drm_driver kms_driver = {
.driver_features =
DRIVER_USE_AGP |
.probe = radeon_pci_probe,
.remove = radeon_pci_remove,
.driver.pm = &radeon_pm_ops,
- .shutdown = radeon_pci_shutdown,
};
static int __init radeon_init(void)
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = dev_get_drvdata(dev);
- struct radeon_device *rdev = ddev->dev_private;
+ struct radeon_device *rdev = dev_get_drvdata(dev);
int hyst = to_sensor_dev_attr(attr)->index;
int temp;
struct attribute *attr, int index)
{
struct device *dev = container_of(kobj, struct device, kobj);
- struct drm_device *ddev = dev_get_drvdata(dev);
- struct radeon_device *rdev = ddev->dev_private;
+ struct radeon_device *rdev = dev_get_drvdata(dev);
/* Skip limit attributes if DPM is not enabled */
if (rdev->pm.pm_method != PM_METHOD_DPM &&
base = RREG32_MC(R_000100_MCCFG_FB_LOCATION);
base = G_000100_MC_FB_START(base) << 16;
rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);
+ /* Some boards seem to be configured for 128MB of sideport memory,
+ * but really only have 64MB. Just skip the sideport and use
+ * UMA memory.
+ */
+ if (rdev->mc.igp_sideport_enabled &&
+ (rdev->mc.real_vram_size == (384 * 1024 * 1024))) {
+ base += 128 * 1024 * 1024;
+ rdev->mc.real_vram_size -= 128 * 1024 * 1024;
+ rdev->mc.mc_vram_size = rdev->mc.real_vram_size;
+ }
/* Use K8 direct mapping for fast fb access. */
rdev->fastfb_working = false;
}
page_offset = ((address - vma->vm_start) >> PAGE_SHIFT) +
- drm_vma_node_start(&bo->vma_node) - vma->vm_pgoff;
- page_last = vma_pages(vma) +
- drm_vma_node_start(&bo->vma_node) - vma->vm_pgoff;
+ vma->vm_pgoff - drm_vma_node_start(&bo->vma_node);
+ page_last = vma_pages(vma) + vma->vm_pgoff -
+ drm_vma_node_start(&bo->vma_node);
if (unlikely(page_offset >= bo->num_pages)) {
retval = VM_FAULT_SIGBUS;
SVGA_FIFO_3D_HWVERSION));
break;
}
+ case DRM_VMW_PARAM_MAX_SURF_MEMORY:
+ param->value = dev_priv->memory_size;
+ break;
default:
DRM_ERROR("Illegal vmwgfx get param request: %d\n",
param->param);
case USB_DEVICE_ID_GENIUS_GX_IMPERATOR:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 83,
"Genius Gx Imperator Keyboard");
+ break;
case USB_DEVICE_ID_GENIUS_MANTICORE:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
"Genius Manticore Keyboard");
ret = -ENOMEM;
goto err_free_names;
}
+ sd->hid_sensor_hub_client_devs[
+ sd->hid_sensor_client_cnt].id = PLATFORM_DEVID_AUTO;
sd->hid_sensor_hub_client_devs[
sd->hid_sensor_client_cnt].name = name;
sd->hid_sensor_hub_client_devs[
* @last_update: time of last update (jiffies)
* @temperature: cached temperature measurement value
* @humidity: cached humidity measurement value
+ * @write_length: length for I2C measurement request
*/
struct hih6130 {
struct device *hwmon_dev;
unsigned long last_update;
int temperature;
int humidity;
+ size_t write_length;
};
/**
*/
if (time_after(jiffies, hih6130->last_update + HZ) || !hih6130->valid) {
- /* write to slave address, no data, to request a measurement */
- ret = i2c_master_send(client, tmp, 0);
+ /*
+ * Write to slave address to request a measurement.
+ * According to the datasheet it should carry no data, but for
+ * systems with I2C bus drivers that do not allow zero-length
+ * packets we write one dummy byte so that sensor measurements
+ * still work on them.
+ */
+ tmp[0] = 0;
+ ret = i2c_master_send(client, tmp, hih6130->write_length);
if (ret < 0)
goto out;
goto fail_remove_sysfs;
}
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_QUICK))
+ hih6130->write_length = 1;
+
return 0;
fail_remove_sysfs:
{
if (rpm <= 0)
return 255;
+ if (rpm > 1350000)
+ return 1;
return clamp_val((1350000 + rpm * div / 2) / (rpm * div), 1, 254);
}
"lm90", client);
if (err < 0) {
dev_err(dev, "cannot request IRQ %d\n", client->irq);
- goto exit_remove_files;
+ goto exit_unregister;
}
}
return 0;
+exit_unregister:
+ hwmon_device_unregister(data->hwmon_dev);
exit_remove_files:
lm90_remove_files(client, data);
exit_restore:
{
if (rpm <= 0)
return 255;
+ if (rpm > 1350000)
+ return 1;
return clamp_val((1350000 + rpm * div / 2) / (rpm * div), 1, 254);
}
*/
static inline u8 FAN_TO_REG(long rpm, int div)
{
- if (rpm == 0)
+ if (rpm <= 0 || rpm > 1310720)
return 0;
return clamp_val(1310720 / (rpm * div), 1, 255);
}
if (err)
return err;
val = clamp_val(val, 0, 255);
+ val = DIV_ROUND_CLOSEST(val, 0x11);
mutex_lock(&data->update_lock);
- data->pwm[nr] = val;
+ data->pwm[nr] = val * 0x11;
+ val |= w83l786ng_read_value(client, W83L786NG_REG_PWM[nr]) & 0xf0;
w83l786ng_write_value(client, W83L786NG_REG_PWM[nr], val);
mutex_unlock(&data->update_lock);
return count;
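The 0x11 factor maps the chip's 4-bit duty-cycle register onto the 0-255 sysfs range exactly, since 15 * 0x11 == 255; the patch rounds the user's value to the nearest register step and caches the value that will read back. A quick round-trip check:

    #include <stdio.h>

    #define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

    int main(void)
    {
            unsigned sysfs_val = 200;                           /* user writes 200 */
            unsigned reg = DIV_ROUND_CLOSEST(sysfs_val, 0x11);  /* -> 12 */
            unsigned readback = reg * 0x11;                     /* -> 204 */

            /* The cached pwm value matches what a later read returns */
            printf("stored %u, reads back as %u\n", reg, readback);
            return 0;
    }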
mutex_lock(&data->update_lock);
reg = w83l786ng_read_value(client, W83L786NG_REG_FAN_CFG);
data->pwm_enable[nr] = val;
- reg &= ~(0x02 << W83L786NG_PWM_ENABLE_SHIFT[nr]);
+ reg &= ~(0x03 << W83L786NG_PWM_ENABLE_SHIFT[nr]);
reg |= (val - 1) << W83L786NG_PWM_ENABLE_SHIFT[nr];
w83l786ng_write_value(client, W83L786NG_REG_FAN_CFG, reg);
mutex_unlock(&data->update_lock);
((pwmcfg >> W83L786NG_PWM_MODE_SHIFT[i]) & 1)
? 0 : 1;
data->pwm_enable[i] =
- ((pwmcfg >> W83L786NG_PWM_ENABLE_SHIFT[i]) & 2) + 1;
- data->pwm[i] = w83l786ng_read_value(client,
- W83L786NG_REG_PWM[i]);
+ ((pwmcfg >> W83L786NG_PWM_ENABLE_SHIFT[i]) & 3) + 1;
+ data->pwm[i] =
+ (w83l786ng_read_value(client, W83L786NG_REG_PWM[i])
+ & 0x0f) * 0x11;
}
dev_dbg(&i2c_imx->adapter.dev, "<%s>\n", __func__);
- clk_prepare_enable(i2c_imx->clk);
+ result = clk_prepare_enable(i2c_imx->clk);
+ if (result)
+ return result;
imx_i2c_write_reg(i2c_imx->ifdr, i2c_imx, IMX_I2C_IFDR);
/* Enable I2C controller */
imx_i2c_write_reg(i2c_imx->hwdata->i2sr_clr_opcode, i2c_imx, IMX_I2C_I2SR);
priv->adap.algo = &priv->algo;
priv->adap.algo_data = priv;
priv->adap.dev.parent = &parent->dev;
+ priv->adap.retries = parent->retries;
+ priv->adap.timeout = parent->timeout;
/* Sanity check on class */
if (i2c_mux_parent_classes(parent) & class)
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = 1,
.scan_index = 1,
- .scan_type = IIO_ST('u', 12, 16, 0),
+ .scan_type = {
+ .sign = 'u',
+ .realbits = 12,
+ .storagebits = 16,
+ .shift = 0,
+ .endianness = IIO_BE,
+ },
},
.channel[1] = {
.type = IIO_VOLTAGE,
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = 0,
.scan_index = 0,
- .scan_type = IIO_ST('u', 12, 16, 0),
+ .scan_type = {
+ .sign = 'u',
+ .realbits = 12,
+ .storagebits = 16,
+ .shift = 0,
+ .endianness = IIO_BE,
+ },
},
.channel[2] = IIO_CHAN_SOFT_TIMESTAMP(2),
.int_vref_mv = 2500,
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = ADIS16448_BARO_OUT,
.scan_index = ADIS16400_SCAN_BARO,
- .scan_type = IIO_ST('s', 16, 16, 0),
+ .scan_type = {
+ .sign = 's',
+ .realbits = 16,
+ .storagebits = 16,
+ .endianness = IIO_BE,
+ },
},
ADIS16400_TEMP_CHAN(ADIS16448_TEMP_OUT, 12),
IIO_CHAN_SOFT_TIMESTAMP(11)
return -EINVAL;
}
- return IIO_VAL_INT_PLUS_MICRO;
+ return IIO_VAL_INT;
}
static int cm36651_write_int_time(struct cm36651_data *cm36651,
/* ORIENT ADXL346 only */
#define ADXL346_2D_VALID (1 << 6)
-#define ADXL346_2D_ORIENT(x) (((x) & 0x3) >> 4)
+#define ADXL346_2D_ORIENT(x) (((x) & 0x30) >> 4)
#define ADXL346_3D_VALID (1 << 3)
#define ADXL346_3D_ORIENT(x) ((x) & 0x7)
#define ADXL346_2D_PORTRAIT_POS 0 /* +X */
static DEVICE_ATTR_RO(proto);
static DEVICE_ATTR_RO(id);
static DEVICE_ATTR_RO(extra);
-static DEVICE_ATTR_RO(modalias);
-static DEVICE_ATTR_WO(drvctl);
-static DEVICE_ATTR(description, S_IRUGO, serio_show_description, NULL);
-static DEVICE_ATTR(bind_mode, S_IWUSR | S_IRUGO, serio_show_bind_mode, serio_set_bind_mode);
static struct attribute *serio_device_id_attrs[] = {
&dev_attr_type.attr,
&dev_attr_proto.attr,
&dev_attr_id.attr,
&dev_attr_extra.attr,
+ NULL
+};
+
+static struct attribute_group serio_id_attr_group = {
+ .name = "id",
+ .attrs = serio_device_id_attrs,
+};
+
+static DEVICE_ATTR_RO(modalias);
+static DEVICE_ATTR_WO(drvctl);
+static DEVICE_ATTR(description, S_IRUGO, serio_show_description, NULL);
+static DEVICE_ATTR(bind_mode, S_IWUSR | S_IRUGO, serio_show_bind_mode, serio_set_bind_mode);
+
+static struct attribute *serio_device_attrs[] = {
&dev_attr_modalias.attr,
&dev_attr_description.attr,
&dev_attr_drvctl.attr,
NULL
};
-static struct attribute_group serio_id_attr_group = {
- .name = "id",
- .attrs = serio_device_id_attrs,
+static struct attribute_group serio_device_attr_group = {
+ .attrs = serio_device_attrs,
};
static const struct attribute_group *serio_device_attr_groups[] = {
&serio_id_attr_group,
+ &serio_device_attr_group,
NULL
};
struct arm_smmu_cfg root_cfg;
phys_addr_t output_mask;
- spinlock_t lock;
+ struct mutex lock;
};
static DEFINE_SPINLOCK(arm_smmu_devices_lock);
goto out_free_domain;
smmu_domain->root_cfg.pgd = pgd;
- spin_lock_init(&smmu_domain->lock);
+ mutex_init(&smmu_domain->lock);
domain->priv = smmu_domain;
return 0;
* Sanity check the domain. We don't currently support domains
* that cross between different SMMU chains.
*/
- spin_lock(&smmu_domain->lock);
+ mutex_lock(&smmu_domain->lock);
if (!smmu_domain->leaf_smmu) {
/* Now that we have a master, we can finalise the domain */
ret = arm_smmu_init_domain_context(domain, dev);
dev_name(device_smmu->dev));
goto err_unlock;
}
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
/* Looks ok, so add the device to the domain */
master = find_smmu_master(smmu_domain->leaf_smmu, dev->of_node);
return arm_smmu_domain_add_master(smmu_domain, master);
err_unlock:
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
return ret;
}
if (paddr & ~output_mask)
return -ERANGE;
- spin_lock(&smmu_domain->lock);
+ mutex_lock(&smmu_domain->lock);
pgd += pgd_index(iova);
end = iova + size;
do {
} while (pgd++, iova != end);
out_unlock:
- spin_unlock(&smmu_domain->lock);
+ mutex_unlock(&smmu_domain->lock);
/* Ensure new page tables are visible to the hardware walker */
if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
phys_addr_t paddr, size_t size, int flags)
{
struct arm_smmu_domain *smmu_domain = domain->priv;
- struct arm_smmu_device *smmu = smmu_domain->leaf_smmu;
- if (!smmu_domain || !smmu)
+ if (!smmu_domain)
return -ENODEV;
/* Check for silent address truncation up the SMMU chain. */
static phys_addr_t arm_smmu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *pte;
+ pgd_t *pgdp, pgd;
+ pud_t pud;
+ pmd_t pmd;
+ pte_t pte;
struct arm_smmu_domain *smmu_domain = domain->priv;
struct arm_smmu_cfg *root_cfg = &smmu_domain->root_cfg;
- struct arm_smmu_device *smmu = root_cfg->smmu;
- spin_lock(&smmu_domain->lock);
- pgd = root_cfg->pgd;
- if (!pgd)
- goto err_unlock;
+ pgdp = root_cfg->pgd;
+ if (!pgdp)
+ return 0;
- pgd += pgd_index(iova);
- if (pgd_none_or_clear_bad(pgd))
- goto err_unlock;
+ pgd = *(pgdp + pgd_index(iova));
+ if (pgd_none(pgd))
+ return 0;
- pud = pud_offset(pgd, iova);
- if (pud_none_or_clear_bad(pud))
- goto err_unlock;
+ pud = *pud_offset(&pgd, iova);
+ if (pud_none(pud))
+ return 0;
- pmd = pmd_offset(pud, iova);
- if (pmd_none_or_clear_bad(pmd))
- goto err_unlock;
+ pmd = *pmd_offset(&pud, iova);
+ if (pmd_none(pmd))
+ return 0;
- pte = pmd_page_vaddr(*pmd) + pte_index(iova);
+ pte = *(pmd_page_vaddr(pmd) + pte_index(iova));
if (pte_none(pte))
- goto err_unlock;
-
- spin_unlock(&smmu_domain->lock);
- return __pfn_to_phys(pte_pfn(*pte)) | (iova & ~PAGE_MASK);
+ return 0;
-err_unlock:
- spin_unlock(&smmu_domain->lock);
- dev_warn(smmu->dev,
- "invalid (corrupt?) page tables detected for iova 0x%llx\n",
- (unsigned long long)iova);
- return -EINVAL;
+ return __pfn_to_phys(pte_pfn(pte)) | (iova & ~PAGE_MASK);
}
static int arm_smmu_domain_has_cap(struct iommu_domain *domain,
dev_err(dev,
"found only %d context interrupt(s) but %d required\n",
smmu->num_context_irqs, smmu->num_context_banks);
+ err = -ENODEV;
goto out_put_parent;
}
{
__u64 mem;
+ dm_bufio_allocated_kmem_cache = 0;
+ dm_bufio_allocated_get_free_pages = 0;
+ dm_bufio_allocated_vmalloc = 0;
+ dm_bufio_current_allocated = 0;
+
memset(&dm_bufio_caches, 0, sizeof dm_bufio_caches);
memset(&dm_bufio_cache_names, 0, sizeof dm_bufio_cache_names);
int r = 0;
bool updated = updated_this_tick(mq, e);
- requeue_and_update_tick(mq, e);
-
if ((!discarded_oblock && updated) ||
- !should_promote(mq, e, discarded_oblock, data_dir))
+ !should_promote(mq, e, discarded_oblock, data_dir)) {
+ requeue_and_update_tick(mq, e);
result->op = POLICY_MISS;
- else if (!can_migrate)
+
+ } else if (!can_migrate)
r = -EWOULDBLOCK;
- else
+
+ else {
+ requeue_and_update_tick(mq, e);
r = pre_cache_to_cache(mq, e, result);
+ }
return r;
}
{
int r;
- r = dm_cache_resize(cache->cmd, cache->cache_size);
+ r = dm_cache_resize(cache->cmd, new_size);
if (r) {
DMERR("could not resize cache metadata");
return r;
struct delay_c {
struct timer_list delay_timer;
struct mutex timer_lock;
+ struct workqueue_struct *kdelayd_wq;
struct work_struct flush_expired_bios;
struct list_head delayed_bios;
atomic_t may_delay;
static DEFINE_MUTEX(delayed_bios_lock);
-static struct workqueue_struct *kdelayd_wq;
static struct kmem_cache *delayed_cache;
static void handle_delayed_timer(unsigned long data)
{
struct delay_c *dc = (struct delay_c *)data;
- queue_work(kdelayd_wq, &dc->flush_expired_bios);
+ queue_work(dc->kdelayd_wq, &dc->flush_expired_bios);
}
static void queue_timeout(struct delay_c *dc, unsigned long expires)
goto bad_dev_write;
}
+ dc->kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
+ if (!dc->kdelayd_wq) {
+ DMERR("Couldn't start kdelayd");
+ goto bad_queue;
+ }
+
setup_timer(&dc->delay_timer, handle_delayed_timer, (unsigned long)dc);
INIT_WORK(&dc->flush_expired_bios, flush_expired_bios);
ti->private = dc;
return 0;
+bad_queue:
+ mempool_destroy(dc->delayed_pool);
bad_dev_write:
if (dc->dev_write)
dm_put_device(ti, dc->dev_write);
{
struct delay_c *dc = ti->private;
- flush_workqueue(kdelayd_wq);
+ destroy_workqueue(dc->kdelayd_wq);
dm_put_device(ti, dc->dev_read);
{
int r = -ENOMEM;
- kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
- if (!kdelayd_wq) {
- DMERR("Couldn't start kdelayd");
- goto bad_queue;
- }
-
delayed_cache = KMEM_CACHE(dm_delay_info, 0);
if (!delayed_cache) {
DMERR("Couldn't create delayed bio cache.");
bad_register:
kmem_cache_destroy(delayed_cache);
bad_memcache:
- destroy_workqueue(kdelayd_wq);
-bad_queue:
return r;
}
{
dm_unregister_target(&delay_target);
kmem_cache_destroy(delayed_cache);
- destroy_workqueue(kdelayd_wq);
}
/* Module hooks */
atomic_t pending_exceptions_count;
+ /* Protected by "lock" */
+ sector_t exception_start_sequence;
+
+ /* Protected by kcopyd single-threaded callback */
+ sector_t exception_complete_sequence;
+
+ /*
+ * A list of pending exceptions that completed out of order.
+ * Protected by kcopyd single-threaded callback.
+ */
+ struct list_head out_of_order_list;
+
mempool_t *pending_pool;
struct dm_exception_table pending;
*/
int started;
+ /* There was a copying error. */
+ int copy_error;
+
+ /* A sequence number, used for in-order completion. */
+ sector_t exception_sequence;
+
+ struct list_head out_of_order_entry;
+
/*
* For writing a complete chunk, bypassing the copy.
*/
s->valid = 1;
s->active = 0;
atomic_set(&s->pending_exceptions_count, 0);
+ s->exception_start_sequence = 0;
+ s->exception_complete_sequence = 0;
+ INIT_LIST_HEAD(&s->out_of_order_list);
init_rwsem(&s->lock);
INIT_LIST_HEAD(&s->list);
spin_lock_init(&s->pe_lock);
pending_complete(pe, success);
}
+static void complete_exception(struct dm_snap_pending_exception *pe)
+{
+ struct dm_snapshot *s = pe->snap;
+
+ if (unlikely(pe->copy_error))
+ pending_complete(pe, 0);
+
+ else
+ /* Update the metadata if we are persistent */
+ s->store->type->commit_exception(s->store, &pe->e,
+ commit_callback, pe);
+}
+
/*
* Called when the copy I/O has finished. kcopyd actually runs
* this code so don't block.
struct dm_snap_pending_exception *pe = context;
struct dm_snapshot *s = pe->snap;
- if (read_err || write_err)
- pending_complete(pe, 0);
+ pe->copy_error = read_err || write_err;
- else
- /* Update the metadata if we are persistent */
- s->store->type->commit_exception(s->store, &pe->e,
- commit_callback, pe);
+ if (pe->exception_sequence == s->exception_complete_sequence) {
+ s->exception_complete_sequence++;
+ complete_exception(pe);
+
+ while (!list_empty(&s->out_of_order_list)) {
+ pe = list_entry(s->out_of_order_list.next,
+ struct dm_snap_pending_exception, out_of_order_entry);
+ if (pe->exception_sequence != s->exception_complete_sequence)
+ break;
+ s->exception_complete_sequence++;
+ list_del(&pe->out_of_order_entry);
+ complete_exception(pe);
+ }
+ } else {
+ struct list_head *lh;
+ struct dm_snap_pending_exception *pe2;
+
+ list_for_each_prev(lh, &s->out_of_order_list) {
+ pe2 = list_entry(lh, struct dm_snap_pending_exception, out_of_order_entry);
+ if (pe2->exception_sequence < pe->exception_sequence)
+ break;
+ }
+ list_add(&pe->out_of_order_entry, lh);
+ }
}
/*
return NULL;
}
+ pe->exception_sequence = s->exception_start_sequence++;
+
dm_insert_exception(&s->pending, &pe->e);
return pe;
static struct target_type snapshot_target = {
.name = "snapshot",
- .version = {1, 11, 1},
+ .version = {1, 12, 0},
.module = THIS_MODULE,
.ctr = snapshot_ctr,
.dtr = snapshot_dtr,
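
The out-of-order completion scheme above is worth seeing in isolation: each pending exception is stamped with a sequence number at submission, and a completion that arrives early is parked on a sorted list until its predecessors have finished. Below is a minimal userspace sketch of that technique (not the kernel code; all names are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    struct completion {
            unsigned seq;
            struct completion *next;        /* link in the sorted parked list */
    };

    static unsigned next_seq;               /* sequence due to complete next */
    static struct completion *parked;       /* completions that arrived early */

    static void finish(struct completion *c)
    {
            printf("completing %u\n", c->seq);
            free(c);
    }

    /* May be called in any order; finish() still runs strictly in seq order. */
    static void on_complete(struct completion *c)
    {
            struct completion **p;

            if (c->seq != next_seq) {
                    /* Early: park it, keeping the list sorted by seq. */
                    for (p = &parked; *p && (*p)->seq < c->seq; p = &(*p)->next)
                            ;
                    c->next = *p;
                    *p = c;
                    return;
            }

            next_seq++;
            finish(c);

            /* Drain any parked completions that are now due. */
            while (parked && parked->seq == next_seq) {
                    c = parked;
                    parked = c->next;
                    next_seq++;
                    finish(c);
            }
    }

    int main(void)
    {
            unsigned i, order[] = { 2, 0, 3, 1 };

            for (i = 0; i < 4; i++) {
                    struct completion *c = malloc(sizeof(*c));

                    c->seq = order[i];
                    on_complete(c);         /* prints 0, 1, 2, 3 */
            }
            return 0;
    }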
int __init dm_statistics_init(void)
{
+ shared_memory_amount = 0;
dm_stat_need_rcu_barrier = 0;
return 0;
}
num_targets = dm_round_up(num_targets, KEYS_PER_NODE);
+ if (!num_targets) {
+ kfree(t);
+ return -ENOMEM;
+ }
+
if (alloc_targets(t, num_targets)) {
kfree(t);
return -ENOMEM;
up_write(&pmd->root_lock);
}
+void dm_pool_metadata_read_write(struct dm_pool_metadata *pmd)
+{
+ down_write(&pmd->root_lock);
+ pmd->read_only = false;
+ dm_bm_set_read_write(pmd->bm);
+ up_write(&pmd->root_lock);
+}
+
int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
dm_block_t threshold,
dm_sm_threshold_fn fn,
* that nothing is changing.
*/
void dm_pool_metadata_read_only(struct dm_pool_metadata *pmd);
+void dm_pool_metadata_read_write(struct dm_pool_metadata *pmd);
int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
dm_block_t threshold,
*/
r = dm_thin_insert_block(tc->td, m->virt_block, m->data_block);
if (r) {
- DMERR_LIMIT("dm_thin_insert_block() failed");
+ DMERR_LIMIT("%s: dm_thin_insert_block() failed: error = %d",
+ dm_device_name(pool->pool_md), r);
+ set_pool_mode(pool, PM_READ_ONLY);
cell_error(pool, m->cell);
goto out;
}
}
}
-static int commit(struct pool *pool)
-{
- int r;
-
- r = dm_pool_commit_metadata(pool->pmd);
- if (r)
- DMERR_LIMIT("%s: commit failed: error = %d",
- dm_device_name(pool->pool_md), r);
-
- return r;
-}
-
/*
* A non-zero return indicates read_only or fail_io mode.
* Many callers don't care about the return value.
*/
-static int commit_or_fallback(struct pool *pool)
+static int commit(struct pool *pool)
{
int r;
if (get_pool_mode(pool) != PM_WRITE)
return -EINVAL;
- r = commit(pool);
- if (r)
+ r = dm_pool_commit_metadata(pool->pmd);
+ if (r) {
+ DMERR_LIMIT("%s: dm_pool_commit_metadata failed: error = %d",
+ dm_device_name(pool->pool_md), r);
set_pool_mode(pool, PM_READ_ONLY);
+ }
return r;
}
* Try to commit to see if that will free up some
* more space.
*/
- (void) commit_or_fallback(pool);
+ r = commit(pool);
+ if (r)
+ return r;
r = dm_pool_get_free_block_count(pool->pmd, &free_blocks);
if (r)
* table reload).
*/
if (!free_blocks) {
- DMWARN("%s: no free space available.",
+ DMWARN("%s: no free data space available.",
dm_device_name(pool->pool_md));
spin_lock_irqsave(&pool->lock, flags);
pool->no_free_space = 1;
}
r = dm_pool_alloc_data_block(pool->pmd, result);
- if (r)
+ if (r) {
+ if (r == -ENOSPC &&
+ !dm_pool_get_free_metadata_block_count(pool->pmd, &free_blocks) &&
+ !free_blocks) {
+ DMWARN("%s: no free metadata space available.",
+ dm_device_name(pool->pool_md));
+ set_pool_mode(pool, PM_READ_ONLY);
+ }
return r;
+ }
return 0;
}
if (bio_list_empty(&bios) && !need_commit_due_to_time(pool))
return;
- if (commit_or_fallback(pool)) {
+ if (commit(pool)) {
while ((bio = bio_list_pop(&bios)))
bio_io_error(bio);
return;
case PM_FAIL:
DMERR("%s: switching pool to failure mode",
dm_device_name(pool->pool_md));
+ dm_pool_metadata_read_only(pool->pmd);
pool->process_bio = process_bio_fail;
pool->process_discard = process_bio_fail;
pool->process_prepared_mapping = process_prepared_mapping_fail;
break;
case PM_WRITE:
+ dm_pool_metadata_read_write(pool->pmd);
pool->process_bio = process_bio;
pool->process_discard = process_discard;
pool->process_prepared_mapping = process_prepared_mapping;
struct pool_c *pt = ti->private;
/*
- * We want to make sure that degraded pools are never upgraded.
+ * We want to make sure that a pool in PM_FAIL mode is never upgraded.
*/
enum pool_mode old_mode = pool->pf.mode;
enum pool_mode new_mode = pt->adjusted_pf.mode;
- if (old_mode > new_mode)
+ /*
+ * If we were in PM_FAIL mode, rollback of metadata failed. We're
+ * not going to recover without a thin_repair. So we never let the
+ * pool move out of the old mode. On the other hand, PM_READ_ONLY
+ * may have been due to a lack of metadata or data space, and may
+ * now work (e.g. if the underlying devices have been resized).
+ */
+ if (old_mode == PM_FAIL)
new_mode = old_mode;
pool->ti = ti;
return r;
if (need_commit1 || need_commit2)
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
return 0;
}
cancel_delayed_work(&pool->waker);
flush_workqueue(pool->wq);
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
}
static int check_arg_count(unsigned argc, unsigned args_required)
if (r)
return r;
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
r = dm_pool_reserve_metadata_snap(pool->pmd);
if (r)
DMWARN("Unrecognised thin pool target message received: %s", argv[0]);
if (!r)
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
return r;
}
/* Commit to ensure statistics aren't out-of-date */
if (!(status_flags & DM_STATUS_NOFLUSH_FLAG) && !dm_suspended(ti))
- (void) commit_or_fallback(pool);
+ (void) commit(pool);
r = dm_pool_get_metadata_transaction_id(pool->pmd, &transaction_id);
if (r) {
* The shadow op will often be a noop. Only insert if it really
* copied data.
*/
- if (dm_block_location(*block) != b)
+ if (dm_block_location(*block) != b) {
+ /*
+ * dm_tm_shadow_block will have already decremented the old
+ * block, but it is still referenced by the btree. We
+ * increment to stop the insert decrementing it below zero
+ * when overwriting the old value.
+ */
+ dm_tm_inc(info->btree_info.tm, b);
r = insert_ablock(info, index, *block, root);
+ }
return r;
}
}
EXPORT_SYMBOL_GPL(dm_bm_set_read_only);
+void dm_bm_set_read_write(struct dm_block_manager *bm)
+{
+ bm->read_only = false;
+}
+EXPORT_SYMBOL_GPL(dm_bm_set_read_write);
+
u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor)
{
return crc32c(~(u32) 0, data, len) ^ init_xor;
int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
struct dm_block *superblock);
- /*
- * Request data be prefetched into the cache.
- */
+/*
+ * Request that data be prefetched into the cache.
+ */
void dm_bm_prefetch(struct dm_block_manager *bm, dm_block_t b);
/*
* be returned if you do.
*/
void dm_bm_set_read_only(struct dm_block_manager *bm);
+void dm_bm_set_read_write(struct dm_block_manager *bm);
u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor);
}
static int sm_ll_mutate(struct ll_disk *ll, dm_block_t b,
- uint32_t (*mutator)(void *context, uint32_t old),
+ int (*mutator)(void *context, uint32_t old, uint32_t *new),
void *context, enum allocation_event *ev)
{
int r;
if (old > 2) {
r = sm_ll_lookup_big_ref_count(ll, b, &old);
- if (r < 0)
+ if (r < 0) {
+ dm_tm_unlock(ll->tm, nb);
return r;
+ }
}
- ref_count = mutator(context, old);
+ r = mutator(context, old, &ref_count);
+ if (r) {
+ dm_tm_unlock(ll->tm, nb);
+ return r;
+ }
if (ref_count <= 2) {
sm_set_bitmap(bm_le, bit, ref_count);
return ll->save_ie(ll, index, &ie_disk);
}
-static uint32_t set_ref_count(void *context, uint32_t old)
+static int set_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return *((uint32_t *) context);
+ *new = *((uint32_t *) context);
+ return 0;
}
int sm_ll_insert(struct ll_disk *ll, dm_block_t b,
return sm_ll_mutate(ll, b, set_ref_count, &ref_count, ev);
}
-static uint32_t inc_ref_count(void *context, uint32_t old)
+static int inc_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return old + 1;
+ *new = old + 1;
+ return 0;
}
int sm_ll_inc(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
return sm_ll_mutate(ll, b, inc_ref_count, NULL, ev);
}
-static uint32_t dec_ref_count(void *context, uint32_t old)
+static int dec_ref_count(void *context, uint32_t old, uint32_t *new)
{
- return old - 1;
+ if (!old) {
+ DMERR_LIMIT("unable to decrement a reference count below 0");
+ return -EINVAL;
+ }
+
+ *new = old - 1;
+ return 0;
}
int sm_ll_dec(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
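
The signature change above follows a common refactoring: a callback that returns the new value directly has no way to report failure, so it is changed to return an error code and hand the value back through an out-parameter. A stand-alone sketch of the pattern (illustrative names, not the kernel API):

    #include <stdio.h>
    #include <stdint.h>
    #include <errno.h>

    typedef int (*mutator_fn)(uint32_t old, uint32_t *new_val);

    static int dec_ref(uint32_t old, uint32_t *new_val)
    {
            if (!old)
                    return -EINVAL; /* underflow: report it instead of wrapping */
            *new_val = old - 1;
            return 0;
    }

    static int mutate(uint32_t *count, mutator_fn fn)
    {
            uint32_t nv;
            int r = fn(*count, &nv);

            if (r)
                    return r;       /* *count is left untouched on error */
            *count = nv;
            return 0;
    }

    int main(void)
    {
            uint32_t c = 1;

            printf("%d\n", mutate(&c, dec_ref));    /* 0, c drops to 0 */
            printf("%d\n", mutate(&c, dec_ref));    /* -EINVAL, c stays 0 */
            return 0;
    }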
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
int r = sm_metadata_new_block_(sm, b);
- if (r)
+ if (r) {
DMERR("unable to allocate new metadata block");
+ return r;
+ }
r = sm_metadata_get_nr_free(sm, &count);
- if (r)
+ if (r) {
DMERR("couldn't get free block count");
+ return r;
+ }
check_threshold(&smm->threshold, count);
u32 modem_state; /* from SMSHOSTLIB_DVB_MODEM_STATE_ET */
s32 SNR; /* dB */
u32 ber; /* Post Viterbi ber [1E-5] */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
u32 ber_bit_count; /* Total number of SYNC bits. */
u32 ts_per; /* Transport stream PER,
0xFFFFFFFF indicates N/A */
u32 modem_state; /* from SMSHOSTLIB_DVB_MODEM_STATE_ET */
s32 SNR; /* dB */
u32 ber; /* Post Viterbi ber [1E-5] */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
u32 ber_bit_count; /* Total number of SYNC bits. */
u32 ts_per; /* Transport stream PER,
0xFFFFFFFF indicates N/A */
u32 is_demod_locked; /* 0 - not locked, 1 - locked */
u32 ber_bit_count; /* Total number of SYNC bits. */
- u32 ber_error_count; /* Number of erronous SYNC bits. */
+ u32 ber_error_count; /* Number of erroneous SYNC bits. */
s32 MRC_SNR; /* dB */
s32 mrc_in_band_pwr; /* In band power in dBM */
dprintk_tscheck("TEI detected. "
"PID=0x%x data1=0x%x\n",
pid, buf[1]);
- /* data in this packet cant be trusted - drop it unless
+ /* data in this packet can't be trusted - drop it unless
* module option dvb_demux_feed_err_pkts is set */
if (!dvb_demux_feed_err_pkts)
return;
return -EINVAL;
}
- if (feed->is_filtering)
+ if (feed->is_filtering) {
+ /* release dvbdmx->mutex since it is
+ acquired by stop_filtering() itself */
+ mutex_unlock(&dvbdmx->mutex);
feed->stop_filtering(feed);
+ mutex_lock(&dvbdmx->mutex);
+ }
spin_lock_irq(&dvbdmx->lock);
f = dvbdmxfeed->filter;
static int af9033_wr_reg_val_tab(struct af9033_state *state,
const struct reg_val *tab, int tab_len)
{
+#define MAX_TAB_LEN 212
int ret, i, j;
- u8 buf[MAX_XFER_SIZE];
+ u8 buf[1 + MAX_TAB_LEN];
+
+ dev_dbg(&state->i2c->dev, "%s: tab_len=%d\n", __func__, tab_len);
if (tab_len > sizeof(buf)) {
- dev_warn(&state->i2c->dev,
- "%s: i2c wr len=%d is too big!\n",
- KBUILD_MODNAME, tab_len);
+ dev_warn(&state->i2c->dev, "%s: tab len %d is too big\n",
+ KBUILD_MODNAME, tab_len);
return -EINVAL;
}
- dev_dbg(&state->i2c->dev, "%s: tab_len=%d\n", __func__, tab_len);
-
for (i = 0, j = 0; i < tab_len; i++) {
buf[j] = tab[i].val;
num = if_freq / 1000; /* Hz => kHz */
num *= 0x4000;
- if_ctl = cxd2820r_div_u64_round_closest(num, 41000);
+ if_ctl = 0x4000 - cxd2820r_div_u64_round_closest(num, 41000);
buf[0] = (if_ctl >> 8) & 0x3f;
buf[1] = (if_ctl >> 0) & 0xff;
dib8000_set_diversity_in(state->fe[0], state->diversity_onoff);
locks = (dib8000_read_word(state, 180) >> 6) & 0x3f; /* P_coff_winlen ? */
- /* coff should lock over P_coff_winlen ofdm symbols : give 3 times this lenght to lock */
+ /* coff should lock over P_coff_winlen ofdm symbols : give 3 times this length to lock */
*timeout = dib8000_get_timeout(state, 2 * locks, SYMBOL_DEPENDENT_ON);
*tune_state = CT_DEMOD_STEP_5;
break;
case CT_DEMOD_STEP_9: /* 39 */
if ((state->revision == 0x8090) || ((dib8000_read_word(state, 1291) >> 9) & 0x1)) { /* fe capable of deinterleaving : esram */
- /* defines timeout for mpeg lock depending on interleaver lenght of longest layer */
+ /* defines timeout for mpeg lock depending on interleaver length of longest layer */
for (i = 0; i < 3; i++) {
if (c->layer[i].interleaving >= deeper_interleaver) {
dprintk("layer%i: time interleaver = %d ", i, c->layer[i].interleaving);
goto error;
if (state->m_enable_parallel == true) {
- /* paralel -> enable MD1 to MD7 */
+ /* parallel -> enable MD1 to MD7 */
status = write16(state, SIO_PDR_MD1_CFG__A,
sio_pdr_mdx_cfg);
if (status < 0)
dprintk(1, "\n");
- /* Gracefull shutdown (byte boundaries) */
+ /* Graceful shutdown (byte boundaries) */
status = read16(state, FEC_OC_SNC_MODE__A, &fec_oc_snc_mode);
if (status < 0)
goto error;
fec_oc_dto_burst_len = 204;
}
- /* Check serial or parrallel output */
+ /* Check serial or parallel output */
fec_oc_reg_ipr_mode &= (~(FEC_OC_IPR_MODE_SERIAL__M));
if (state->m_enable_parallel == false) {
/* MPEG data output is serial -> set ipr_mode[0] */
goto error;
if (count == 1) {
- /* Try sampling on a diffrent edge */
+ /* Try sampling on a different edge */
u16 clk_neg = 0;
status = read16(state, IQM_AF_CLKNEG__A, &clk_neg);
if (status < 0)
goto error;
- /* Retreive results parameters from SC */
+ /* Retrieve results parameters from SC */
switch (cmd) {
/* All commands yielding 5 results */
/* All commands yielding 4 results */
break;
}
#if 0
- /* No hierachical channels support in BDA */
+ /* No hierarchical channels support in BDA */
/* Priority (only for hierarchical channels) */
switch (channel->priority) {
case DRX_PRIORITY_LOW:
/*============================================================================*/
/**
-* \brief Retreive lock status .
+* \brief Retrieve lock status.
* \param demod Pointer to demodulator instance.
* \param lockStat Pointer to lock status structure.
* \return DRXStatus_t.
goto error;
/* Stamp driver version number in SCU data RAM in BCD code
- Done to enable field application engineers to retreive drxdriver version
+ Done to enable field application engineers to retrieve drxdriver version
via I2C from SCU RAM.
Not using SCU command interface for SCU register access since no
microcode may be present.
fe->ops.tuner_ops.get_if_frequency(fe, &IF);
start(state, 0, IF);
- /* After set_frontend, stats aren't avaliable */
+ /* After set_frontend, stats aren't available */
p->strength.stat[0].scale = FE_SCALE_RELATIVE;
p->cnr.stat[0].scale = FE_SCALE_NOT_AVAILABLE;
p->block_error.stat[0].scale = FE_SCALE_NOT_AVAILABLE;
sizeof(priv->tuner_i2c_adapter.name));
priv->tuner_i2c_adapter.algo = &rtl2830_tuner_i2c_algo;
priv->tuner_i2c_adapter.algo_data = NULL;
+ priv->tuner_i2c_adapter.dev.parent = &i2c->dev;
i2c_set_adapdata(&priv->tuner_i2c_adapter, priv);
if (i2c_add_adapter(&priv->tuner_i2c_adapter) < 0) {
dev_err(&i2c->dev,
#define ADV7183_VS_FIELD_CTRL_1 0x31 /* Vsync field control 1 */
#define ADV7183_VS_FIELD_CTRL_2 0x32 /* Vsync field control 2 */
#define ADV7183_VS_FIELD_CTRL_3 0x33 /* Vsync field control 3 */
-#define ADV7183_HS_POS_CTRL_1 0x34 /* Hsync positon control 1 */
-#define ADV7183_HS_POS_CTRL_2 0x35 /* Hsync positon control 2 */
-#define ADV7183_HS_POS_CTRL_3 0x36 /* Hsync positon control 3 */
+#define ADV7183_HS_POS_CTRL_1 0x34 /* Hsync position control 1 */
+#define ADV7183_HS_POS_CTRL_2 0x35 /* Hsync position control 2 */
+#define ADV7183_HS_POS_CTRL_3 0x36 /* Hsync position control 3 */
#define ADV7183_POLARITY 0x37 /* Polarity */
#define ADV7183_NTSC_COMB_CTRL 0x38 /* NTSC comb control */
#define ADV7183_PAL_COMB_CTRL 0x39 /* PAL comb control */
break;
case ADV7604_MODE_HDMI:
/* set default prim_mode/vid_std for HDMI
- accoring to [REF_03, c. 4.2] */
+ according to [REF_03, c. 4.2] */
io_write(sd, 0x00, 0x02); /* video std */
io_write(sd, 0x01, 0x06); /* prim mode */
break;
break;
case ADV7842_MODE_HDMI:
/* set default prim_mode/vid_std for HDMI
- accoring to [REF_03, c. 4.2] */
+ according to [REF_03, c. 4.2] */
io_write(sd, 0x00, 0x02); /* video std */
io_write(sd, 0x01, 0x06); /* prim mode */
break;
if (!rc) {
/*
- * If platform_data doesn't specify rc_dev, initilize it
+ * If platform_data doesn't specify rc_dev, initialize it
* internally
*/
rc = rc_allocate_device();
u16 zoom_step;
int ret;
- /* Determine the firmware dependant control range and step values */
+ /* Determine the firmware dependent control range and step values */
ret = m5mols_read_u16(sd, AE_MAX_GAIN_MON, &exposure_max);
if (ret < 0)
return ret;
#include <linux/i2c.h>
#include <linux/log2.h>
#include <linux/module.h>
+#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pm.h>
#include <linux/regulator/consumer.h>
mutex_unlock(&state->lock);
v4l2_dbg(1, s5c73m3_dbg, sd, "%s: Booting %s (%d)\n",
- __func__, ret ? "failed" : "succeded", ret);
+ __func__, ret ? "failed" : "succeeded", ret);
return ret;
}
/* External master clock frequency */
u32 mclk_frequency;
- /* Video bus type - MIPI-CSI2/paralell */
+ /* Video bus type - MIPI-CSI2/parallel */
enum v4l2_mbus_type bus_type;
const struct s5c73m3_frame_size *sensor_pix_size[2];
* the analog demod.
* If the tuner is not found, it returns -ENODEV.
* If auto-detection is disabled and the tuner doesn't match what it was
- * requred, it returns -EINVAL and fills 'name'.
+ * required, it returns -EINVAL and fills 'name'.
* If the chip is found, it returns the chip ID and fills 'name'.
*/
static int saa711x_detect_chip(struct i2c_client *client,
static int reg_read(struct i2c_client *client, u16 reg, u8 *val)
{
int ret;
- /* We have 16-bit i2c addresses - care for endianess */
+ /* We have 16-bit i2c addresses - care for endianness */
unsigned char data[2] = { reg >> 8, reg & 0xff };
ret = i2c_master_send(client, data, 2);
}
/* following function is used to set ths7303 */
-int ths7303_setval(struct v4l2_subdev *sd, enum ths7303_filter_mode mode)
+static int ths7303_setval(struct v4l2_subdev *sd,
+ enum ths7303_filter_mode mode)
{
struct i2c_client *client = v4l2_get_subdevdata(sd);
struct ths7303_state *state = to_state(sd);
return -EINVAL;
}
state->input = input;
- if (!v4l2_ctrl_g_ctrl(state->mute))
+ if (v4l2_ctrl_g_ctrl(state->mute))
return 0;
if (!v4l2_ctrl_g_ctrl(state->vol))
return 0;
- if (!v4l2_ctrl_g_ctrl(state->bal))
- return 0;
wm8775_set_audio(sd, 1);
return 0;
}
}
btv->std = V4L2_STD_PAL;
init_irqreg(btv);
- v4l2_ctrl_handler_setup(hdl);
+ if (!bttv_tvcards[btv->c.type].no_video)
+ v4l2_ctrl_handler_setup(hdl);
if (hdl->error) {
result = hdl->error;
goto fail2;
};
/* per-mdl bit flags */
-#define CX18_F_M_NEED_SWAP 0 /* mdl buffer data must be endianess swapped */
+#define CX18_F_M_NEED_SWAP 0 /* mdl buffer data must be endianness swapped */
/* per-stream, s_flags */
#define CX18_F_S_CLAIMED 3 /* this stream is claimed */
cx_write(MC417_RWD, regval);
/* Transition RD to effect read transaction across bus.
- * Transtion 0x5000 -> 0x9000 correct (RD/RDY -> WR/RDY)?
+ * Transition 0x5000 -> 0x9000 correct (RD/RDY -> WR/RDY)?
* Should it be 0x9000 -> 0xF000 (also why is RDY being set, it's
* input only...)
*/
/* set automatic LED control by FPGA */
pluto_rw(pluto, REG_MISC, MISC_ALED, MISC_ALED);
- /* set data endianess */
+ /* set data endianness */
#ifdef __LITTLE_ENDIAN
pluto_rw(pluto, REG_PIDn(0), PID0_END, PID0_END);
#else
if (fw_debug) {
dev->kthread = kthread_run(saa7164_thread_function, dev,
"saa7164 debug");
- if (!dev->kthread)
+ if (IS_ERR(dev->kthread)) {
+ dev->kthread = NULL;
printk(KERN_ERR "%s() Failed to create "
"debug kernel thread\n", __func__);
+ }
}
} /* != BOARD_UNKNOWN */
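
The saa7164 fix above is the classic ERR_PTR pitfall: kthread_run() never returns NULL on failure, it returns an errno encoded in the pointer, so a '!ptr' test can never fire. A self-contained sketch of the convention, with simplified macros mirroring the kernel's:

    #include <stdio.h>

    #define MAX_ERRNO       4095
    #define ERR_PTR(err)    ((void *)(long)(err))
    #define PTR_ERR(ptr)    ((long)(ptr))
    #define IS_ERR(ptr)     ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

    static void *start_thread(int fail)
    {
            return fail ? ERR_PTR(-11) : (void *)0x1000;    /* -11 == -EAGAIN */
    }

    int main(void)
    {
            void *t = start_thread(1);

            if (!t)
                    printf("never reached: failure is not NULL\n");
            if (IS_ERR(t))                  /* the correct test */
                    printf("failed: %ld\n", PTR_ERR(t));
            return 0;
    }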
if (q_data->fourcc == V4L2_PIX_FMT_H264 &&
vb->vb2_queue->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) {
/*
- * For backwards compatiblity, queuing an empty buffer marks
+ * For backwards compatibility, queuing an empty buffer marks
* the stream end
*/
if (vb2_get_plane_payload(vb, 0) == 0)
dbg("fimc%d: state: 0x%lx", fimc->id, fimc->state);
- /* Enable clocks and perform basic initalization */
+ /* Enable clocks and perform basic initialization */
clk_enable(fimc->clock[CLK_GATE]);
fimc_hw_reset(fimc);
goto dev_unlock;
drvdata = dev_get_drvdata(dev);
- /* Some subdev didn't probe succesfully id drvdata is NULL */
+ /* Some subdev didn't probe successfully if drvdata is NULL */
if (drvdata) {
switch (plat_entity) {
case IDX_FIMC:
struct mmp_camera *cam = mcam_to_cam(mcam);
struct mmp_camera_platform_data *pdata;
- if (mcam->bus_type == V4L2_MBUS_CSI2) {
- cam->mipi_clk = devm_clk_get(mcam->dev, "mipi");
- if ((IS_ERR(cam->mipi_clk) && mcam->dphy[2] == 0))
- return PTR_ERR(cam->mipi_clk);
- }
-
/*
* Turn on power and clocks to the controller.
*/
gpio_set_value(pdata->sensor_power_gpio, 0);
gpio_set_value(pdata->sensor_reset_gpio, 0);
- if (mcam->bus_type == V4L2_MBUS_CSI2 && !IS_ERR(cam->mipi_clk)) {
- if (cam->mipi_clk)
- devm_clk_put(mcam->dev, cam->mipi_clk);
- cam->mipi_clk = NULL;
- }
-
mcam_clk_disable(mcam);
}
return;
/* get the escape clk, this is hard coded */
+ clk_prepare_enable(cam->mipi_clk);
tx_clk_esc = (clk_get_rate(cam->mipi_clk) / 1000000) / 12;
-
+ clk_disable_unprepare(cam->mipi_clk);
/*
* dphy[2] - CSI2_DPHY6:
* bit 0 ~ bit 7: CK Term Enable
return IRQ_RETVAL(handled);
}
-static void mcam_deinit_clk(struct mcam_camera *mcam)
-{
- unsigned int i;
-
- for (i = 0; i < NR_MCAM_CLK; i++) {
- if (!IS_ERR(mcam->clk[i])) {
- if (mcam->clk[i])
- devm_clk_put(mcam->dev, mcam->clk[i]);
- }
- mcam->clk[i] = NULL;
- }
-}
-
static void mcam_init_clk(struct mcam_camera *mcam)
{
unsigned int i;
if (cam == NULL)
return -ENOMEM;
cam->pdev = pdev;
- cam->mipi_clk = NULL;
INIT_LIST_HEAD(&cam->devlist);
mcam = &cam->mcam;
mcam->mclk_div = pdata->mclk_div;
mcam->bus_type = pdata->bus_type;
mcam->dphy = pdata->dphy;
+ if (mcam->bus_type == V4L2_MBUS_CSI2) {
+ cam->mipi_clk = devm_clk_get(mcam->dev, "mipi");
+ if ((IS_ERR(cam->mipi_clk) && mcam->dphy[2] == 0))
+ return PTR_ERR(cam->mipi_clk);
+ }
mcam->mipi_enabled = false;
mcam->lane = pdata->lane;
mcam->chip_id = MCAM_ARMADA610;
*/
ret = mmpcam_power_up(mcam);
if (ret)
- goto out_deinit_clk;
+ return ret;
ret = mccic_register(mcam);
if (ret)
goto out_power_down;
mccic_shutdown(mcam);
out_power_down:
mmpcam_power_down(mcam);
-out_deinit_clk:
- mcam_deinit_clk(mcam);
return ret;
}
static int mmpcam_remove(struct mmp_camera *cam)
{
struct mcam_camera *mcam = &cam->mcam;
- struct mmp_camera_platform_data *pdata;
mmpcam_remove_device(cam);
mccic_shutdown(mcam);
mmpcam_power_down(mcam);
- pdata = cam->pdev->dev.platform_data;
- gpio_free(pdata->sensor_reset_gpio);
- gpio_free(pdata->sensor_power_gpio);
- mcam_deinit_clk(mcam);
- iounmap(cam->power_regs);
- iounmap(mcam->regs);
- kfree(cam);
return 0;
}
* ISP clocks get disabled in suspend(). Similarly, the clocks are reenabled in
* resume(), and the pipelines are restarted in complete().
*
- * TODO: PM dependencies between the ISP and sensors are not modeled explicitly
+ * TODO: PM dependencies between the ISP and sensors are not modelled explicitly
* yet.
*/
static int isp_pm_prepare(struct device *dev)
if (subdev == NULL)
return -EINVAL;
- mutex_lock(&video->mutex);
-
fmt.pad = pad;
fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
- ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
- if (ret == -ENOIOCTLCMD)
- ret = -EINVAL;
+ mutex_lock(&video->mutex);
+ ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
mutex_unlock(&video->mutex);
if (ret)
#define S5P_FIMV_R2H_CMD_EDFU_INIT_RET 16
#define S5P_FIMV_R2H_CMD_ERR_RET 32
-/* Dummy definition for MFCv6 compatibilty */
+/* Dummy definition for MFCv6 compatibility */
#define S5P_FIMV_CODEC_H264_MVC_DEC -1
#define S5P_FIMV_R2H_CMD_FIELD_DONE_RET -1
#define S5P_FIMV_MFC_RESET -1
frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_dec_frame_type, dev);
/* Copy timestamp / timecode from decoded src to dst and set
- appropraite flags */
+ appropriate flags */
src_buf = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
if (vb2_dma_contig_plane_dma_addr(dst_buf->b, 0) == dec_y_addr) {
case MFCINST_FINISHING:
case MFCINST_FINISHED:
case MFCINST_RUNNING:
- /* It is higly probable that an error occured
+ /* It is highly probable that an error occurred
* while decoding a frame */
clear_work_bit(ctx);
ctx->state = MFCINST_ERROR;
mfc_debug(1, "Int reason: %d (err: %08x)\n", reason, err);
switch (reason) {
case S5P_MFC_R2H_CMD_ERR_RET:
- /* An error has occured */
+ /* An error has occurred */
if (ctx->state == MFCINST_RUNNING &&
s5p_mfc_hw_call(dev->mfc_ops, err_dec, err) >=
dev->warn_start)
mutex_unlock(&dev->mfc_mutex);
mfc_debug_leave();
return ret;
- /* Deinit when failure occured */
+ /* Deinit when failure occurred */
err_queue_init:
if (dev->num_inst == 1)
s5p_mfc_deinit_hw(dev);
/* Mark context as idle */
clear_work_bit_irqsave(ctx);
/* If instance was initialised then
- * return instance and free reosurces */
+ * return instance and free resources */
if (ctx->inst_no != MFC_NO_INSTANCE_SET) {
mfc_debug(2, "Has to free instance\n");
ctx->state = MFCINST_RETURN_INST;
set_work_bit_irqsave(ctx);
s5p_mfc_clean_ctx_int_flags(ctx);
s5p_mfc_hw_call(dev->mfc_ops, try_run, dev);
- /* Wait until instance is returned or timeout occured */
+ /* Wait until instance is returned or timeout occurred */
if (s5p_mfc_wait_for_done_ctx
(ctx, S5P_MFC_R2H_CMD_CLOSE_INSTANCE_RET, 0)) {
s5p_mfc_clock_off();
} else {
/* In this case bank2 can point to the same address as bank1.
- * Firmware will always occupy the beggining of this area so it is
+ * Firmware will always occupy the beginning of this area so it is
* impossible to have a video frame buffer with zero address. */
dev->bank2 = dev->bank1;
}
int num_subframes;
/** specifies to which subframe a given plane belongs */
int plane2subframe[MXR_MAX_PLANES];
- /** internal code, driver dependant */
+ /** internal code, driver dependent */
unsigned long cookie;
};
mutex_lock(&mdev->mutex);
/* timings change cannot be done while there is an entity
- * dependant on output configuration
+ * dependent on output configuration
*/
if (mdev->n_output > 0) {
mutex_unlock(&mdev->mutex);
mutex_lock(&mdev->mutex);
/* standard change cannot be done while there is an entity
- * dependant on output configuration
+ * dependent on output configuration
*/
if (mdev->n_output > 0) {
mutex_unlock(&mdev->mutex);
if (ctrlclock & LCLK_EN)
CAM_WRITE(pcdev, CTRLCLOCK, ctrlclock);
- /* select bus endianess */
+ /* select bus endianness */
xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
fmt = xlate->host_fmt;
return 0;
}
-/* timeperframe is arbitrary and continous */
+/* timeperframe is arbitrary and continuous */
static int vidioc_enum_frameintervals(struct file *file, void *priv,
struct v4l2_frmivalenum *fival)
{
fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
- /* fill in stepwise (step=1.0 is requred by V4L2 spec) */
+ /* fill in stepwise (step=1.0 is required by V4L2 spec) */
fival->stepwise.min = tpf_min;
fival->stepwise.max = tpf_max;
fival->stepwise.step = (struct v4l2_fract) {1, 1};
* Increment the VSP1 reference count and initialize the device if the first
* reference is taken.
*
- * Return a pointer to the VSP1 device or NULL if an error occured.
+ * Return a pointer to the VSP1 device or NULL if an error occurred.
*/
struct vsp1_device *vsp1_device_get(struct vsp1_device *vsp1)
{
/* ... and the buffers queue... */
video->alloc_ctx = vb2_dma_contig_init_ctx(video->vsp1->dev);
- if (IS_ERR(video->alloc_ctx))
+ if (IS_ERR(video->alloc_ctx)) {
+ ret = PTR_ERR(video->alloc_ctx);
goto error;
+ }
video->queue.type = video->type;
video->queue.io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
cancel_work_sync(&shark->led_work);
}
-#ifdef CONFIG_PM
-static void shark_resume_leds(struct shark_device *shark)
+static inline void shark_resume_leds(struct shark_device *shark)
{
if (test_bit(BLUE_IS_PULSE, &shark->brightness_new))
set_bit(BLUE_PULSE_LED, &shark->brightness_new);
set_bit(RED_LED, &shark->brightness_new);
schedule_work(&shark->led_work);
}
-#endif
#else
static int shark_register_leds(struct shark_device *shark, struct device *dev)
{
cancel_work_sync(&shark->led_work);
}
-#ifdef CONFIG_PM
-static void shark_resume_leds(struct shark_device *shark)
+static inline void shark_resume_leds(struct shark_device *shark)
{
int i;
schedule_work(&shark->led_work);
}
-#endif
#else
static int shark_register_leds(struct shark_device *shark, struct device *dev)
{
*
* @tune_freq: Tune chip to a specific frequency
* @seek_start: Start station seeking
- * @rsq_status: Get Recieved Signal Quality(RSQ) status
- * @rds_blckcnt: Get recived RDS blocks count
+ * @rsq_status: Get Received Signal Quality(RSQ) status
+ * @rds_blckcnt: Get received RDS blocks count
* @phase_diversity: Change phase diversity mode of the tuner
* @phase_div_status: Get phase diversity mode status
* @acf_status: Get the status of Automatically Controlled
So we keep it as-is. */
return -EINVAL;
}
- clamp(freq, FREQ_MIN * FREQ_MUL, FREQ_MAX * FREQ_MUL);
+ freq = clamp(freq, FREQ_MIN * FREQ_MUL, FREQ_MAX * FREQ_MUL);
tea5764_power_up(radio);
tea5764_tune(radio, (freq * 125) / 2);
return 0;
if (f->tuner != 0)
return -EINVAL;
- clamp(freq, TEF6862_LO_FREQ, TEF6862_HI_FREQ);
+ freq = clamp(freq, TEF6862_LO_FREQ, TEF6862_HI_FREQ);
pll = 1964 + ((freq - TEF6862_LO_FREQ) * 20) / FREQ_MUL;
i2cmsg[0] = (MSA_MODE_PRESET << MSA_MODE_SHIFT) | WM_SUB_PLLM;
i2cmsg[1] = (pll >> 8) & 0xff;
* 0x68nnnnB7 to 0x6AnnnnB7, the left mouse button generates
* 0x688301b7 and the right one 0x688481b7. All other keys generate
* 0x2nnnnnnn. Position coordinate is encoded in buf[1] and buf[2] with
- * reversed endianess. Extract direction from buffer, rotate endianess,
+ * reversed endianness. Extract direction from buffer, rotate endianness,
* adjust sign and feed the values into stabilize(). The resulting codes
* will be 0x01008000, 0x01007F00, which match the newer devices.
*/
#define RR3_IR_IO_LENGTH_FUZZ 0x04
/* Timeout for end of signal detection */
#define RR3_IR_IO_SIG_TIMEOUT 0x05
-/* Minumum value for pause recognition. */
+/* Minimum value for pause recognition. */
#define RR3_IR_IO_MIN_PAUSE 0x06
/* Clock freq. of EZ-USB chip */
* DNC Output is selected, the other is always off)
*
* @state: ptr to mt2063_state structure
- * @Mode: desired reciever delivery system
+ * @Mode: desired receiver delivery system
*
* Note: Register cache must be valid for it to work
*/
/*
* As defined on EN 300 429, the DVB-C roll-off factor is 0.15.
- * So, the amount of the needed bandwith is given by:
+ * So, the amount of the needed bandwidth is given by:
* Bw = Symbol_rate * (1 + 0.15)
* As such, the maximum symbol rate supported by 6 MHz is given by:
* max_symbol_rate = 6 MHz / 1.15 = 5217391 Bauds
#define V4L2_STD_A2 (V4L2_STD_A2_A | V4L2_STD_A2_B)
#define V4L2_STD_NICAM (V4L2_STD_NICAM_A | V4L2_STD_NICAM_B)
-/* To preserve backward compatibilty,
+/* To preserve backward compatibility,
(std & V4L2_STD_AUDIO) = 0 means that ALL audio stds are supported
*/
usb_set_intfdata(interface, NULL);
err_if:
usb_put_dev(udev);
- kfree(dev);
clear_bit(dev->devno, &cx231xx_devused);
+ kfree(dev);
return retval;
}
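
The cx231xx hunk above reorders a teardown path so that the device structure is no longer dereferenced after kfree(). The rule generalizes: copy out anything you still need, free the object, then work with the copy. A userspace sketch with hypothetical names:

    #include <stdlib.h>

    struct dev { unsigned long devno; };

    static void teardown(struct dev *d, unsigned long *used_mask)
    {
            /* Capture what we still need from *d before freeing it... */
            unsigned long devno = d->devno;

            free(d);                        /* after this, d must not be touched */
            *used_mask &= ~(1UL << devno);  /* ...then work with the saved copy */
    }

    int main(void)
    {
            unsigned long used = 1UL << 3;
            struct dev *d = malloc(sizeof(*d));

            if (!d)
                    return 1;
            d->devno = 3;
            teardown(d, &used);             /* used is now 0 */
            return 0;
    }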
{
u8 wbuf[MAX_XFER_SIZE];
u8 mbox = (reg >> 16) & 0xff;
- struct usb_req req = { CMD_MEM_WR, mbox, sizeof(wbuf), wbuf, 0, NULL };
+ struct usb_req req = { CMD_MEM_WR, mbox, 6 + len, wbuf, 0, NULL };
if (6 + len > sizeof(wbuf)) {
dev_warn(&d->udev->dev, "%s: i2c wr: len=%d is too big!\n",
} else {
/* I2C */
u8 buf[MAX_XFER_SIZE];
- struct usb_req req = { CMD_I2C_RD, 0, sizeof(buf),
+ struct usb_req req = { CMD_I2C_RD, 0, 5 + msg[0].len,
buf, msg[1].len, msg[1].buf };
if (5 + msg[0].len > sizeof(buf)) {
dev_warn(&d->udev->dev,
"%s: i2c xfer: len=%d is too big!\n",
KBUILD_MODNAME, msg[0].len);
- return -EOPNOTSUPP;
+ ret = -EOPNOTSUPP;
+ goto unlock;
}
req.mbox |= ((msg[0].addr & 0x80) >> 3);
buf[0] = msg[1].len;
} else {
/* I2C */
u8 buf[MAX_XFER_SIZE];
- struct usb_req req = { CMD_I2C_WR, 0, sizeof(buf), buf,
- 0, NULL };
+ struct usb_req req = { CMD_I2C_WR, 0, 5 + msg[0].len,
+ buf, 0, NULL };
if (5 + msg[0].len > sizeof(buf)) {
dev_warn(&d->udev->dev,
"%s: i2c xfer: len=%d is too big!\n",
KBUILD_MODNAME, msg[0].len);
- return -EOPNOTSUPP;
+ ret = -EOPNOTSUPP;
+ goto unlock;
}
req.mbox |= ((msg[0].addr & 0x80) >> 3);
buf[0] = msg[0].len;
ret = -EOPNOTSUPP;
}
+unlock:
mutex_unlock(&d->i2c_mutex);
if (ret < 0)
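
The two error paths above used to return with d->i2c_mutex still held; the fix routes them through a single unlock label instead. A minimal POSIX-threads sketch of the same idiom (illustrative, not the driver code):

    #include <pthread.h>
    #include <errno.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static int do_transfer(int len, int max)
    {
            int ret = 0;

            pthread_mutex_lock(&lock);

            if (len > max) {
                    ret = -EOPNOTSUPP;  /* was: return -EOPNOTSUPP, leaking the lock */
                    goto unlock;
            }

            /* ... perform the transfer under the lock ... */

    unlock:
            pthread_mutex_unlock(&lock);
            return ret;
    }

    int main(void)
    {
            return do_transfer(10, 8) ? 1 : 0;  /* fails cleanly, lock released */
    }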
/* XXX: that same ID [0ccd:0099] is used by af9015 driver too */
{ DVB_USB_DEVICE(USB_VID_TERRATEC, 0x0099,
&af9035_props, "TerraTec Cinergy T Stick Dual RC (rev. 2)", NULL) },
+ { DVB_USB_DEVICE(USB_VID_LEADTEK, 0x6a05,
+ &af9035_props, "Leadtek WinFast DTV Dongle Dual", NULL) },
{ }
};
MODULE_DEVICE_TABLE(usb, af9035_id_table);
struct mxl111sf_adap_state *adap_state = &state->adap_state[fe->id];
int err;
- /* exit if we didnt initialize the driver yet */
+ /* exit if we didn't initialize the driver yet */
if (!state->chip_id) {
mxl_debug("driver not yet initialized, exit.");
goto fail;
struct mxl111sf_adap_state *adap_state = &state->adap_state[fe->id];
int err;
- /* exit if we didnt initialize the driver yet */
+ /* exit if we didn't initialize the driver yet */
if (!state->chip_id) {
mxl_debug("driver not yet initialized, exit.");
goto fail;
if (rxlen > 62) {
err("i2c RX buffer can't exceed 62 bytes (dev 0x%02x)",
device_addr);
- txlen = 62;
+ rxlen = 62;
}
b[0] = I2C_SPEED_100KHZ_BIT;
em28xx_videodbg("users=%d\n", dev->users);
- mutex_lock(&dev->lock);
vb2_fop_release(filp);
+ mutex_lock(&dev->lock);
if (dev->users == 1) {
/* the device is already disconnect,
s32 nToSkip =
sd->swapRB * (gspca_dev->cam.cam_mode[mode].bytesperline + 1);
- /* Test only against 0202h, so endianess does not matter */
+ /* Test only against 0202h, so endianness does not matter */
switch (*(s16 *) data) {
case 0x0202: /* End of frame, start a new one */
gspca_frame_add(gspca_dev, LAST_PACKET, NULL, 0);
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
int ret = -EINVAL;
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
int ret = -EINVAL;
u8 data0, data1;
/* set serial interface clock divider (30MHz/0x1f*16+2) = 60240 kHz) */
reg_w(gspca_dev, STK1135_REG_SICTL + 2, 0x1f);
+
+ /* wait a while for sensor to catch up */
+ udelay(1000);
}
static void stk1135_camera_disable(struct gspca_dev *gspca_dev)
struct sd *sd = (struct sd *) gspca_dev;
struct cam *cam = &gspca_dev->cam;
- /* Give the camera some time to settle, otherwise initalization will
+ /* Give the camera some time to settle, otherwise initialization will
fail on hotplug, and yes it really needs a full second. */
msleep(1000);
{USB_DEVICE(0x055f, 0xc650), BS(SPCA533, 0)},
{USB_DEVICE(0x05da, 0x1018), BS(SPCA504B, 0)},
{USB_DEVICE(0x06d6, 0x0031), BS(SPCA533, 0)},
+ {USB_DEVICE(0x06d6, 0x0041), BS(SPCA504B, 0)},
{USB_DEVICE(0x0733, 0x1311), BS(SPCA533, 0)},
{USB_DEVICE(0x0733, 0x1314), BS(SPCA533, 0)},
{USB_DEVICE(0x0733, 0x2211), BS(SPCA533, 0)},
#if IS_ENABLED(CONFIG_INPUT)
static int sd_int_pkt_scan(struct gspca_dev *gspca_dev,
u8 *data, /* interrupt packet data */
- int len) /* interrput packet length */
+ int len) /* interrupt packet length */
{
if (len == 8 && data[4] == 1) {
input_report_key(gspca_dev->input_dev, KEY_CAMERA, 1);
/* Set the leds off */
pwc_set_leds(pdev, 0, 0);
- /* Setup intial videomode */
+ /* Setup initial videomode */
rc = pwc_set_video_mode(pdev, MAX_WIDTH, MAX_HEIGHT,
V4L2_PIX_FMT_YUV420, 30, &compression, 1);
if (rc)
#define USBTV_ISOC_TRANSFERS 16
#define USBTV_ISOC_PACKETS 8
-#define USBTV_WIDTH 720
-#define USBTV_HEIGHT 480
-
#define USBTV_CHUNK_SIZE 256
#define USBTV_CHUNK 240
-#define USBTV_CHUNKS (USBTV_WIDTH * USBTV_HEIGHT \
- / 4 / USBTV_CHUNK)
/* Chunk header. */
#define USBTV_MAGIC_OK(chunk) ((be32_to_cpu(chunk[0]) & 0xff000000) \
#define USBTV_ODD(chunk) ((be32_to_cpu(chunk[0]) & 0x0000f000) >> 15)
#define USBTV_CHUNK_NO(chunk) (be32_to_cpu(chunk[0]) & 0x00000fff)
+#define USBTV_TV_STD (V4L2_STD_525_60 | V4L2_STD_PAL)
+
+/* parameters for supported TV norms */
+struct usbtv_norm_params {
+ v4l2_std_id norm;
+ int cap_width, cap_height;
+};
+
+static struct usbtv_norm_params norm_params[] = {
+ {
+ .norm = V4L2_STD_525_60,
+ .cap_width = 720,
+ .cap_height = 480,
+ },
+ {
+ .norm = V4L2_STD_PAL,
+ .cap_width = 720,
+ .cap_height = 576,
+ }
+};
+
/* A single videobuf2 frame buffer. */
struct usbtv_buf {
struct vb2_buffer vb;
USBTV_COMPOSITE_INPUT,
USBTV_SVIDEO_INPUT,
} input;
+ v4l2_std_id norm;
+ int width, height;
+ int n_chunks;
int iso_size;
unsigned int sequence;
struct urb *isoc_urbs[USBTV_ISOC_TRANSFERS];
};
+static int usbtv_configure_for_norm(struct usbtv *usbtv, v4l2_std_id norm)
+{
+ int i, ret = 0;
+ struct usbtv_norm_params *params = NULL;
+
+ for (i = 0; i < ARRAY_SIZE(norm_params); i++) {
+ if (norm_params[i].norm & norm) {
+ params = &norm_params[i];
+ break;
+ }
+ }
+
+ if (params) {
+ usbtv->width = params->cap_width;
+ usbtv->height = params->cap_height;
+ usbtv->n_chunks = usbtv->width * usbtv->height
+ / 4 / USBTV_CHUNK;
+ usbtv->norm = params->norm;
+ } else
+ ret = -EINVAL;
+
+ return ret;
+}
+
static int usbtv_set_regs(struct usbtv *usbtv, const u16 regs[][2], int size)
{
int ret;
return ret;
}
+static int usbtv_select_norm(struct usbtv *usbtv, v4l2_std_id norm)
+{
+ int ret;
+ static const u16 pal[][2] = {
+ { USBTV_BASE + 0x001a, 0x0068 },
+ { USBTV_BASE + 0x010e, 0x0072 },
+ { USBTV_BASE + 0x010f, 0x00a2 },
+ { USBTV_BASE + 0x0112, 0x00b0 },
+ { USBTV_BASE + 0x0117, 0x0001 },
+ { USBTV_BASE + 0x0118, 0x002c },
+ { USBTV_BASE + 0x012d, 0x0010 },
+ { USBTV_BASE + 0x012f, 0x0020 },
+ { USBTV_BASE + 0x024f, 0x0002 },
+ { USBTV_BASE + 0x0254, 0x0059 },
+ { USBTV_BASE + 0x025a, 0x0016 },
+ { USBTV_BASE + 0x025b, 0x0035 },
+ { USBTV_BASE + 0x0263, 0x0017 },
+ { USBTV_BASE + 0x0266, 0x0016 },
+ { USBTV_BASE + 0x0267, 0x0036 }
+ };
+
+ static const u16 ntsc[][2] = {
+ { USBTV_BASE + 0x001a, 0x0079 },
+ { USBTV_BASE + 0x010e, 0x0068 },
+ { USBTV_BASE + 0x010f, 0x009c },
+ { USBTV_BASE + 0x0112, 0x00f0 },
+ { USBTV_BASE + 0x0117, 0x0000 },
+ { USBTV_BASE + 0x0118, 0x00fc },
+ { USBTV_BASE + 0x012d, 0x0004 },
+ { USBTV_BASE + 0x012f, 0x0008 },
+ { USBTV_BASE + 0x024f, 0x0001 },
+ { USBTV_BASE + 0x0254, 0x005f },
+ { USBTV_BASE + 0x025a, 0x0012 },
+ { USBTV_BASE + 0x025b, 0x0001 },
+ { USBTV_BASE + 0x0263, 0x001c },
+ { USBTV_BASE + 0x0266, 0x0011 },
+ { USBTV_BASE + 0x0267, 0x0005 }
+ };
+
+ ret = usbtv_configure_for_norm(usbtv, norm);
+
+ if (!ret) {
+ if (norm & V4L2_STD_525_60)
+ ret = usbtv_set_regs(usbtv, ntsc, ARRAY_SIZE(ntsc));
+ else if (norm & V4L2_STD_PAL)
+ ret = usbtv_set_regs(usbtv, pal, ARRAY_SIZE(pal));
+ }
+
+ return ret;
+}
+
static int usbtv_setup_capture(struct usbtv *usbtv)
{
int ret;
{ USBTV_BASE + 0x0284, 0x0088 },
{ USBTV_BASE + 0x0003, 0x0004 },
- { USBTV_BASE + 0x001a, 0x0079 },
{ USBTV_BASE + 0x0100, 0x00d3 },
- { USBTV_BASE + 0x010e, 0x0068 },
- { USBTV_BASE + 0x010f, 0x009c },
- { USBTV_BASE + 0x0112, 0x00f0 },
{ USBTV_BASE + 0x0115, 0x0015 },
- { USBTV_BASE + 0x0117, 0x0000 },
- { USBTV_BASE + 0x0118, 0x00fc },
- { USBTV_BASE + 0x012d, 0x0004 },
- { USBTV_BASE + 0x012f, 0x0008 },
{ USBTV_BASE + 0x0220, 0x002e },
{ USBTV_BASE + 0x0225, 0x0008 },
{ USBTV_BASE + 0x024e, 0x0002 },
- { USBTV_BASE + 0x024f, 0x0001 },
- { USBTV_BASE + 0x0254, 0x005f },
- { USBTV_BASE + 0x025a, 0x0012 },
- { USBTV_BASE + 0x025b, 0x0001 },
- { USBTV_BASE + 0x0263, 0x001c },
- { USBTV_BASE + 0x0266, 0x0011 },
- { USBTV_BASE + 0x0267, 0x0005 },
{ USBTV_BASE + 0x024e, 0x0002 },
{ USBTV_BASE + 0x024f, 0x0002 },
};
if (ret)
return ret;
+ ret = usbtv_select_norm(usbtv, usbtv->norm);
+ if (ret)
+ return ret;
+
ret = usbtv_select_input(usbtv, usbtv->input);
if (ret)
return ret;
frame_id = USBTV_FRAME_ID(chunk);
odd = USBTV_ODD(chunk);
chunk_no = USBTV_CHUNK_NO(chunk);
- if (chunk_no >= USBTV_CHUNKS)
+ if (chunk_no >= usbtv->n_chunks)
return;
/* Beginning of a frame. */
usbtv->chunks_done++;
/* Last chunk in a frame, signalling an end */
- if (odd && chunk_no == USBTV_CHUNKS-1) {
+ if (odd && chunk_no == usbtv->n_chunks-1) {
int size = vb2_plane_size(&buf->vb, 0);
enum vb2_buffer_state state = usbtv->chunks_done ==
- USBTV_CHUNKS ?
+ usbtv->n_chunks ?
VB2_BUF_STATE_DONE :
VB2_BUF_STATE_ERROR;
static int usbtv_enum_input(struct file *file, void *priv,
struct v4l2_input *i)
{
+ struct usbtv *dev = video_drvdata(file);
+
switch (i->index) {
case USBTV_COMPOSITE_INPUT:
strlcpy(i->name, "Composite", sizeof(i->name));
}
i->type = V4L2_INPUT_TYPE_CAMERA;
- i->std = V4L2_STD_525_60;
+ i->std = dev->vdev.tvnorms;
return 0;
}
static int usbtv_fmt_vid_cap(struct file *file, void *priv,
struct v4l2_format *f)
{
- f->fmt.pix.width = USBTV_WIDTH;
- f->fmt.pix.height = USBTV_HEIGHT;
+ struct usbtv *usbtv = video_drvdata(file);
+
+ f->fmt.pix.width = usbtv->width;
+ f->fmt.pix.height = usbtv->height;
f->fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
f->fmt.pix.field = V4L2_FIELD_INTERLACED;
- f->fmt.pix.bytesperline = USBTV_WIDTH * 2;
+ f->fmt.pix.bytesperline = usbtv->width * 2;
f->fmt.pix.sizeimage = (f->fmt.pix.bytesperline * f->fmt.pix.height);
f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
- f->fmt.pix.priv = 0;
+
return 0;
}
static int usbtv_g_std(struct file *file, void *priv, v4l2_std_id *norm)
{
- *norm = V4L2_STD_525_60;
+ struct usbtv *usbtv = video_drvdata(file);
+ *norm = usbtv->norm;
return 0;
}
+static int usbtv_s_std(struct file *file, void *priv, v4l2_std_id norm)
+{
+ int ret = -EINVAL;
+ struct usbtv *usbtv = video_drvdata(file);
+
+ if ((norm & V4L2_STD_525_60) || (norm & V4L2_STD_PAL))
+ ret = usbtv_select_norm(usbtv, norm);
+
+ return ret;
+}
+
static int usbtv_g_input(struct file *file, void *priv, unsigned int *i)
{
struct usbtv *usbtv = video_drvdata(file);
return usbtv_select_input(usbtv, i);
}
-static int usbtv_s_std(struct file *file, void *priv, v4l2_std_id norm)
-{
- if (norm & V4L2_STD_525_60)
- return 0;
- return -EINVAL;
-}
-
struct v4l2_ioctl_ops usbtv_ioctl_ops = {
.vidioc_querycap = usbtv_querycap,
.vidioc_enum_input = usbtv_enum_input,
const struct v4l2_format *v4l_fmt, unsigned int *nbuffers,
unsigned int *nplanes, unsigned int sizes[], void *alloc_ctxs[])
{
+ struct usbtv *usbtv = vb2_get_drv_priv(vq);
+
if (*nbuffers < 2)
*nbuffers = 2;
*nplanes = 1;
- sizes[0] = USBTV_WIDTH * USBTV_HEIGHT / 2 * sizeof(u32);
+ sizes[0] = USBTV_CHUNK * usbtv->n_chunks * 2 * sizeof(u32);
return 0;
}
return -ENOMEM;
usbtv->dev = dev;
usbtv->udev = usb_get_dev(interface_to_usbdev(intf));
+
usbtv->iso_size = size;
+
+ (void)usbtv_configure_for_norm(usbtv, V4L2_STD_525_60);
+
spin_lock_init(&usbtv->buflock);
mutex_init(&usbtv->v4l2_lock);
mutex_init(&usbtv->vb2q_lock);
usbtv->vdev.release = video_device_release_empty;
usbtv->vdev.fops = &usbtv_fops;
usbtv->vdev.ioctl_ops = &usbtv_ioctl_ops;
- usbtv->vdev.tvnorms = V4L2_STD_525_60;
+ usbtv->vdev.tvnorms = USBTV_TV_STD;
usbtv->vdev.queue = &usbtv->vb2q;
usbtv->vdev.lock = &usbtv->v4l2_lock;
set_bit(V4L2_FL_USE_FH_PRIO, &usbtv->vdev.flags);
*
* SOF = ((SOF2 - SOF1) * PTS + SOF1 * STC2 - SOF2 * STC1) / (STC2 - STC1) (1)
*
- * to avoid loosing precision in the division. Similarly, the host timestamp is
+ * to avoid losing precision in the division. Similarly, the host timestamp is
* computed with
*
* TS = ((TS2 - TS1) * PTS + TS1 * SOF2 - TS2 * SOF1) / (SOF2 - SOF1) (2)
"Advanced Simple",
"Core",
"Simple Scalable",
- "Advanced Coding Efficency",
+ "Advanced Coding Efficiency",
NULL,
};
__vb2_plane_dmabuf_put(q, &vb->planes[plane]);
}
+/**
+ * __setup_lengths() - setup initial lengths for every plane in
+ * every buffer on the queue
+ */
+static void __setup_lengths(struct vb2_queue *q, unsigned int n)
+{
+ unsigned int buffer, plane;
+ struct vb2_buffer *vb;
+
+ for (buffer = q->num_buffers; buffer < q->num_buffers + n; ++buffer) {
+ vb = q->bufs[buffer];
+ if (!vb)
+ continue;
+
+ for (plane = 0; plane < vb->num_planes; ++plane)
+ vb->v4l2_planes[plane].length = q->plane_sizes[plane];
+ }
+}
+
/**
* __setup_offsets() - setup unique offsets ("cookies") for every plane in
* every buffer on the queue
continue;
for (plane = 0; plane < vb->num_planes; ++plane) {
- vb->v4l2_planes[plane].length = q->plane_sizes[plane];
vb->v4l2_planes[plane].m.mem_offset = off;
dprintk(3, "Buffer %d, plane %d offset 0x%08lx\n",
q->bufs[q->num_buffers + buffer] = vb;
}
+ __setup_lengths(q, buffer);
if (memory == V4L2_MEMORY_MMAP)
__setup_offsets(q, buffer);
return -EINVAL;
}
- if (eb->flags & ~O_CLOEXEC) {
- dprintk(1, "Queue does support only O_CLOEXEC flag\n");
+ if (eb->flags & ~(O_CLOEXEC | O_ACCMODE)) {
+ dprintk(1, "Queue does support only O_CLOEXEC and access mode flags\n");
return -EINVAL;
}
vb_plane = &vb->planes[eb->plane];
- dbuf = call_memop(q, get_dmabuf, vb_plane->mem_priv);
+ dbuf = call_memop(q, get_dmabuf, vb_plane->mem_priv, eb->flags & O_ACCMODE);
if (IS_ERR_OR_NULL(dbuf)) {
dprintk(1, "Failed to export buffer %d, plane %d\n",
eb->index, eb->plane);
return -EINVAL;
}
- ret = dma_buf_fd(dbuf, eb->flags);
+ ret = dma_buf_fd(dbuf, eb->flags & ~O_ACCMODE);
if (ret < 0) {
dprintk(3, "buffer %d, plane %d failed to export (%d)\n",
eb->index, eb->plane, ret);
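
The vb2 change above splits the expbuf flags into an access mode and everything else: O_ACCMODE masks the O_RDONLY/O_WRONLY/O_RDWR bits, which are handed to the exporter, while the remaining bits (O_CLOEXEC) go to dma_buf_fd(). A small userspace demonstration of the mask:

    #include <stdio.h>
    #include <fcntl.h>

    int main(void)
    {
            int flags = O_CLOEXEC | O_RDWR;

            printf("access mode: %#x\n", flags & O_ACCMODE);    /* O_RDWR */
            printf("other flags: %#x\n", flags & ~O_ACCMODE);   /* O_CLOEXEC */
            return 0;
    }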
return sgt;
}
-static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv)
+static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv, unsigned long flags)
{
struct vb2_dc_buf *buf = buf_priv;
struct dma_buf *dbuf;
if (WARN_ON(!buf->sgt_base))
return NULL;
- dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, 0);
+ dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, flags);
if (IS_ERR(dbuf))
return NULL;
buf->pages = kzalloc(buf->num_pages * sizeof(struct page *),
GFP_KERNEL);
if (!buf->pages)
- return NULL;
+ goto userptr_fail_alloc_pages;
num_pages_from_user = get_user_pages(current, current->mm,
vaddr & PAGE_MASK,
while (--num_pages_from_user >= 0)
put_page(buf->pages[num_pages_from_user]);
kfree(buf->pages);
+userptr_fail_alloc_pages:
kfree(buf);
return NULL;
}
int sec_reg_read(struct sec_pmic_dev *sec_pmic, u8 reg, void *dest)
{
- return regmap_read(sec_pmic->regmap, reg, dest);
+ return regmap_read(sec_pmic->regmap_pmic, reg, dest);
}
EXPORT_SYMBOL_GPL(sec_reg_read);
int sec_bulk_read(struct sec_pmic_dev *sec_pmic, u8 reg, int count, u8 *buf)
{
- return regmap_bulk_read(sec_pmic->regmap, reg, buf, count);
+ return regmap_bulk_read(sec_pmic->regmap_pmic, reg, buf, count);
}
EXPORT_SYMBOL_GPL(sec_bulk_read);
int sec_reg_write(struct sec_pmic_dev *sec_pmic, u8 reg, u8 value)
{
- return regmap_write(sec_pmic->regmap, reg, value);
+ return regmap_write(sec_pmic->regmap_pmic, reg, value);
}
EXPORT_SYMBOL_GPL(sec_reg_write);
int sec_bulk_write(struct sec_pmic_dev *sec_pmic, u8 reg, int count, u8 *buf)
{
- return regmap_raw_write(sec_pmic->regmap, reg, buf, count);
+ return regmap_raw_write(sec_pmic->regmap_pmic, reg, buf, count);
}
EXPORT_SYMBOL_GPL(sec_bulk_write);
int sec_reg_update(struct sec_pmic_dev *sec_pmic, u8 reg, u8 val, u8 mask)
{
- return regmap_update_bits(sec_pmic->regmap, reg, mask, val);
+ return regmap_update_bits(sec_pmic->regmap_pmic, reg, mask, val);
}
EXPORT_SYMBOL_GPL(sec_reg_update);
.cache_type = REGCACHE_FLAT,
};
+static const struct regmap_config sec_rtc_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+};
+
#ifdef CONFIG_OF
/*
* Only the common platform data elements for s5m8767 are parsed here from the
break;
}
- sec_pmic->regmap = devm_regmap_init_i2c(i2c, regmap);
- if (IS_ERR(sec_pmic->regmap)) {
- ret = PTR_ERR(sec_pmic->regmap);
+ sec_pmic->regmap_pmic = devm_regmap_init_i2c(i2c, regmap);
+ if (IS_ERR(sec_pmic->regmap_pmic)) {
+ ret = PTR_ERR(sec_pmic->regmap_pmic);
dev_err(&i2c->dev, "Failed to allocate register map: %d\n",
ret);
return ret;
sec_pmic->rtc = i2c_new_dummy(i2c->adapter, RTC_I2C_ADDR);
i2c_set_clientdata(sec_pmic->rtc, sec_pmic);
+ sec_pmic->regmap_rtc = devm_regmap_init_i2c(sec_pmic->rtc,
+ &sec_rtc_regmap_config);
+ if (IS_ERR(sec_pmic->regmap_rtc)) {
+ ret = PTR_ERR(sec_pmic->regmap_rtc);
+ dev_err(&i2c->dev, "Failed to allocate RTC register map: %d\n",
+ ret);
+ return ret;
+ }
+
if (pdata && pdata->cfg_pmic_irq)
pdata->cfg_pmic_irq();
switch (type) {
case S5M8763X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s5m8763_irq_chip,
&sec_pmic->irq_data);
break;
case S5M8767X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s5m8767_irq_chip,
&sec_pmic->irq_data);
break;
case S2MPS11X:
- ret = regmap_add_irq_chip(sec_pmic->regmap, sec_pmic->irq,
+ ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
sec_pmic->irq_base, &s2mps11_irq_chip,
&sec_pmic->irq_data);
static void pxa3xx_nand_free_buff(struct pxa3xx_nand_info *info)
{
struct platform_device *pdev = info->pdev;
- if (use_dma) {
+ if (info->use_dma) {
pxa_free_dma(info->data_dma_ch);
dma_free_coherent(&pdev->dev, info->buf_size,
info->data_buff, info->data_buff_phys);
.compatible = "marvell,pxa3xx-nand",
.data = (void *)PXA3XX_NAND_VARIANT_PXA,
},
- {
- .compatible = "marvell,armada370-nand",
- .data = (void *)PXA3XX_NAND_VARIANT_ARMADA370,
- },
{}
};
MODULE_DEVICE_TABLE(of, pxa3xx_nand_dt_ids);
(arp_ip_count < BOND_MAX_ARP_TARGETS) && arp_ip_target[i]; i++) {
/* not complete check, but should be good enough to
catch mistakes */
- __be32 ip = in_aton(arp_ip_target[i]);
- if (!isdigit(arp_ip_target[i][0]) || ip == 0 ||
- ip == htonl(INADDR_BROADCAST)) {
+ __be32 ip;
+ if (!in4_pton(arp_ip_target[i], -1, (u8 *)&ip, -1, NULL) ||
+ IS_IP_TARGET_UNUSABLE_ADDRESS(ip)) {
pr_warning("Warning: bad arp_ip_target module parameter (%s), ARP monitoring will not be performed\n",
arp_ip_target[i]);
arp_interval = 0;
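
The bonding fix swaps in_aton() plus an ad-hoc digit check for in4_pton(), which validates the whole string. The userspace analogue is inet_pton(), shown here rejecting the malformed targets the old code could let through (a sketch, not the module code):

    #include <stdio.h>
    #include <arpa/inet.h>

    int main(void)
    {
            struct in_addr ip;

            printf("%d\n", inet_pton(AF_INET, "10.0.0.1", &ip));    /* 1: valid */
            printf("%d\n", inet_pton(AF_INET, "10.0.0.", &ip));     /* 0: rejected */
            printf("%d\n", inet_pton(AF_INET, "1.2.3.4.5", &ip));   /* 0: rejected */
            return 0;
    }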
char *buf)
{
struct bonding *bond = to_bond(d);
- int packets_per_slave = bond->params.packets_per_slave;
+ unsigned int packets_per_slave = bond->params.packets_per_slave;
if (packets_per_slave > 1)
packets_per_slave = reciprocal_value(packets_per_slave);
- return sprintf(buf, "%d\n", packets_per_slave);
+ return sprintf(buf, "%u\n", packets_per_slave);
}
static ssize_t bonding_store_packets_per_slave(struct device *d,
usb_unanchor_urb(urb);
usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf,
urb->transfer_dma);
+ usb_free_urb(urb);
break;
}
* allowed (MAX_TX_URBS).
*/
if (!context) {
- usb_unanchor_urb(urb);
usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
+ usb_free_urb(urb);
netdev_warn(netdev, "couldn't find free context\n");
/* set LED in default state (end of init phase) */
pcan_usb_pro_set_led(dev, 0, 1);
+ kfree(bi);
+ kfree(fi);
+
return 0;
err_out:
if (netif_msg_ifup(db))
dev_dbg(db->dev, "enabling %s\n", dev->name);
- if (devm_request_irq(db->dev, dev->irq, &emac_interrupt,
- 0, dev->name, dev))
+ if (request_irq(dev->irq, &emac_interrupt, 0, dev->name, dev))
return -EAGAIN;
/* Initialize EMAC board */
emac_shutdown(ndev);
+ free_irq(ndev->irq, ndev);
+
return 0;
}
{
struct bnx2x *bp = netdev_priv(pci_get_drvdata(dev));
+ if (!IS_SRIOV(bp)) {
+ BNX2X_ERR("failed to configure SR-IOV since vfdb was not allocated. Check dmesg for errors in probe stage\n");
+ return -EINVAL;
+ }
+
DP(BNX2X_MSG_IOV, "bnx2x_sriov_configure called with %d, BNX2X_NR_VIRTFN(bp) was %d\n",
num_vfs_param, BNX2X_NR_VIRTFN(bp));
void (*write_op)(struct tg3 *, u32, u32);
int i, err;
+ if (!pci_device_is_present(tp->pdev))
+ return -ENODEV;
+
tg3_nvram_lock(tp);
tg3_ape_lock(tp, TG3_APE_LOCK_GRC);
memset(&tp->net_stats_prev, 0, sizeof(tp->net_stats_prev));
memset(&tp->estats_prev, 0, sizeof(tp->estats_prev));
- tg3_power_down_prepare(tp);
-
- tg3_carrier_off(tp);
+ if (pci_device_is_present(tp->pdev)) {
+ tg3_power_down_prepare(tp);
+ tg3_carrier_off(tp);
+ }
return 0;
}
/* Clear this out for sanity. */
tw32(TG3PCI_MEM_WIN_BASE_ADDR, 0);
+ /* Clear TG3PCI_REG_BASE_ADDR to prevent hangs. */
+ tw32(TG3PCI_REG_BASE_ADDR, 0);
+
pci_read_config_dword(tp->pdev, TG3PCI_PCISTATE,
&pci_state_reg);
if ((pci_state_reg & PCISTATE_CONV_PCI_MODE) == 0 &&
struct pci_dev *pdev = to_pci_dev(device);
struct net_device *dev = pci_get_drvdata(pdev);
struct tg3 *tp = netdev_priv(dev);
- int err;
+ int err = 0;
+
+ rtnl_lock();
if (!netif_running(dev))
- return 0;
+ goto unlock;
tg3_reset_task_cancel(tp);
tg3_phy_stop(tp);
tg3_phy_start(tp);
}
+unlock:
+ rtnl_unlock();
return err;
}
struct pci_dev *pdev = to_pci_dev(device);
struct net_device *dev = pci_get_drvdata(pdev);
struct tg3 *tp = netdev_priv(dev);
- int err;
+ int err = 0;
+
+ rtnl_lock();
if (!netif_running(dev))
- return 0;
+ goto unlock;
netif_device_attach(dev);
if (!err)
tg3_phy_start(tp);
+unlock:
+ rtnl_unlock();
return err;
}
#endif /* CONFIG_PM_SLEEP */
#include <asm/io.h>
#include "cxgb4_uld.h"
-#define FW_VERSION_MAJOR 1
-#define FW_VERSION_MINOR 4
-#define FW_VERSION_MICRO 0
+#define T4FW_VERSION_MAJOR 0x01
+#define T4FW_VERSION_MINOR 0x06
+#define T4FW_VERSION_MICRO 0x18
+#define T4FW_VERSION_BUILD 0x00
-#define FW_VERSION_MAJOR_T5 0
-#define FW_VERSION_MINOR_T5 0
-#define FW_VERSION_MICRO_T5 0
+#define T5FW_VERSION_MAJOR 0x01
+#define T5FW_VERSION_MINOR 0x08
+#define T5FW_VERSION_MICRO 0x1C
+#define T5FW_VERSION_BUILD 0x00
#define CH_WARN(adap, fmt, ...) dev_warn(adap->pdev_dev, fmt, ## __VA_ARGS__)
unsigned char width;
};
+#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
+#define CHELSIO_CHIP_FPGA 0x100
+#define CHELSIO_CHIP_VERSION(code) (((code) >> 4) & 0xf)
+#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
+
+#define CHELSIO_T4 0x4
+#define CHELSIO_T5 0x5
+
+enum chip_type {
+ T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
+ T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
+ T4_FIRST_REV = T4_A1,
+ T4_LAST_REV = T4_A2,
+
+ T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
+ T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1),
+ T5_FIRST_REV = T5_A0,
+ T5_LAST_REV = T5_A1,
+};
+
struct adapter_params {
struct tp_params tp;
struct vpd_params vpd;
unsigned char nports; /* # of ethernet ports */
unsigned char portvec;
- unsigned char rev; /* chip revision */
+ enum chip_type chip; /* chip code */
unsigned char offload;
unsigned char bypass;
unsigned int ofldq_wr_cred;
};
+#include "t4fw_api.h"
+
+#define FW_VERSION(chip) ( \
+ FW_HDR_FW_VER_MAJOR_GET(chip##FW_VERSION_MAJOR) | \
+ FW_HDR_FW_VER_MINOR_GET(chip##FW_VERSION_MINOR) | \
+ FW_HDR_FW_VER_MICRO_GET(chip##FW_VERSION_MICRO) | \
+ FW_HDR_FW_VER_BUILD_GET(chip##FW_VERSION_BUILD))
+#define FW_INTFVER(chip, intf) (FW_HDR_INTFVER_##intf)
+
+struct fw_info {
+ u8 chip;
+ char *fs_name;
+ char *fw_mod_name;
+ struct fw_hdr fw_hdr;
+};
+
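FW_VERSION() and FW_INTFVER() work by token pasting: the chip argument is
glued onto the per-chip #defines above, so one macro body serves both
generations. A sketch of the preprocessor expansion (illustration only):

    /* FW_VERSION(T4)      -> FW_HDR_FW_VER_MAJOR_GET(T4FW_VERSION_MAJOR) | ...
     * FW_VERSION(T5)      -> FW_HDR_FW_VER_MAJOR_GET(T5FW_VERSION_MAJOR) | ...
     * FW_INTFVER(T4, NIC) -> FW_HDR_INTFVER_NIC
     * (note FW_INTFVER ignores its chip argument, so the FW_HDR_INTFVER_*
     * values are shared between T4 and T5)
     */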
struct trace_params {
u32 data[TRACE_LEN / 4];
u32 mask[TRACE_LEN / 4];
struct l2t_data;
-#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
-#define CHELSIO_CHIP_VERSION(code) ((code) >> 4)
-#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
-
-#define CHELSIO_T4 0x4
-#define CHELSIO_T5 0x5
-
-enum chip_type {
- T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 0),
- T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
- T4_A3 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
- T4_FIRST_REV = T4_A1,
- T4_LAST_REV = T4_A3,
-
- T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
- T5_FIRST_REV = T5_A1,
- T5_LAST_REV = T5_A1,
-};
-
#ifdef CONFIG_PCI_IOV
/* T4 supports SRIOV on PF0-3 and T5 on PF0-7. However, the Serial
static inline int is_t5(enum chip_type chip)
{
- return (chip >= T5_FIRST_REV && chip <= T5_LAST_REV);
+ return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T5;
}
static inline int is_t4(enum chip_type chip)
{
- return (chip >= T4_FIRST_REV && chip <= T4_LAST_REV);
+ return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T4;
}
static inline u32 t4_read_reg(struct adapter *adap, u32 reg_addr)
int t4_load_fw(struct adapter *adapter, const u8 *fw_data, unsigned int size);
unsigned int t4_flash_cfg_addr(struct adapter *adapter);
int t4_load_cfg(struct adapter *adapter, const u8 *cfg_data, unsigned int size);
-int t4_check_fw_version(struct adapter *adapter);
+int t4_get_fw_version(struct adapter *adapter, u32 *vers);
+int t4_get_tp_version(struct adapter *adapter, u32 *vers);
+int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+ const u8 *fw_data, unsigned int fw_size,
+ struct fw_hdr *card_fw, enum dev_state state, int *reset);
int t4_prep_adapter(struct adapter *adapter);
int t4_port_init(struct adapter *adap, int mbox, int pf, int vf);
void t4_fatal_err(struct adapter *adapter);
{ 0, }
};
-#define FW_FNAME "cxgb4/t4fw.bin"
+#define FW4_FNAME "cxgb4/t4fw.bin"
#define FW5_FNAME "cxgb4/t5fw.bin"
-#define FW_CFNAME "cxgb4/t4-config.txt"
+#define FW4_CFNAME "cxgb4/t4-config.txt"
#define FW5_CFNAME "cxgb4/t5-config.txt"
MODULE_DESCRIPTION(DRV_DESC);
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(DRV_VERSION);
MODULE_DEVICE_TABLE(pci, cxgb4_pci_tbl);
-MODULE_FIRMWARE(FW_FNAME);
+MODULE_FIRMWARE(FW4_FNAME);
MODULE_FIRMWARE(FW5_FNAME);
/*
return 0;
}
-/*
- * Returns 0 if new FW was successfully loaded, a positive errno if a load was
- * started but failed, and a negative errno if flash load couldn't start.
- */
-static int upgrade_fw(struct adapter *adap)
-{
- int ret;
- u32 vers, exp_major;
- const struct fw_hdr *hdr;
- const struct firmware *fw;
- struct device *dev = adap->pdev_dev;
- char *fw_file_name;
-
- switch (CHELSIO_CHIP_VERSION(adap->chip)) {
- case CHELSIO_T4:
- fw_file_name = FW_FNAME;
- exp_major = FW_VERSION_MAJOR;
- break;
- case CHELSIO_T5:
- fw_file_name = FW5_FNAME;
- exp_major = FW_VERSION_MAJOR_T5;
- break;
- default:
- dev_err(dev, "Unsupported chip type, %x\n", adap->chip);
- return -EINVAL;
- }
-
- ret = request_firmware(&fw, fw_file_name, dev);
- if (ret < 0) {
- dev_err(dev, "unable to load firmware image %s, error %d\n",
- fw_file_name, ret);
- return ret;
- }
-
- hdr = (const struct fw_hdr *)fw->data;
- vers = ntohl(hdr->fw_ver);
- if (FW_HDR_FW_VER_MAJOR_GET(vers) != exp_major) {
- ret = -EINVAL; /* wrong major version, won't do */
- goto out;
- }
-
- /*
- * If the flash FW is unusable or we found something newer, load it.
- */
- if (FW_HDR_FW_VER_MAJOR_GET(adap->params.fw_vers) != exp_major ||
- vers > adap->params.fw_vers) {
- dev_info(dev, "upgrading firmware ...\n");
- ret = t4_fw_upgrade(adap, adap->mbox, fw->data, fw->size,
- /*force=*/false);
- if (!ret)
- dev_info(dev,
- "firmware upgraded to version %pI4 from %s\n",
- &hdr->fw_ver, fw_file_name);
- else
- dev_err(dev, "firmware upgrade failed! err=%d\n", -ret);
- } else {
- /*
- * Tell our caller that we didn't upgrade the firmware.
- */
- ret = -EINVAL;
- }
-
-out: release_firmware(fw);
- return ret;
-}
-
/*
* Allocate a chunk of memory using kmalloc or, if that fails, vmalloc.
* The allocated memory is cleared.
static int get_regs_len(struct net_device *dev)
{
struct adapter *adap = netdev2adap(dev);
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
return T4_REGMAP_SIZE;
else
return T5_REGMAP_SIZE;
data += sizeof(struct port_stats) / sizeof(u64);
collect_sge_port_stats(adapter, pi, (struct queue_port_stats *)data);
data += sizeof(struct queue_port_stats) / sizeof(u64);
- if (!is_t4(adapter->chip)) {
+ if (!is_t4(adapter->params.chip)) {
t4_write_reg(adapter, SGE_STAT_CFG, STATSOURCE_T5(7));
val1 = t4_read_reg(adapter, SGE_STAT_TOTAL);
val2 = t4_read_reg(adapter, SGE_STAT_MATCH);
*/
static inline unsigned int mk_adap_vers(const struct adapter *ap)
{
- return CHELSIO_CHIP_VERSION(ap->chip) |
- (CHELSIO_CHIP_RELEASE(ap->chip) << 10) | (1 << 16);
+ return CHELSIO_CHIP_VERSION(ap->params.chip) |
+ (CHELSIO_CHIP_RELEASE(ap->params.chip) << 10) | (1 << 16);
}
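mk_adap_vers() above folds the decoded fields into the ethtool version word:
chip version in the low bits, release at bit 10, and a constant 1 at bit 16.
A worked value, assuming params.chip holds the hypothetical T5_A1 code 0x51:

    /* version = 0x5, release = 0x1:
     * 0x5 | (0x1 << 10) | (1 << 16) = 0x5 | 0x400 | 0x10000 = 0x10405
     */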
static void reg_block_dump(struct adapter *ap, void *buf, unsigned int start,
static const unsigned int *reg_ranges;
int arr_size = 0, buf_size = 0;
- if (is_t4(ap->chip)) {
+ if (is_t4(ap->params.chip)) {
reg_ranges = &t4_reg_ranges[0];
arr_size = ARRAY_SIZE(t4_reg_ranges);
buf_size = T4_REGMAP_SIZE;
size = t4_read_reg(adap, MA_EDRAM1_BAR);
add_debugfs_mem(adap, "edc1", MEM_EDC1, EDRAM_SIZE_GET(size));
}
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
size = t4_read_reg(adap, MA_EXT_MEMORY_BAR);
if (i & EXT_MEM_ENABLE)
add_debugfs_mem(adap, "mc", MEM_MC,
v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS);
v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2);
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
lp_count = G_LP_COUNT(v1);
hp_count = G_HP_COUNT(v1);
} else {
do {
v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS);
v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2);
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
lp_count = G_LP_COUNT(v1);
hp_count = G_HP_COUNT(v1);
} else {
adap = container_of(work, struct adapter, db_drop_task);
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
disable_dbs(adap);
notify_rdma_uld(adap, CXGB4_CONTROL_DB_DROP);
drain_db_fifo(adap, 1);
void t4_db_full(struct adapter *adap)
{
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
t4_set_reg_field(adap, SGE_INT_ENABLE3,
DBFIFO_HP_INT | DBFIFO_LP_INT, 0);
queue_work(workq, &adap->db_full_task);
void t4_db_dropped(struct adapter *adap)
{
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
queue_work(workq, &adap->db_drop_task);
}
lli.nchan = adap->params.nports;
lli.nports = adap->params.nports;
lli.wr_cred = adap->params.ofldq_wr_cred;
- lli.adapter_type = adap->params.rev;
+ lli.adapter_type = adap->params.chip;
lli.iscsi_iolen = MAXRXDATA_GET(t4_read_reg(adap, TP_PARA_REG2));
lli.udb_density = 1 << QUEUESPERPAGEPF0_GET(
t4_read_reg(adap, SGE_EGRESS_QUEUES_PER_PAGE_PF) >>
u32 bar0, mem_win0_base, mem_win1_base, mem_win2_base;
bar0 = pci_resource_start(adap->pdev, 0); /* truncation intentional */
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
mem_win0_base = bar0 + MEMWIN0_BASE;
mem_win1_base = bar0 + MEMWIN1_BASE;
mem_win2_base = bar0 + MEMWIN2_BASE;
const struct firmware *cf;
unsigned long mtype = 0, maddr = 0;
u32 finiver, finicsum, cfcsum;
- int ret, using_flash;
+ int ret;
+ int config_issued = 0;
char *fw_config_file, fw_config_file_path[256];
+ char *config_name = NULL;
/*
* Reset device if necessary.
* then use that. Otherwise, use the configuration file stored
* in the adapter flash ...
*/
- switch (CHELSIO_CHIP_VERSION(adapter->chip)) {
+ switch (CHELSIO_CHIP_VERSION(adapter->params.chip)) {
case CHELSIO_T4:
- fw_config_file = FW_CFNAME;
+ fw_config_file = FW4_CFNAME;
break;
case CHELSIO_T5:
fw_config_file = FW5_CFNAME;
ret = request_firmware(&cf, fw_config_file, adapter->pdev_dev);
if (ret < 0) {
- using_flash = 1;
+ config_name = "On FLASH";
mtype = FW_MEMTYPE_CF_FLASH;
maddr = t4_flash_cfg_addr(adapter);
} else {
u32 params[7], val[7];
- using_flash = 0;
+ sprintf(fw_config_file_path,
+ "/lib/firmware/%s", fw_config_file);
+ config_name = fw_config_file_path;
+
if (cf->size >= FLASH_CFG_MAX_SIZE)
ret = -ENOMEM;
else {
FW_LEN16(caps_cmd));
ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, sizeof(caps_cmd),
&caps_cmd);
+
+ /* If the CAPS_CONFIG failed with an ENOENT (for a Firmware
+ * Configuration File in FLASH), our last gasp effort is to use the
+ * Firmware Configuration File which is embedded in the firmware. A
+ * very few early versions of the firmware didn't have one embedded
+ * but we can ignore those.
+ */
+ if (ret == -ENOENT) {
+ memset(&caps_cmd, 0, sizeof(caps_cmd));
+ caps_cmd.op_to_write =
+ htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
+ FW_CMD_REQUEST |
+ FW_CMD_READ);
+ caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd));
+ ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd,
+ sizeof(caps_cmd), &caps_cmd);
+ config_name = "Firmware Default";
+ }
+
+ config_issued = 1;
if (ret < 0)
goto bye;
if (ret < 0)
goto bye;
- sprintf(fw_config_file_path, "/lib/firmware/%s", fw_config_file);
/*
* Return successfully and note that we're operating with parameters
* not supplied by the driver, rather than from hard-wired
*/
adapter->flags |= USING_SOFT_PARAMS;
dev_info(adapter->pdev_dev, "Successfully configured using Firmware "\
- "Configuration File %s, version %#x, computed checksum %#x\n",
- (using_flash
- ? "in device FLASH"
- : fw_config_file_path),
- finiver, cfcsum);
+ "Configuration File \"%s\", version %#x, computed checksum %#x\n",
+ config_name, finiver, cfcsum);
return 0;
/*
* want to issue a warning since this is fairly common.)
*/
bye:
- if (ret != -ENOENT)
- dev_warn(adapter->pdev_dev, "Configuration file error %d\n",
- -ret);
+ if (config_issued && ret != -ENOENT)
+ dev_warn(adapter->pdev_dev, "\"%s\" configuration file error %d\n",
+ config_name, -ret);
return ret;
}
return ret;
}
+static struct fw_info fw_info_array[] = {
+ {
+ .chip = CHELSIO_T4,
+ .fs_name = FW4_CFNAME,
+ .fw_mod_name = FW4_FNAME,
+ .fw_hdr = {
+ .chip = FW_HDR_CHIP_T4,
+ .fw_ver = __cpu_to_be32(FW_VERSION(T4)),
+ .intfver_nic = FW_INTFVER(T4, NIC),
+ .intfver_vnic = FW_INTFVER(T4, VNIC),
+ .intfver_ri = FW_INTFVER(T4, RI),
+ .intfver_iscsi = FW_INTFVER(T4, ISCSI),
+ .intfver_fcoe = FW_INTFVER(T4, FCOE),
+ },
+ }, {
+ .chip = CHELSIO_T5,
+ .fs_name = FW5_CFNAME,
+ .fw_mod_name = FW5_FNAME,
+ .fw_hdr = {
+ .chip = FW_HDR_CHIP_T5,
+ .fw_ver = __cpu_to_be32(FW_VERSION(T5)),
+ .intfver_nic = FW_INTFVER(T5, NIC),
+ .intfver_vnic = FW_INTFVER(T5, VNIC),
+ .intfver_ri = FW_INTFVER(T5, RI),
+ .intfver_iscsi = FW_INTFVER(T5, ISCSI),
+ .intfver_fcoe = FW_INTFVER(T5, FCOE),
+ },
+ }
+};
+
+static struct fw_info *find_fw_info(int chip)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(fw_info_array); i++) {
+ if (fw_info_array[i].chip == chip)
+ return &fw_info_array[i];
+ }
+ return NULL;
+}
+
/*
* Phase 0 of initialization: contact FW, obtain config, perform basic init.
*/
* later reporting and B. to warn if the currently loaded firmware
* is excessively mismatched relative to the driver.)
*/
- ret = t4_check_fw_version(adap);
-
- /* The error code -EFAULT is returned by t4_check_fw_version() if
- * firmware on adapter < supported firmware. If firmware on adapter
- * is too old (not supported by driver) and we're the MASTER_PF set
- * adapter state to DEV_STATE_UNINIT to force firmware upgrade
- * and reinitialization.
- */
- if ((adap->flags & MASTER_PF) && ret == -EFAULT)
- state = DEV_STATE_UNINIT;
+ t4_get_fw_version(adap, &adap->params.fw_vers);
+ t4_get_tp_version(adap, &adap->params.tp_vers);
if ((adap->flags & MASTER_PF) && state != DEV_STATE_INIT) {
- if (ret == -EINVAL || ret == -EFAULT || ret > 0) {
- if (upgrade_fw(adap) >= 0) {
- /*
- * Note that the chip was reset as part of the
- * firmware upgrade so we don't reset it again
- * below and grab the new firmware version.
- */
- reset = 0;
- ret = t4_check_fw_version(adap);
- } else
- if (ret == -EFAULT) {
- /*
- * Firmware is old but still might
- * work if we force reinitialization
- * of the adapter. Ignoring FW upgrade
- * failure.
- */
- dev_warn(adap->pdev_dev,
- "Ignoring firmware upgrade "
- "failure, and forcing driver "
- "to reinitialize the "
- "adapter.\n");
- ret = 0;
- }
+ struct fw_info *fw_info;
+ struct fw_hdr *card_fw;
+ const struct firmware *fw;
+ const u8 *fw_data = NULL;
+ unsigned int fw_size = 0;
+
+ /* This is the firmware whose headers the driver was compiled
+ * against
+ */
+ fw_info = find_fw_info(CHELSIO_CHIP_VERSION(adap->params.chip));
+ if (fw_info == NULL) {
+ dev_err(adap->pdev_dev,
+ "unable to get firmware info for chip %d.\n",
+ CHELSIO_CHIP_VERSION(adap->params.chip));
+ return -EINVAL;
}
+
+ /* allocate memory to read the header of the firmware on the
+ * card
+ */
+ card_fw = t4_alloc_mem(sizeof(*card_fw));
+
+ /* Get FW from /lib/firmware/ */
+ ret = request_firmware(&fw, fw_info->fw_mod_name,
+ adap->pdev_dev);
+ if (ret < 0) {
+ dev_err(adap->pdev_dev,
+ "unable to load firmware image %s, error %d\n",
+ fw_info->fw_mod_name, ret);
+ } else {
+ fw_data = fw->data;
+ fw_size = fw->size;
+ }
+
+ /* upgrade FW logic */
+ ret = t4_prep_fw(adap, fw_info, fw_data, fw_size, card_fw,
+ state, &reset);
+
+ /* Cleaning up */
+ if (fw != NULL)
+ release_firmware(fw);
+ t4_free_mem(card_fw);
+
if (ret < 0)
- return ret;
+ goto bye;
}
/*
if (ret == -ENOENT) {
dev_info(adap->pdev_dev,
"No Configuration File present "
- "on adapter. Using hard-wired "
+ "on adapter. Using hard-wired "
"configuration parameters.\n");
ret = adap_init0_no_config(adap, reset);
}
netdev_info(dev, "Chelsio %s rev %d %s %sNIC PCIe x%d%s%s\n",
adap->params.vpd.id,
- CHELSIO_CHIP_RELEASE(adap->params.rev), buf,
+ CHELSIO_CHIP_RELEASE(adap->params.chip), buf,
is_offload(adap) ? "R" : "", adap->params.pci.width, spd,
(adap->flags & USING_MSIX) ? " MSI-X" :
(adap->flags & USING_MSI) ? " MSI" : "");
if (err)
goto out_unmap_bar0;
- if (!is_t4(adapter->chip)) {
+ if (!is_t4(adapter->params.chip)) {
s_qpp = QUEUESPERPAGEPF1 * adapter->fn;
qpp = 1 << QUEUESPERPAGEPF0_GET(t4_read_reg(adapter,
SGE_EGRESS_QUEUES_PER_PAGE_PF) >> s_qpp);
out_free_dev:
free_some_resources(adapter);
out_unmap_bar:
- if (!is_t4(adapter->chip))
+ if (!is_t4(adapter->params.chip))
iounmap(adapter->bar2);
out_unmap_bar0:
iounmap(adapter->regs);
free_some_resources(adapter);
iounmap(adapter->regs);
- if (!is_t4(adapter->chip))
+ if (!is_t4(adapter->params.chip))
iounmap(adapter->bar2);
kfree(adapter);
pci_disable_pcie_error_reporting(pdev);
u32 val;
if (q->pend_cred >= 8) {
val = PIDX(q->pend_cred / 8);
- if (!is_t4(adap->chip))
+ if (!is_t4(adap->params.chip))
val |= DBTYPE(1);
wmb();
t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), DBPRIO(1) |
wmb(); /* write descriptors before telling HW */
spin_lock(&q->db_lock);
if (!q->db_disabled) {
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
QID(q->cntxt_id) | PIDX(n));
} else {
return 0;
}
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
__skb_pull(skb, sizeof(struct cpl_trace_pkt));
else
__skb_pull(skb, sizeof(struct cpl_t5_trace_pkt));
const struct cpl_rx_pkt *pkt;
struct sge_eth_rxq *rxq = container_of(q, struct sge_eth_rxq, rspq);
struct sge *s = &q->adap->sge;
- int cpl_trace_pkt = is_t4(q->adap->chip) ?
+ int cpl_trace_pkt = is_t4(q->adap->params.chip) ?
CPL_TRACE_PKT : CPL_TRACE_PKT_T5;
if (unlikely(*(u8 *)rsp == cpl_trace_pkt))
static void init_txq(struct adapter *adap, struct sge_txq *q, unsigned int id)
{
q->cntxt_id = id;
- if (!is_t4(adap->chip)) {
+ if (!is_t4(adap->params.chip)) {
unsigned int s_qpp;
unsigned short udb_density;
unsigned long qpshift;
* Set up to drop DOORBELL writes when the DOORBELL FIFO overflows
* and generate an interrupt when this occurs so we can recover.
*/
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
t4_set_reg_field(adap, A_SGE_DBFIFO_STATUS,
V_HP_INT_THRESH(M_HP_INT_THRESH) |
V_LP_INT_THRESH(M_LP_INT_THRESH),
u32 mc_bist_cmd, mc_bist_cmd_addr, mc_bist_cmd_len;
u32 mc_bist_status_rdata, mc_bist_data_pattern;
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
mc_bist_cmd = MC_BIST_CMD;
mc_bist_cmd_addr = MC_BIST_CMD_ADDR;
mc_bist_cmd_len = MC_BIST_CMD_LEN;
u32 edc_bist_cmd, edc_bist_cmd_addr, edc_bist_cmd_len;
u32 edc_bist_cmd_data_pattern, edc_bist_status_rdata;
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
edc_bist_cmd = EDC_REG(EDC_BIST_CMD, idx);
edc_bist_cmd_addr = EDC_REG(EDC_BIST_CMD_ADDR, idx);
edc_bist_cmd_len = EDC_REG(EDC_BIST_CMD_LEN, idx);
static int t4_mem_win_rw(struct adapter *adap, u32 addr, __be32 *data, int dir)
{
int i;
- u32 win_pf = is_t4(adap->chip) ? 0 : V_PFNUM(adap->fn);
+ u32 win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn);
/*
* Setup offset into PCIE memory window. Address must be a
}
/**
- * get_fw_version - read the firmware version
+ * t4_get_fw_version - read the firmware version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the FW version from flash.
*/
-static int get_fw_version(struct adapter *adapter, u32 *vers)
+int t4_get_fw_version(struct adapter *adapter, u32 *vers)
{
- return t4_read_flash(adapter, adapter->params.sf_fw_start +
- offsetof(struct fw_hdr, fw_ver), 1, vers, 0);
+ return t4_read_flash(adapter, FLASH_FW_START +
+ offsetof(struct fw_hdr, fw_ver), 1,
+ vers, 0);
}
/**
- * get_tp_version - read the TP microcode version
+ * t4_get_tp_version - read the TP microcode version
* @adapter: the adapter
* @vers: where to place the version
*
* Reads the TP microcode version from flash.
*/
-static int get_tp_version(struct adapter *adapter, u32 *vers)
+int t4_get_tp_version(struct adapter *adapter, u32 *vers)
{
- return t4_read_flash(adapter, adapter->params.sf_fw_start +
+ return t4_read_flash(adapter, FLASH_FW_START +
offsetof(struct fw_hdr, tp_microcode_ver),
1, vers, 0);
}
-/**
- * t4_check_fw_version - check if the FW is compatible with this driver
- * @adapter: the adapter
- *
- * Checks if an adapter's FW is compatible with the driver. Returns 0
- * if there's exact match, a negative error if the version could not be
- * read or there's a major version mismatch, and a positive value if the
- * expected major version is found but there's a minor version mismatch.
+/* Is the given firmware API compatible with the one the driver was compiled
+ * with?
*/
-int t4_check_fw_version(struct adapter *adapter)
+static int fw_compatible(const struct fw_hdr *hdr1, const struct fw_hdr *hdr2)
{
- u32 api_vers[2];
- int ret, major, minor, micro;
- int exp_major, exp_minor, exp_micro;
- ret = get_fw_version(adapter, &adapter->params.fw_vers);
- if (!ret)
- ret = get_tp_version(adapter, &adapter->params.tp_vers);
- if (!ret)
- ret = t4_read_flash(adapter, adapter->params.sf_fw_start +
- offsetof(struct fw_hdr, intfver_nic),
- 2, api_vers, 1);
- if (ret)
- return ret;
+ /* short circuit if it's the exact same firmware version */
+ if (hdr1->chip == hdr2->chip && hdr1->fw_ver == hdr2->fw_ver)
+ return 1;
- major = FW_HDR_FW_VER_MAJOR_GET(adapter->params.fw_vers);
- minor = FW_HDR_FW_VER_MINOR_GET(adapter->params.fw_vers);
- micro = FW_HDR_FW_VER_MICRO_GET(adapter->params.fw_vers);
+#define SAME_INTF(x) (hdr1->intfver_##x == hdr2->intfver_##x)
+ if (hdr1->chip == hdr2->chip && SAME_INTF(nic) && SAME_INTF(vnic) &&
+ SAME_INTF(ri) && SAME_INTF(iscsi) && SAME_INTF(fcoe))
+ return 1;
+#undef SAME_INTF
- switch (CHELSIO_CHIP_VERSION(adapter->chip)) {
- case CHELSIO_T4:
- exp_major = FW_VERSION_MAJOR;
- exp_minor = FW_VERSION_MINOR;
- exp_micro = FW_VERSION_MICRO;
- break;
- case CHELSIO_T5:
- exp_major = FW_VERSION_MAJOR_T5;
- exp_minor = FW_VERSION_MINOR_T5;
- exp_micro = FW_VERSION_MICRO_T5;
- break;
- default:
- dev_err(adapter->pdev_dev, "Unsupported chip type, %x\n",
- adapter->chip);
- return -EINVAL;
- }
+ return 0;
+}
- memcpy(adapter->params.api_vers, api_vers,
- sizeof(adapter->params.api_vers));
+/* The firmware in the filesystem is usable, but should it be installed?
+ * This routine explains itself in detail if it indicates the filesystem
+ * firmware should be installed.
+ */
+static int should_install_fs_fw(struct adapter *adap, int card_fw_usable,
+ int k, int c)
+{
+ const char *reason;
- if (major < exp_major || (major == exp_major && minor < exp_minor) ||
- (major == exp_major && minor == exp_minor && micro < exp_micro)) {
- dev_err(adapter->pdev_dev,
- "Card has firmware version %u.%u.%u, minimum "
- "supported firmware is %u.%u.%u.\n", major, minor,
- micro, exp_major, exp_minor, exp_micro);
- return -EFAULT;
+ if (!card_fw_usable) {
+ reason = "incompatible or unusable";
+ goto install;
}
- if (major != exp_major) { /* major mismatch - fail */
- dev_err(adapter->pdev_dev,
- "card FW has major version %u, driver wants %u\n",
- major, exp_major);
- return -EINVAL;
+ if (k > c) {
+ reason = "older than the version supported with this driver";
+ goto install;
}
- if (minor == exp_minor && micro == exp_micro)
- return 0; /* perfect match */
+ return 0;
+
+install:
+ dev_err(adap->pdev_dev, "firmware on card (%u.%u.%u.%u) is %s, "
+ "installing firmware %u.%u.%u.%u on card.\n",
+ FW_HDR_FW_VER_MAJOR_GET(c), FW_HDR_FW_VER_MINOR_GET(c),
+ FW_HDR_FW_VER_MICRO_GET(c), FW_HDR_FW_VER_BUILD_GET(c), reason,
+ FW_HDR_FW_VER_MAJOR_GET(k), FW_HDR_FW_VER_MINOR_GET(k),
+ FW_HDR_FW_VER_MICRO_GET(k), FW_HDR_FW_VER_BUILD_GET(k));
- /* Minor/micro version mismatch. Report it but often it's OK. */
return 1;
}
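For readability, k and c are the packed 32-bit versions seen at the call site
in t4_prep_fw() below (a gloss, not part of the patch):

    /* k = be32_to_cpu(fs_fw->fw_ver)    version of the /lib/firmware image
     * c = be32_to_cpu(card_fw->fw_ver)  version flashed on the adapter
     * so "k > c" asks whether the filesystem image is strictly newer
     */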
+int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+ const u8 *fw_data, unsigned int fw_size,
+ struct fw_hdr *card_fw, enum dev_state state,
+ int *reset)
+{
+ int ret, card_fw_usable, fs_fw_usable;
+ const struct fw_hdr *fs_fw;
+ const struct fw_hdr *drv_fw;
+
+ drv_fw = &fw_info->fw_hdr;
+
+ /* Read the header of the firmware on the card */
+ ret = -t4_read_flash(adap, FLASH_FW_START,
+ sizeof(*card_fw) / sizeof(uint32_t),
+ (uint32_t *)card_fw, 1);
+ if (ret == 0) {
+ card_fw_usable = fw_compatible(drv_fw, (const void *)card_fw);
+ } else {
+ dev_err(adap->pdev_dev,
+ "Unable to read card's firmware header: %d\n", ret);
+ card_fw_usable = 0;
+ }
+
+ if (fw_data != NULL) {
+ fs_fw = (const void *)fw_data;
+ fs_fw_usable = fw_compatible(drv_fw, fs_fw);
+ } else {
+ fs_fw = NULL;
+ fs_fw_usable = 0;
+ }
+
+ if (card_fw_usable && card_fw->fw_ver == drv_fw->fw_ver &&
+ (!fs_fw_usable || fs_fw->fw_ver == drv_fw->fw_ver)) {
+ /* Common case: the firmware on the card is an exact match and
+ * the filesystem one is an exact match too, or the filesystem
+ * one is absent/incompatible.
+ */
+ } else if (fs_fw_usable && state == DEV_STATE_UNINIT &&
+ should_install_fs_fw(adap, card_fw_usable,
+ be32_to_cpu(fs_fw->fw_ver),
+ be32_to_cpu(card_fw->fw_ver))) {
+ ret = -t4_fw_upgrade(adap, adap->mbox, fw_data,
+ fw_size, 0);
+ if (ret != 0) {
+ dev_err(adap->pdev_dev,
+ "failed to install firmware: %d\n", ret);
+ goto bye;
+ }
+
+ /* Installed successfully, update the cached header too. */
+ memcpy(card_fw, fs_fw, sizeof(*card_fw));
+ card_fw_usable = 1;
+ *reset = 0; /* already reset as part of load_fw */
+ }
+
+ if (!card_fw_usable) {
+ uint32_t d, c, k;
+
+ d = be32_to_cpu(drv_fw->fw_ver);
+ c = be32_to_cpu(card_fw->fw_ver);
+ k = fs_fw ? be32_to_cpu(fs_fw->fw_ver) : 0;
+
+ dev_err(adap->pdev_dev, "Cannot find a usable firmware: "
+ "chip state %d, "
+ "driver compiled with %d.%d.%d.%d, "
+ "card has %d.%d.%d.%d, filesystem has %d.%d.%d.%d\n",
+ state,
+ FW_HDR_FW_VER_MAJOR_GET(d), FW_HDR_FW_VER_MINOR_GET(d),
+ FW_HDR_FW_VER_MICRO_GET(d), FW_HDR_FW_VER_BUILD_GET(d),
+ FW_HDR_FW_VER_MAJOR_GET(c), FW_HDR_FW_VER_MINOR_GET(c),
+ FW_HDR_FW_VER_MICRO_GET(c), FW_HDR_FW_VER_BUILD_GET(c),
+ FW_HDR_FW_VER_MAJOR_GET(k), FW_HDR_FW_VER_MINOR_GET(k),
+ FW_HDR_FW_VER_MICRO_GET(k), FW_HDR_FW_VER_BUILD_GET(k));
+ ret = EINVAL;
+ goto bye;
+ }
+
+ /* We're using whatever's on the card and it's known to be good. */
+ adap->params.fw_vers = be32_to_cpu(card_fw->fw_ver);
+ adap->params.tp_vers = be32_to_cpu(card_fw->tp_microcode_ver);
+
+bye:
+ return ret;
+}
+
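Condensed, the selection logic in t4_prep_fw() reduces to three cases (a
paraphrase of the code above, not part of the patch):

    /* 1. Card firmware is usable and exactly matches the version the driver
     *    was built against (and any filesystem image matches too):
     *    keep running what is already on the card.
     * 2. A usable filesystem image exists, the device is still uninitialised,
     *    and should_install_fs_fw() approves: flash it and adopt it.
     * 3. Otherwise, if the card firmware remains unusable: print all three
     *    versions (driver, card, filesystem) and fail.
     */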
/**
* t4_flash_erase_sectors - erase a range of flash sectors
* @adapter: the adapter
PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
pcie_port_intr_info) +
t4_handle_intr_status(adapter, PCIE_INT_CAUSE,
- is_t4(adapter->chip) ?
+ is_t4(adapter->params.chip) ?
pcie_intr_info : t5_pcie_intr_info);
if (fat)
{
u32 v, int_cause_reg;
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
int_cause_reg = PORT_REG(port, XGMAC_PORT_INT_CAUSE);
else
int_cause_reg = T5_PORT_REG(port, MAC_PORT_INT_CAUSE);
#define GET_STAT(name) \
t4_read_reg64(adap, \
- (is_t4(adap->chip) ? PORT_REG(idx, MPS_PORT_STAT_##name##_L) : \
+ (is_t4(adap->params.chip) ? PORT_REG(idx, MPS_PORT_STAT_##name##_L) : \
T5_PORT_REG(idx, MPS_PORT_STAT_##name##_L)))
#define GET_STAT_COM(name) t4_read_reg64(adap, MPS_STAT_##name##_L)
{
u32 mag_id_reg_l, mag_id_reg_h, port_cfg_reg;
- if (is_t4(adap->chip)) {
+ if (is_t4(adap->params.chip)) {
mag_id_reg_l = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_LO);
mag_id_reg_h = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_HI);
port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2);
int i;
u32 port_cfg_reg;
- if (is_t4(adap->chip))
+ if (is_t4(adap->params.chip))
port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2);
else
port_cfg_reg = T5_PORT_REG(port, MAC_PORT_CFG2);
return -EINVAL;
#define EPIO_REG(name) \
- (is_t4(adap->chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \
+ (is_t4(adap->params.chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \
T5_PORT_REG(port, MAC_PORT_EPIO_##name))
t4_write_reg(adap, EPIO_REG(DATA1), mask0 >> 32);
int t4_mem_win_read_len(struct adapter *adap, u32 addr, __be32 *data, int len)
{
int i, off;
- u32 win_pf = is_t4(adap->chip) ? 0 : V_PFNUM(adap->fn);
+ u32 win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn);
/* Align on a 2KB boundary.
*/
int i, ret;
struct fw_vi_mac_cmd c;
struct fw_vi_mac_exact *p;
- unsigned int max_naddr = is_t4(adap->chip) ?
+ unsigned int max_naddr = is_t4(adap->params.chip) ?
NUM_MPS_CLS_SRAM_L_INSTANCES :
NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
int ret, mode;
struct fw_vi_mac_cmd c;
struct fw_vi_mac_exact *p = c.u.exact;
- unsigned int max_mac_addr = is_t4(adap->chip) ?
+ unsigned int max_mac_addr = is_t4(adap->params.chip) ?
NUM_MPS_CLS_SRAM_L_INSTANCES :
NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
{
int ret, ver;
uint16_t device_id;
+ u32 pl_rev;
ret = t4_wait_dev_ready(adapter);
if (ret < 0)
return ret;
get_pci_mode(adapter, &adapter->params.pci);
- adapter->params.rev = t4_read_reg(adapter, PL_REV);
+ pl_rev = G_REV(t4_read_reg(adapter, PL_REV));
ret = get_flash_params(adapter);
if (ret < 0) {
*/
pci_read_config_word(adapter->pdev, PCI_DEVICE_ID, &device_id);
ver = device_id >> 12;
+ adapter->params.chip = 0;
switch (ver) {
case CHELSIO_T4:
- adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T4,
- adapter->params.rev);
+ adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T4, pl_rev);
break;
case CHELSIO_T5:
- adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T5,
- adapter->params.rev);
+ adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, pl_rev);
break;
default:
dev_err(adapter->pdev_dev, "Device %d is not supported\n",
return -EINVAL;
}
- /* Reassign the updated revision field */
- adapter->params.rev = adapter->chip;
-
init_cong_ctrl(adapter->params.a_wnd, adapter->params.b_wnd);
/*
#define PL_REV 0x1943c
+#define S_REV 0
+#define M_REV 0xfU
+#define V_REV(x) ((x) << S_REV)
+#define G_REV(x) (((x) >> S_REV) & M_REV)
+
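These follow the S_ (shift) / M_ (mask) / V_ (value into field) / G_ (get
from register) naming idiom used throughout the register headers; a round
trip (illustration only):

    /* V_REV(2)        = 2 << 0 = 0x2           place revision 2 in the field
     * G_REV(V_REV(2)) = (0x2 >> 0) & 0xf = 2   and recover it
     * G_REV(t4_read_reg(adapter, PL_REV))      isolates the 4-bit revision
     */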
#define LE_DB_CONFIG 0x19c04
#define HASHEN 0x00100000U
#define EDC_STRIDE_T5 (EDC_T51_BASE_ADDR - EDC_T50_BASE_ADDR)
#define EDC_REG_T5(reg, idx) (reg + EDC_STRIDE_T5 * idx)
+#define A_PL_VF_REV 0x4
+#define A_PL_VF_WHOAMI 0x0
+#define A_PL_VF_REVISION 0x8
+
+#define S_CHIPID 4
+#define M_CHIPID 0xfU
+#define V_CHIPID(x) ((x) << S_CHIPID)
+#define G_CHIPID(x) (((x) >> S_CHIPID) & M_CHIPID)
+
#endif /* __T4_REGS_H */
struct fw_hdr {
u8 ver;
- u8 reserved1;
+ u8 chip; /* terminator chip type */
__be16 len512; /* bin length in units of 512-bytes */
__be32 fw_ver; /* firmware version */
__be32 tp_microcode_ver;
__be32 reserved6[23];
};
+enum fw_hdr_chip {
+ FW_HDR_CHIP_T4,
+ FW_HDR_CHIP_T5
+};
+
#define FW_HDR_FW_VER_MAJOR_GET(x) (((x) >> 24) & 0xff)
#define FW_HDR_FW_VER_MINOR_GET(x) (((x) >> 16) & 0xff)
#define FW_HDR_FW_VER_MICRO_GET(x) (((x) >> 8) & 0xff)
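The firmware version packs four bytes into one 32-bit word, major in the top
byte and build in the bottom byte. A worked example using the T5FW_VERSION_*
values defined earlier, i.e. version 1.8.28.0 (illustration only):

    /* (0x01 << 24) | (0x08 << 16) | (0x1C << 8) | 0x00 = 0x01081C00
     * FW_HDR_FW_VER_MAJOR_GET(0x01081C00) = 0x01
     * FW_HDR_FW_VER_MINOR_GET(0x01081C00) = 0x08
     * FW_HDR_FW_VER_MICRO_GET(0x01081C00) = 0x1C  (decimal 28)
     */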
unsigned long registered_device_map;
unsigned long open_device_map;
unsigned long flags;
- enum chip_type chip;
struct adapter_params params;
/* queue and interrupt resources */
/*
* Chip version 4, revision 0x3f (cxgb4vf).
*/
- return CHELSIO_CHIP_VERSION(adapter->chip) | (0x3f << 10);
+ return CHELSIO_CHIP_VERSION(adapter->params.chip) | (0x3f << 10);
}
/*
reg_block_dump(adapter, regbuf,
T4VF_MPS_BASE_ADDR + T4VF_MOD_MAP_MPS_FIRST,
T4VF_MPS_BASE_ADDR + T4VF_MOD_MAP_MPS_LAST);
+
+ /* T5 adds new registers in the PL Register map.
+ */
reg_block_dump(adapter, regbuf,
T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_FIRST,
- T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_LAST);
+ T4VF_PL_BASE_ADDR + (is_t4(adapter->params.chip)
+ ? A_PL_VF_WHOAMI : A_PL_VF_REVISION));
reg_block_dump(adapter, regbuf,
T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_FIRST,
T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_LAST);
unsigned int ethqsets;
int err;
u32 param, val = 0;
+ unsigned int chipid;
/*
* Wait for the device to become ready before proceeding ...
return err;
}
+ adapter->params.chip = 0;
switch (adapter->pdev->device >> 12) {
case CHELSIO_T4:
- adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 0);
+ adapter->params.chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 0);
break;
case CHELSIO_T5:
- adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T5, 0);
+ chipid = G_REV(t4_read_reg(adapter, A_PL_VF_REV));
+ adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, chipid);
break;
}
*/
if (fl->pend_cred >= FL_PER_EQ_UNIT) {
val = PIDX(fl->pend_cred / FL_PER_EQ_UNIT);
- if (!is_t4(adapter->chip))
+ if (!is_t4(adapter->params.chip))
val |= DBTYPE(1);
wmb();
t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_KDOORBELL,
#include "../cxgb4/t4fw_api.h"
#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
-#define CHELSIO_CHIP_VERSION(code) ((code) >> 4)
+#define CHELSIO_CHIP_VERSION(code) (((code) >> 4) & 0xf)
#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
+/* All T4 and later chips have their PCI-E Device IDs encoded as 0xVFPP where:
+ *
+ * V = "4" for T4; "5" for T5, etc. or
+ * = "a" for T4 FPGA; "b" for T5 FPGA, etc.
+ * F = "0" for PF 0..3; "4".."7" for PF4..7; and "8" for VFs
+ * PP = adapter product designation
+ */
#define CHELSIO_T4 0x4
#define CHELSIO_T5 0x5
enum chip_type {
- T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 0),
- T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
- T4_A3 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
+ T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1),
+ T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2),
T4_FIRST_REV = T4_A1,
- T4_LAST_REV = T4_A3,
+ T4_LAST_REV = T4_A2,
- T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
- T5_FIRST_REV = T5_A1,
+ T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
+ T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1),
+ T5_FIRST_REV = T5_A0,
T5_LAST_REV = T5_A1,
};
struct vpd_params vpd; /* Vital Product Data */
struct rss_params rss; /* Receive Side Scaling */
struct vf_resources vfres; /* Virtual Function Resource limits */
+ enum chip_type chip; /* chip code */
u8 nports; /* # of Ethernet "ports" */
};
static inline int is_t4(enum chip_type chip)
{
- return (chip >= T4_FIRST_REV && chip <= T4_LAST_REV);
+ return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T4;
}
int t4vf_wait_dev_ready(struct adapter *);
unsigned nfilters = 0;
unsigned int rem = naddr;
struct fw_vi_mac_cmd cmd, rpl;
- unsigned int max_naddr = is_t4(adapter->chip) ?
+ unsigned int max_naddr = is_t4(adapter->params.chip) ?
NUM_MPS_CLS_SRAM_L_INSTANCES :
NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
struct fw_vi_mac_exact *p = &cmd.u.exact[0];
size_t len16 = DIV_ROUND_UP(offsetof(struct fw_vi_mac_cmd,
u.exact[1]), 16);
- unsigned int max_naddr = is_t4(adapter->chip) ?
+ unsigned int max_naddr = is_t4(adapter->params.chip) ?
NUM_MPS_CLS_SRAM_L_INSTANCES :
NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
#define SLIPORT_ERROR_NO_RESOURCE1 0x2
#define SLIPORT_ERROR_NO_RESOURCE2 0x9
+#define SLIPORT_ERROR_FW_RESET1 0x2
+#define SLIPORT_ERROR_FW_RESET2 0x0
+
/********* Memory BAR register ************/
#define PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET 0xfc
/* Host Interrupt Enable, if set interrupts are enabled although "PCI Interrupt
*/
if (sliport_status & SLIPORT_STATUS_ERR_MASK) {
adapter->hw_error = true;
- dev_err(&adapter->pdev->dev,
- "Error detected in the card\n");
+ /* Do not log error messages if its a FW reset */
+ if (sliport_err1 == SLIPORT_ERROR_FW_RESET1 &&
+ sliport_err2 == SLIPORT_ERROR_FW_RESET2) {
+ dev_info(&adapter->pdev->dev,
+ "Firmware update in progress\n");
+ return;
+ } else {
+ dev_err(&adapter->pdev->dev,
+ "Error detected in the card\n");
+ }
}
if (sliport_status & SLIPORT_STATUS_ERR_MASK) {
}
}
-static int be_clear(struct be_adapter *adapter)
+static void be_mac_clear(struct be_adapter *adapter)
{
int i;
+ if (adapter->pmac_id) {
+ for (i = 0; i < (adapter->uc_macs + 1); i++)
+ be_cmd_pmac_del(adapter, adapter->if_handle,
+ adapter->pmac_id[i], 0);
+ adapter->uc_macs = 0;
+
+ kfree(adapter->pmac_id);
+ adapter->pmac_id = NULL;
+ }
+}
+
+static int be_clear(struct be_adapter *adapter)
+{
be_cancel_worker(adapter);
if (sriov_enabled(adapter))
be_vf_clear(adapter);
/* delete the primary mac along with the uc-mac list */
- for (i = 0; i < (adapter->uc_macs + 1); i++)
- be_cmd_pmac_del(adapter, adapter->if_handle,
- adapter->pmac_id[i], 0);
- adapter->uc_macs = 0;
+ be_mac_clear(adapter);
be_cmd_if_destroy(adapter, adapter->if_handle, 0);
be_clear_queues(adapter);
- kfree(adapter->pmac_id);
- adapter->pmac_id = NULL;
-
be_msix_disable(adapter);
return 0;
}
}
if (change_status == LANCER_FW_RESET_NEEDED) {
+ dev_info(&adapter->pdev->dev,
+ "Resetting adapter to activate new FW\n");
status = lancer_physdev_ctrl(adapter,
PHYSDEV_CONTROL_FW_RESET_MASK);
if (status) {
goto err;
}
- dev_err(dev, "Error recovery successful\n");
+ dev_err(dev, "Adapter recovery successful\n");
return 0;
err:
if (status == -EAGAIN)
dev_err(dev, "Waiting for resource provisioning\n");
else
- dev_err(dev, "Error recovery failed\n");
+ dev_err(dev, "Adapter recovery failed\n");
return status;
}
* detected as not set during a prior frame transmission, then the
* ENET_TDAR[TDAR] bit is cleared at a later time, even if additional TxBDs
* were added to the ring and the ENET_TDAR[TDAR] bit is set. This results in
- * If the ready bit in the transmit buffer descriptor (TxBD[R]) is previously
- * detected as not set during a prior frame transmission, then the
- * ENET_TDAR[TDAR] bit is cleared at a later time, even if additional TxBDs
- * were added to the ring and the ENET_TDAR[TDAR] bit is set. This results in
* frames not being transmitted until there is a 0-to-1 transition on
* ENET_TDAR[TDAR].
*/
* data.
*/
bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, bufaddr,
- FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
+ skb->len, DMA_TO_DEVICE);
if (dma_mapping_error(&fep->pdev->dev, bdp->cbd_bufaddr)) {
bdp->cbd_bufaddr = 0;
fep->tx_skbuff[index] = NULL;
else
index = bdp - fep->tx_bd_base;
- dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
- FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
- bdp->cbd_bufaddr = 0;
-
skb = fep->tx_skbuff[index];
+ dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, skb->len,
+ DMA_TO_DEVICE);
+ bdp->cbd_bufaddr = 0;
/* Check for errors. */
if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
dev->hw_features = NETIF_F_SG | NETIF_F_TSO |
NETIF_F_IP_CSUM | NETIF_F_HW_VLAN_CTAG_TX;
- dev->features = NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_TSO |
+ dev->features = NETIF_F_SG | NETIF_F_TSO |
NETIF_F_HIGHDMA | NETIF_F_IP_CSUM |
NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX |
NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXCSUM;
struct rtnl_link_stats64 *vsi_stats = i40e_get_vsi_stats_struct(vsi);
int i;
+ if (!vsi->tx_rings)
+ return stats;
+
rcu_read_lock();
for (i = 0; i < vsi->num_queue_pairs; i++) {
struct i40e_ring *tx_ring, *rx_ring;
* ownership of the resources, wait and try again to
* see if they have relinquished the resources yet.
*/
- udelay(usec_interval);
+ if (usec_interval >= 1000)
+ mdelay(usec_interval/1000);
+ else
+ udelay(usec_interval);
}
ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, &phy_status);
if (ret_val)
dev_kfree_skb_any(skb);
dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
- rx_desc->data_size, DMA_FROM_DEVICE);
+ MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
}
if (rx_done)
}
dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
- rx_desc->data_size, DMA_FROM_DEVICE);
+ MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
rx_bytes = rx_desc->data_size -
(ETH_FCS_LEN + MVNETA_MH_SIZE);
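The mvneta change corrects a DMA API invariant: a buffer must be unmapped
with the same size it was mapped with; rx_desc->data_size is the received
frame length, not the mapped buffer length. The general pairing rule, as a
sketch (dev, buf and pa are hypothetical names):

    /* map and unmap must agree on the size */
    dma_addr_t pa = dma_map_single(dev, buf,
                                   MVNETA_RX_BUF_SIZE(pp->pkt_size),
                                   DMA_FROM_DEVICE);
    /* ... hardware fills the buffer ... */
    dma_unmap_single(dev, pa, MVNETA_RX_BUF_SIZE(pp->pkt_size),
                     DMA_FROM_DEVICE);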
return -ENOMEM;
ret = pci_register_driver(&mlx4_driver);
+ if (ret < 0)
+ destroy_workqueue(mlx4_wq);
return ret < 0 ? ret : 0;
}
{
struct fe_priv *np = netdev_priv(dev);
u8 __iomem *base = get_hwbase(dev);
- int result;
- memset(buffer, 0, nv_get_sset_count(dev, ETH_SS_TEST)*sizeof(u64));
+ int result, count;
+
+ count = nv_get_sset_count(dev, ETH_SS_TEST);
+ memset(buffer, 0, count * sizeof(u64));
if (!nv_link_test(dev)) {
test->flags |= ETH_TEST_FL_FAILED;
return;
}
- if (!nv_loopback_test(dev)) {
+ if (count > NV_TEST_COUNT_BASE && !nv_loopback_test(dev)) {
test->flags |= ETH_TEST_FL_FAILED;
buffer[3] = 1;
}
qlcnic_83xx_poll_process_aen(adapter);
- if (ahw->diag_test == QLCNIC_INTERRUPT_TEST) {
- ahw->diag_cnt++;
+ if (ahw->diag_test) {
+ if (ahw->diag_test == QLCNIC_INTERRUPT_TEST)
+ ahw->diag_cnt++;
qlcnic_83xx_enable_legacy_msix_mbx_intr(adapter);
return IRQ_HANDLED;
}
}
if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) {
- /* disable and free mailbox interrupt */
- if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) {
- qlcnic_83xx_enable_mbx_poll(adapter);
- qlcnic_83xx_free_mbx_intr(adapter);
- }
adapter->ahw->loopback_state = 0;
adapter->ahw->hw_ops->setup_link_event(adapter, 1);
}
{
struct qlcnic_adapter *adapter = netdev_priv(netdev);
struct qlcnic_host_sds_ring *sds_ring;
- int ring, err;
+ int ring;
clear_bit(__QLCNIC_DEV_UP, &adapter->state);
if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) {
for (ring = 0; ring < adapter->drv_sds_rings; ring++) {
sds_ring = &adapter->recv_ctx->sds_rings[ring];
- qlcnic_83xx_disable_intr(adapter, sds_ring);
- if (!(adapter->flags & QLCNIC_MSIX_ENABLED))
- qlcnic_83xx_enable_mbx_poll(adapter);
+ if (adapter->flags & QLCNIC_MSIX_ENABLED)
+ qlcnic_83xx_disable_intr(adapter, sds_ring);
}
}
qlcnic_fw_destroy_ctx(adapter);
qlcnic_detach(adapter);
- if (adapter->ahw->diag_test == QLCNIC_LOOPBACK_TEST) {
- if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) {
- err = qlcnic_83xx_setup_mbx_intr(adapter);
- qlcnic_83xx_disable_mbx_poll(adapter);
- if (err) {
- dev_err(&adapter->pdev->dev,
- "%s: failed to setup mbx interrupt\n",
- __func__);
- goto out;
- }
- }
- }
adapter->ahw->diag_test = 0;
adapter->drv_sds_rings = drv_sds_rings;
if (netif_running(netdev))
__qlcnic_up(adapter, netdev);
- if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST &&
- !(adapter->flags & QLCNIC_MSIX_ENABLED))
- qlcnic_83xx_disable_mbx_poll(adapter);
out:
netif_device_attach(netdev);
}
return;
}
+static inline void qlcnic_dump_mailbox_registers(struct qlcnic_adapter *adapter)
+{
+ struct qlcnic_hardware_context *ahw = adapter->ahw;
+ u32 offset;
+
+ offset = QLCRDX(ahw, QLCNIC_DEF_INT_MASK);
+ dev_info(&adapter->pdev->dev, "Mbx interrupt mask=0x%x, Mbx interrupt enable=0x%x, Host mbx control=0x%x, Fw mbx control=0x%x",
+ readl(ahw->pci_base0 + offset),
+ QLCRDX(ahw, QLCNIC_MBX_INTR_ENBL),
+ QLCRDX(ahw, QLCNIC_HOST_MBX_CTRL),
+ QLCRDX(ahw, QLCNIC_FW_MBX_CTRL));
+}
+
static void qlcnic_83xx_mailbox_worker(struct work_struct *work)
{
struct qlcnic_mailbox *mbx = container_of(work, struct qlcnic_mailbox,
__func__, cmd->cmd_op, cmd->type, ahw->pci_func,
ahw->op_mode);
clear_bit(QLC_83XX_MBX_READY, &mbx->status);
+ qlcnic_dump_mailbox_registers(adapter);
+ qlcnic_83xx_get_mbx_data(adapter, cmd);
qlcnic_dump_mbx(adapter, cmd);
qlcnic_83xx_idc_request_reset(adapter,
QLCNIC_FORCE_FW_DUMP_KEY);
pci_channel_state_t);
pci_ers_result_t qlcnic_83xx_io_slot_reset(struct pci_dev *);
void qlcnic_83xx_io_resume(struct pci_dev *);
+void qlcnic_83xx_stop_hw(struct qlcnic_adapter *);
#endif
adapter->ahw->idc.err_code = -EIO;
dev_err(&adapter->pdev->dev,
"%s: Device in unknown state\n", __func__);
+ clear_bit(__QLCNIC_RESETTING, &adapter->state);
return 0;
}
struct qlcnic_hardware_context *ahw = adapter->ahw;
struct qlcnic_mailbox *mbx = ahw->mailbox;
int ret = 0;
- u32 owner;
u32 val;
/* Perform NIC configuration based ready state entry actions */
set_bit(__QLCNIC_RESETTING, &adapter->state);
qlcnic_83xx_idc_enter_need_reset_state(adapter, 1);
} else {
- owner = qlcnic_83xx_idc_find_reset_owner_id(adapter);
- if (ahw->pci_func == owner)
- qlcnic_dump_fw(adapter);
+ netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n",
+ __func__);
+ qlcnic_83xx_idc_enter_failed_state(adapter, 1);
}
return -EIO;
}
return 0;
}
-static int qlcnic_83xx_idc_failed_state(struct qlcnic_adapter *adapter)
+static void qlcnic_83xx_idc_failed_state(struct qlcnic_adapter *adapter)
{
- dev_err(&adapter->pdev->dev, "%s: please restart!!\n", __func__);
+ struct qlcnic_hardware_context *ahw = adapter->ahw;
+ u32 val, owner;
+
+ val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL);
+ if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) {
+ owner = qlcnic_83xx_idc_find_reset_owner_id(adapter);
+ if (ahw->pci_func == owner) {
+ qlcnic_83xx_stop_hw(adapter);
+ qlcnic_dump_fw(adapter);
+ }
+ }
+
+ netdev_warn(adapter->netdev, "%s: Reboot will be required to recover the adapter!!\n",
+ __func__);
clear_bit(__QLCNIC_RESETTING, &adapter->state);
- adapter->ahw->idc.err_code = -EIO;
+ ahw->idc.err_code = -EIO;
- return 0;
+ return;
}
static int qlcnic_83xx_idc_quiesce_state(struct qlcnic_adapter *adapter)
adapter->ahw->idc.prev_state = adapter->ahw->idc.curr_state;
qlcnic_83xx_periodic_tasks(adapter);
- /* Do not reschedule if firmaware is in hanged state and auto
- * recovery is disabled
- */
- if ((adapter->flags & QLCNIC_FW_HANG) && !qlcnic_auto_fw_reset)
- return;
-
/* Re-schedule the function */
if (test_bit(QLC_83XX_MODULE_LOADED, &adapter->ahw->idc.status))
qlcnic_schedule_work(adapter, qlcnic_83xx_idc_poll_dev_state,
}
val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL);
- if ((val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) ||
- !qlcnic_auto_fw_reset) {
- dev_err(&adapter->pdev->dev,
- "%s:failed, device in non reset mode\n", __func__);
+ if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) {
+ netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n",
+ __func__);
+ qlcnic_83xx_idc_enter_failed_state(adapter, 0);
qlcnic_83xx_unlock_driver(adapter);
return;
}
if (size & 0xF)
size = (size + 16) & ~0xF;
- p_cache = kzalloc(size, GFP_KERNEL);
+ p_cache = vzalloc(size);
if (p_cache == NULL)
return -ENOMEM;
ret = qlcnic_83xx_lockless_flash_read32(adapter, src, p_cache,
size / sizeof(u32));
if (ret) {
- kfree(p_cache);
+ vfree(p_cache);
return ret;
}
/* 16 byte write to MS memory */
ret = qlcnic_83xx_ms_mem_write128(adapter, dest, (u32 *)p_cache,
size / 16);
if (ret) {
- kfree(p_cache);
+ vfree(p_cache);
return ret;
}
- kfree(p_cache);
+ vfree(p_cache);
return ret;
}
p_dev->ahw->reset.seq_index = index;
}
-static void qlcnic_83xx_stop_hw(struct qlcnic_adapter *p_dev)
+void qlcnic_83xx_stop_hw(struct qlcnic_adapter *p_dev)
{
p_dev->ahw->reset.seq_index = 0;
val = QLCRDX(adapter->ahw, QLC_83XX_IDC_CTRL);
if (!(val & QLC_83XX_IDC_GRACEFULL_RESET))
qlcnic_dump_fw(adapter);
+
+ if (val & QLC_83XX_IDC_DISABLE_FW_RESET_RECOVERY) {
+ netdev_info(adapter->netdev, "%s: Auto firmware recovery is disabled\n",
+ __func__);
+ qlcnic_83xx_idc_enter_failed_state(adapter, 1);
+ return err;
+ }
+
qlcnic_83xx_init_hw(adapter);
if (qlcnic_83xx_copy_bootloader(adapter))
ahw->nic_mode = QLCNIC_DEFAULT_MODE;
adapter->nic_ops->init_driver = qlcnic_83xx_init_default_driver;
ahw->idc.state_entry = qlcnic_83xx_idc_ready_state_entry;
- adapter->max_sds_rings = ahw->max_rx_ques;
- adapter->max_tx_rings = ahw->max_tx_ques;
+ adapter->max_sds_rings = QLCNIC_MAX_SDS_RINGS;
+ adapter->max_tx_rings = QLCNIC_MAX_TX_RINGS;
} else {
return -EIO;
}
static int qlcnic_validate_ring_count(struct qlcnic_adapter *adapter,
u8 rx_ring, u8 tx_ring)
{
+ if (rx_ring == 0 || tx_ring == 0)
+ return -EINVAL;
+
if (rx_ring != 0) {
if (rx_ring > adapter->max_sds_rings) {
- netdev_err(adapter->netdev, "Invalid ring count, SDS ring count %d should not be greater than max %d driver sds rings.\n",
+ netdev_err(adapter->netdev,
+ "Invalid ring count, SDS ring count %d should not be greater than max %d driver sds rings.\n",
rx_ring, adapter->max_sds_rings);
return -EINVAL;
}
}
if (tx_ring != 0) {
- if (qlcnic_82xx_check(adapter) &&
- (tx_ring > adapter->max_tx_rings)) {
+ if (tx_ring > adapter->max_tx_rings) {
netdev_err(adapter->netdev,
"Invalid ring count, Tx ring count %d should not be greater than max %d driver Tx rings.\n",
tx_ring, adapter->max_tx_rings);
return -EINVAL;
}
-
- if (qlcnic_83xx_check(adapter) &&
- (tx_ring > QLCNIC_SINGLE_RING)) {
- netdev_err(adapter->netdev,
- "Invalid ring count, Tx ring count %d should not be greater than %d driver Tx rings.\n",
- tx_ring, QLCNIC_SINGLE_RING);
- return -EINVAL;
- }
}
return 0;
struct qlcnic_hardware_context *ahw = adapter->ahw;
struct qlcnic_cmd_args cmd;
int ret, drv_sds_rings = adapter->drv_sds_rings;
+ int drv_tx_rings = adapter->drv_tx_rings;
if (qlcnic_83xx_check(adapter))
return qlcnic_83xx_interrupt_test(netdev);
clear_diag_irq:
adapter->drv_sds_rings = drv_sds_rings;
+ adapter->drv_tx_rings = drv_tx_rings;
clear_bit(__QLCNIC_RESETTING, &adapter->state);
return ret;
if (adapter->ahw->linkup && !linkup) {
netdev_info(netdev, "NIC Link is down\n");
adapter->ahw->linkup = 0;
- if (netif_running(netdev)) {
- netif_carrier_off(netdev);
- netif_tx_stop_all_queues(netdev);
- }
+ netif_carrier_off(netdev);
} else if (!adapter->ahw->linkup && linkup) {
netdev_info(netdev, "NIC Link is up\n");
adapter->ahw->linkup = 1;
- if (netif_running(netdev)) {
- netif_carrier_on(netdev);
- netif_wake_queue(netdev);
- }
+ netif_carrier_on(netdev);
}
}
} else {
adapter->ahw->nic_mode = QLCNIC_DEFAULT_MODE;
adapter->max_tx_rings = QLCNIC_MAX_HW_TX_RINGS;
+ adapter->max_sds_rings = QLCNIC_MAX_SDS_RINGS;
adapter->flags &= ~QLCNIC_ESWITCH_ENABLED;
}
qlcnic_detach(adapter);
adapter->drv_sds_rings = QLCNIC_SINGLE_RING;
- adapter->drv_tx_rings = QLCNIC_SINGLE_RING;
adapter->ahw->diag_test = test;
adapter->ahw->linkup = 0;
*/
#define DRV_NAME "qlge"
#define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver "
-#define DRV_VERSION "1.00.00.33"
+#define DRV_VERSION "1.00.00.34"
#define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */
};
#define QLGE_TEST_LEN (sizeof(ql_gstrings_test) / ETH_GSTRING_LEN)
#define QLGE_STATS_LEN ARRAY_SIZE(ql_gstrings_stats)
+#define QLGE_RCV_MAC_ERR_STATS 7
static int ql_update_ring_coalescing(struct ql_adapter *qdev)
{
iter++;
}
+ /* Update receive mac error statistics */
+ iter += QLGE_RCV_MAC_ERR_STATS;
+
/*
* Get Per-priority TX pause frame counter statistics.
*/
netdev_features_t features)
{
int err;
- /*
- * Since there is no support for separate rx/tx vlan accel
- * enable/disable make sure tx flag is always in same state as rx.
- */
- if (features & NETIF_F_HW_VLAN_CTAG_RX)
- features |= NETIF_F_HW_VLAN_CTAG_TX;
- else
- features &= ~NETIF_F_HW_VLAN_CTAG_TX;
/* Update the behavior of vlan accel in the adapter */
err = qlge_update_hw_vlan_features(ndev, features);
EFX_MAX_FRAME_LEN(efx->net_dev->mtu) +
efx->type->rx_buffer_padding);
rx_buf_len = (sizeof(struct efx_rx_page_state) +
- NET_IP_ALIGN + efx->rx_dma_len);
+ efx->rx_ip_align + efx->rx_dma_len);
if (rx_buf_len <= PAGE_SIZE) {
efx->rx_scatter = efx->type->always_rx_scatter;
efx->rx_buffer_order = 0;
WARN_ON(channel->rx_pkt_n_frags);
}
+ efx_ptp_start_datapath(efx);
+
if (netif_device_present(efx->net_dev))
netif_tx_wake_all_queues(efx->net_dev);
}
EFX_ASSERT_RESET_SERIALISED(efx);
BUG_ON(efx->port_enabled);
+ efx_ptp_stop_datapath(efx);
+
/* Stop RX refill */
efx_for_each_channel(channel, efx) {
efx_for_each_channel_rx_queue(rx_queue, channel)
efx->net_dev = net_dev;
efx->rx_prefix_size = efx->type->rx_prefix_size;
+ efx->rx_ip_align =
+ NET_IP_ALIGN ? (efx->rx_prefix_size + NET_IP_ALIGN) % 4 : 0;
efx->rx_packet_hash_offset =
efx->type->rx_hash_offset - efx->type->rx_prefix_size;
spin_lock_init(&efx->stats_lock);
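The offset folds the hardware RX prefix length into the usual NET_IP_ALIGN
adjustment so that the IP header, rather than the buffer start, ends up
aligned. Two hypothetical prefix sizes, arithmetic only (actual prefix sizes
are per-NIC):

    /* NET_IP_ALIGN = 2, rx_prefix_size = 14: (14 + 2) % 4 = 0, no offset
     * NET_IP_ALIGN = 2, rx_prefix_size = 16: (16 + 2) % 4 = 2, start DMA
     *                                        two bytes into the buffer
     */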
static void efx_mcdi_timeout_async(unsigned long context);
static int efx_mcdi_drv_attach(struct efx_nic *efx, bool driver_operating,
bool *was_attached_out);
+static bool efx_mcdi_poll_once(struct efx_nic *efx);
static inline struct efx_mcdi_iface *efx_mcdi(struct efx_nic *efx)
{
}
}
+static bool efx_mcdi_poll_once(struct efx_nic *efx)
+{
+ struct efx_mcdi_iface *mcdi = efx_mcdi(efx);
+
+ rmb();
+ if (!efx->type->mcdi_poll_response(efx))
+ return false;
+
+ spin_lock_bh(&mcdi->iface_lock);
+ efx_mcdi_read_response_header(efx);
+ spin_unlock_bh(&mcdi->iface_lock);
+
+ return true;
+}
+
static int efx_mcdi_poll(struct efx_nic *efx)
{
struct efx_mcdi_iface *mcdi = efx_mcdi(efx);
time = jiffies;
- rmb();
- if (efx->type->mcdi_poll_response(efx))
+ if (efx_mcdi_poll_once(efx))
break;
if (time_after(time, finish))
return -ETIMEDOUT;
}
- spin_lock_bh(&mcdi->iface_lock);
- efx_mcdi_read_response_header(efx);
- spin_unlock_bh(&mcdi->iface_lock);
-
/* Return rc=0 like wait_event_timeout() */
return 0;
}
rc = efx_mcdi_await_completion(efx);
if (rc != 0) {
+ netif_err(efx, hw, efx->net_dev,
+ "MC command 0x%x inlen %d mode %d timed out\n",
+ cmd, (int)inlen, mcdi->mode);
+
+ if (mcdi->mode == MCDI_MODE_EVENTS && efx_mcdi_poll_once(efx)) {
+ netif_err(efx, hw, efx->net_dev,
+ "MCDI request was completed without an event\n");
+ rc = 0;
+ }
+
/* Close the race with efx_mcdi_ev_cpl() executing just too late
* and completing a request we've just cancelled, by ensuring
* that the seqno check therein fails.
++mcdi->seqno;
++mcdi->credits;
spin_unlock_bh(&mcdi->iface_lock);
+ }
- netif_err(efx, hw, efx->net_dev,
- "MC command 0x%x inlen %d mode %d timed out\n",
- cmd, (int)inlen, mcdi->mode);
- } else {
+ if (rc == 0) {
size_t hdr_len, data_len;
/* At the very least we need a memory barrier here to ensure
* @n_channels: Number of channels in use
* @n_rx_channels: Number of channels used for RX (= number of RX queues)
* @n_tx_channels: Number of channels used for TX
+ * @rx_ip_align: RX DMA address offset to have IP header aligned in
+ *	accordance with NET_IP_ALIGN
* @rx_dma_len: Current maximum RX DMA length
* @rx_buffer_order: Order (log2) of number of pages for each RX buffer
* @rx_buffer_truesize: Amortised allocation size of an RX buffer,
unsigned rss_spread;
unsigned tx_channel_offset;
unsigned n_tx_channels;
+ unsigned int rx_ip_align;
unsigned int rx_dma_len;
unsigned int rx_buffer_order;
unsigned int rx_buffer_truesize;
bool efx_ptp_is_ptp_tx(struct efx_nic *efx, struct sk_buff *skb);
int efx_ptp_tx(struct efx_nic *efx, struct sk_buff *skb);
void efx_ptp_event(struct efx_nic *efx, efx_qword_t *ev);
+void efx_ptp_start_datapath(struct efx_nic *efx);
+void efx_ptp_stop_datapath(struct efx_nic *efx);
extern const struct efx_nic_type falcon_a1_nic_type;
extern const struct efx_nic_type falcon_b0_nic_type;
* @evt_list: List of MC receive events awaiting packets
* @evt_free_list: List of free events
* @evt_lock: Lock for manipulating evt_list and evt_free_list
+ * @evt_overflow: Boolean indicating that event list has overflowed
* @rx_evts: Instantiated events (on evt_list and evt_free_list)
* @workwq: Work queue for processing pending PTP operations
* @work: Work task
struct list_head evt_list;
struct list_head evt_free_list;
spinlock_t evt_lock;
+ bool evt_overflow;
struct efx_ptp_event_rx rx_evts[MAX_RECEIVE_EVENTS];
struct workqueue_struct *workwq;
struct work_struct work;
}
}
}
+ /* If the event overflow flag is set and the event list is now empty,
+ * clear the flag to re-enable the overflow warning message.
+ */
+ if (ptp->evt_overflow && list_empty(&ptp->evt_list))
+ ptp->evt_overflow = false;
spin_unlock_bh(&ptp->evt_lock);
}
break;
}
}
+ /* If the event overflow flag is set and the event list is now empty,
+ * clear the flag to re-enable the overflow warning message.
+ */
+ if (ptp->evt_overflow && list_empty(&ptp->evt_list))
+ ptp->evt_overflow = false;
spin_unlock_bh(&ptp->evt_lock);
return rc;
__skb_queue_tail(q, skb);
} else if (time_after(jiffies, match->expiry)) {
match->state = PTP_PACKET_STATE_TIMED_OUT;
- netif_warn(efx, rx_err, efx->net_dev,
- "PTP packet - no timestamp seen\n");
+ if (net_ratelimit())
+ netif_warn(efx, rx_err, efx->net_dev,
+ "PTP packet - no timestamp seen\n");
__skb_queue_tail(q, skb);
} else {
/* Replace unprocessed entry and stop */
static int efx_ptp_stop(struct efx_nic *efx)
{
struct efx_ptp_data *ptp = efx->ptp_data;
- int rc = efx_ptp_disable(efx);
struct list_head *cursor;
struct list_head *next;
+ int rc;
+
+ if (ptp == NULL)
+ return 0;
+
+ rc = efx_ptp_disable(efx);
if (ptp->rxfilter_installed) {
efx_filter_remove_id_safe(efx, EFX_FILTER_PRI_REQUIRED,
list_for_each_safe(cursor, next, &efx->ptp_data->evt_list) {
list_move(cursor, &efx->ptp_data->evt_free_list);
}
+ ptp->evt_overflow = false;
spin_unlock_bh(&efx->ptp_data->evt_lock);
return rc;
}
+static int efx_ptp_restart(struct efx_nic *efx)
+{
+ if (efx->ptp_data && efx->ptp_data->enabled)
+ return efx_ptp_start(efx);
+ return 0;
+}
+
static void efx_ptp_pps_worker(struct work_struct *work)
{
struct efx_ptp_data *ptp =
spin_lock_init(&ptp->evt_lock);
for (pos = 0; pos < MAX_RECEIVE_EVENTS; pos++)
list_add(&ptp->rx_evts[pos].link, &ptp->evt_free_list);
+ ptp->evt_overflow = false;
ptp->phc_clock_info.owner = THIS_MODULE;
snprintf(ptp->phc_clock_info.name,
skb->len >= PTP_MIN_LENGTH &&
skb->len <= MC_CMD_PTP_IN_TRANSMIT_PACKET_MAXNUM &&
likely(skb->protocol == htons(ETH_P_IP)) &&
+ skb_transport_header_was_set(skb) &&
+ skb_network_header_len(skb) >= sizeof(struct iphdr) &&
ip_hdr(skb)->protocol == IPPROTO_UDP &&
+ skb_headlen(skb) >=
+ skb_transport_offset(skb) + sizeof(struct udphdr) &&
udp_hdr(skb)->dest == htons(PTP_EVENT_PORT);
}
{
if ((enable_wanted != efx->ptp_data->enabled) ||
(enable_wanted && (efx->ptp_data->mode != new_mode))) {
- int rc;
+ int rc = 0;
if (enable_wanted) {
/* Change of mode requires disable */
* succeed.
*/
efx->ptp_data->mode = new_mode;
- rc = efx_ptp_start(efx);
+ if (netif_running(efx->net_dev))
+ rc = efx_ptp_start(efx);
if (rc == 0) {
rc = efx_ptp_synchronize(efx,
PTP_SYNC_ATTEMPTS * 2);
list_add_tail(&evt->link, &ptp->evt_list);
queue_work(ptp->workwq, &ptp->work);
- } else {
- netif_err(efx, rx_err, efx->net_dev, "No free PTP event");
+ } else if (!ptp->evt_overflow) {
+ /* Log a warning message and set the event overflow flag.
+ * The message won't be logged again until the event queue
+ * becomes empty.
+ */
+ netif_err(efx, rx_err, efx->net_dev, "PTP event queue overflow\n");
+ ptp->evt_overflow = true;
}
spin_unlock_bh(&ptp->evt_lock);
}
if (rc != 0)
return rc;
- ptp_data->current_adjfreq = delta;
+ ptp_data->current_adjfreq = adjustment_ns;
return 0;
}
MCDI_SET_DWORD(inbuf, PTP_IN_OP, MC_CMD_PTP_OP_ADJUST);
MCDI_SET_DWORD(inbuf, PTP_IN_PERIPH_ID, 0);
- MCDI_SET_QWORD(inbuf, PTP_IN_ADJUST_FREQ, 0);
+ MCDI_SET_QWORD(inbuf, PTP_IN_ADJUST_FREQ, ptp_data->current_adjfreq);
MCDI_SET_DWORD(inbuf, PTP_IN_ADJUST_SECONDS, (u32)delta_ts.tv_sec);
MCDI_SET_DWORD(inbuf, PTP_IN_ADJUST_NANOSECONDS, (u32)delta_ts.tv_nsec);
return efx_mcdi_rpc(efx, MC_CMD_PTP, inbuf, sizeof(inbuf),
efx->extra_channel_type[EFX_EXTRA_CHANNEL_PTP] =
&efx_ptp_channel_type;
}
+
+void efx_ptp_start_datapath(struct efx_nic *efx)
+{
+ if (efx_ptp_restart(efx))
+ netif_err(efx, drv, efx->net_dev, "Failed to restart PTP.\n");
+}
+
+void efx_ptp_stop_datapath(struct efx_nic *efx)
+{
+ efx_ptp_stop(efx);
+}
void efx_rx_config_page_split(struct efx_nic *efx)
{
- efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + NET_IP_ALIGN,
+ efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + efx->rx_ip_align,
EFX_RX_BUF_ALIGNMENT);
efx->rx_bufs_per_page = efx->rx_buffer_order ? 1 :
((PAGE_SIZE - sizeof(struct efx_rx_page_state)) /
do {
index = rx_queue->added_count & rx_queue->ptr_mask;
rx_buf = efx_rx_buffer(rx_queue, index);
- rx_buf->dma_addr = dma_addr + NET_IP_ALIGN;
+ rx_buf->dma_addr = dma_addr + efx->rx_ip_align;
rx_buf->page = page;
- rx_buf->page_offset = page_offset + NET_IP_ALIGN;
+ rx_buf->page_offset = page_offset + efx->rx_ip_align;
rx_buf->len = efx->rx_dma_len;
rx_buf->flags = 0;
++rx_queue->added_count;
#include <linux/mii.h>
#include <linux/workqueue.h>
#include <linux/of.h>
+#include <linux/of_device.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
}
}
+#if IS_BUILTIN(CONFIG_OF)
+static const struct of_device_id smc91x_match[] = {
+ { .compatible = "smsc,lan91c94", },
+ { .compatible = "smsc,lan91c111", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, smc91x_match);
+#endif
+
/*
* smc_init(void)
* Input parameters:
static int smc_drv_probe(struct platform_device *pdev)
{
struct smc91x_platdata *pd = dev_get_platdata(&pdev->dev);
+ const struct of_device_id *match = NULL;
struct smc_local *lp;
struct net_device *ndev;
struct resource *res, *ires;
*/
lp = netdev_priv(ndev);
+ lp->cfg.flags = 0;
if (pd) {
memcpy(&lp->cfg, pd, sizeof(lp->cfg));
lp->io_shift = SMC91X_IO_SHIFT(lp->cfg.flags);
- } else {
+ }
+
+#if IS_BUILTIN(CONFIG_OF)
+ match = of_match_device(of_match_ptr(smc91x_match), &pdev->dev);
+ if (match) {
+ struct device_node *np = pdev->dev.of_node;
+ u32 val;
+
+ /* Combination of IO widths supported, default to 16-bit */
+ if (!of_property_read_u32(np, "reg-io-width", &val)) {
+ if (val & 1)
+ lp->cfg.flags |= SMC91X_USE_8BIT;
+ if ((val == 0) || (val & 2))
+ lp->cfg.flags |= SMC91X_USE_16BIT;
+ if (val & 4)
+ lp->cfg.flags |= SMC91X_USE_32BIT;
+ } else {
+ lp->cfg.flags |= SMC91X_USE_16BIT;
+ }
+ }
+#endif
+
+ if (!pd && !match) {
lp->cfg.flags |= (SMC_CAN_USE_8BIT) ? SMC91X_USE_8BIT : 0;
lp->cfg.flags |= (SMC_CAN_USE_16BIT) ? SMC91X_USE_16BIT : 0;
lp->cfg.flags |= (SMC_CAN_USE_32BIT) ? SMC91X_USE_32BIT : 0;
return 0;
}
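
The reg-io-width handling above treats the property as a bit mask rather than a single width: bit 0 enables 8-bit, bit 1 16-bit, bit 2 32-bit access, and a missing or zero value falls back to 16-bit. Decoded in isolation (the flag names here are placeholders for the driver's SMC91X_USE_* flags):

    #include <stdint.h>

    #define USE_8BIT   (1u << 0)
    #define USE_16BIT  (1u << 1)
    #define USE_32BIT  (1u << 2)

    static unsigned int decode_io_width(uint32_t val)
    {
        unsigned int flags = 0;

        if (val & 1)
            flags |= USE_8BIT;
        if (val == 0 || (val & 2))   /* zero or bit 1: allow 16-bit */
            flags |= USE_16BIT;
        if (val & 4)
            flags |= USE_32BIT;
        return flags;
    }
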
-#ifdef CONFIG_OF
-static const struct of_device_id smc91x_match[] = {
- { .compatible = "smsc,lan91c94", },
- { .compatible = "smsc,lan91c111", },
- {},
-};
-MODULE_DEVICE_TABLE(of, smc91x_match);
-#endif
-
static struct dev_pm_ops smc_drv_pm_ops = {
.suspend = smc_drv_suspend,
.resume = smc_drv_resume,
ndev->features = NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO
| NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX |
NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXCSUM
- /*| NETIF_F_FRAGLIST */
;
ndev->hw_features = NETIF_F_IP_CSUM | NETIF_F_SG |
NETIF_F_TSO | NETIF_F_HW_VLAN_CTAG_TX;
* receive descs
*/
cpsw_info(priv, ifup, "submitted %d rx descriptors\n", i);
+
+ if (cpts_register(&priv->pdev->dev, priv->cpts,
+ priv->data.cpts_clock_mult,
+ priv->data.cpts_clock_shift))
+ dev_err(priv->dev, "error registering cpts device\n");
+
}
/* Enable Interrupt pacing if configured */
netif_carrier_off(priv->ndev);
if (cpsw_common_res_usage_state(priv) <= 1) {
+ cpts_unregister(priv->cpts);
cpsw_intr_disable(priv);
cpdma_ctlr_int_ctrl(priv->dma, false);
cpdma_ctlr_stop(priv->dma);
}
i++;
+ if (i == data->slaves)
+ break;
}
return 0;
goto clean_runtime_disable_ret;
}
priv->regs = ss_regs;
- priv->version = __raw_readl(&priv->regs->id_ver);
priv->host_port = HOST_PORT_NUM;
+ /* Need to enable clocks with the runtime PM API to access module
+ * registers
+ */
+ pm_runtime_get_sync(&pdev->dev);
+ priv->version = readl(&priv->regs->id_ver);
+ pm_runtime_put_sync(&pdev->dev);
+
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
priv->wr_regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(priv->wr_regs)) {
unregister_netdev(cpsw_get_slave_ndev(priv, 1));
unregister_netdev(ndev);
- cpts_unregister(priv->cpts);
-
cpsw_ale_destroy(priv->ale);
cpdma_chan_destroy(priv->txch);
cpdma_chan_destroy(priv->rxch);
#include <linux/davinci_emac.h>
#include <linux/of.h>
#include <linux/of_address.h>
+#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/of_net.h>
#endif
};
+static const struct of_device_id davinci_emac_of_match[];
+
static struct emac_platform_data *
davinci_emac_of_get_pdata(struct platform_device *pdev, struct emac_priv *priv)
{
struct device_node *np;
+ const struct of_device_id *match;
+ const struct emac_platform_data *auxdata;
struct emac_platform_data *pdata = NULL;
const u8 *mac_addr;
priv->phy_node = of_parse_phandle(np, "phy-handle", 0);
if (!priv->phy_node)
- pdata->phy_id = "";
+ pdata->phy_id = NULL;
+
+ auxdata = pdev->dev.platform_data;
+ if (auxdata) {
+ pdata->interrupt_enable = auxdata->interrupt_enable;
+ pdata->interrupt_disable = auxdata->interrupt_disable;
+ }
+
+ match = of_match_device(davinci_emac_of_match, &pdev->dev);
+ if (match && match->data) {
+ auxdata = match->data;
+ pdata->version = auxdata->version;
+ pdata->hw_ram_addr = auxdata->hw_ram_addr;
+ }
pdev->dev.platform_data = pdata;
};
#if IS_ENABLED(CONFIG_OF)
+static const struct emac_platform_data am3517_emac_data = {
+ .version = EMAC_VERSION_2,
+ .hw_ram_addr = 0x01e20000,
+};
+
static const struct of_device_id davinci_emac_of_match[] = {
{.compatible = "ti,davinci-dm6467-emac", },
+ {.compatible = "ti,am3517-emac", .data = &am3517_emac_data, },
{},
};
MODULE_DEVICE_TABLE(of, davinci_emac_of_match);
platform_set_drvdata(op, ndev);
SET_NETDEV_DEV(ndev, &op->dev);
ndev->flags &= ~IFF_MULTICAST; /* clear multicast */
- ndev->features = NETIF_F_SG | NETIF_F_FRAGLIST;
+ ndev->features = NETIF_F_SG;
ndev->netdev_ops = &temac_netdev_ops;
ndev->ethtool_ops = &temac_ethtool_ops;
#if 0
SET_NETDEV_DEV(ndev, &op->dev);
ndev->flags &= ~IFF_MULTICAST; /* clear multicast */
- ndev->features = NETIF_F_SG | NETIF_F_FRAGLIST;
+ ndev->features = NETIF_F_SG;
ndev->netdev_ops = &axienet_netdev_ops;
ndev->ethtool_ops = &axienet_ethtool_ops;
__raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK,
drvdata->base_addr + XEL_TSR_OFFSET);
- /* Enable the Tx interrupts for the second Buffer if
- * configured in HW */
- if (drvdata->tx_ping_pong != 0) {
- reg_data = __raw_readl(drvdata->base_addr +
- XEL_BUFFER_OFFSET + XEL_TSR_OFFSET);
- __raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK,
- drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_TSR_OFFSET);
- }
-
/* Enable the Rx interrupts for the first buffer */
__raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr + XEL_RSR_OFFSET);
- /* Enable the Rx interrupts for the second Buffer if
- * configured in HW */
- if (drvdata->rx_ping_pong != 0) {
- __raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr +
- XEL_BUFFER_OFFSET + XEL_RSR_OFFSET);
- }
-
/* Enable the Global Interrupt Enable */
__raw_writel(XEL_GIER_GIE_MASK, drvdata->base_addr + XEL_GIER_OFFSET);
}
__raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK),
drvdata->base_addr + XEL_TSR_OFFSET);
- /* Disable the Tx interrupts for the second Buffer
- * if configured in HW */
- if (drvdata->tx_ping_pong != 0) {
- reg_data = __raw_readl(drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_TSR_OFFSET);
- __raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK),
- drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_TSR_OFFSET);
- }
-
/* Disable the Rx interrupts for the first buffer */
reg_data = __raw_readl(drvdata->base_addr + XEL_RSR_OFFSET);
__raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK),
drvdata->base_addr + XEL_RSR_OFFSET);
-
- /* Disable the Rx interrupts for the second buffer
- * if configured in HW */
- if (drvdata->rx_ping_pong != 0) {
-
- reg_data = __raw_readl(drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_RSR_OFFSET);
- __raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK),
- drvdata->base_addr + XEL_BUFFER_OFFSET +
- XEL_RSR_OFFSET);
- }
}
/**
*to_u16_ptr++ = *from_u16_ptr++;
*to_u16_ptr++ = *from_u16_ptr++;
+ /* This barrier ensures the data assembled above has left
+ * the processor store buffers and reached the destination
+ * memory before the next word is output.
+ */
+ wmb();
+
/* Output a word */
*to_u32_ptr++ = align_buffer;
}
for (; length > 0; length--)
*to_u8_ptr++ = *from_u8_ptr++;
+ /* This barrier ensures the data assembled above has left
+ * the processor store buffers and reached the destination
+ * memory before the final word is output.
+ */
+ wmb();
*to_u32_ptr = align_buffer;
}
}
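
The wmb() calls above order the stores that assemble align_buffer ahead of the word pushed out to the device. A loose C11 analogue of that ordering, as a concept sketch rather than the kernel primitive:

    #include <stdatomic.h>
    #include <stdint.h>

    static void publish_word(volatile uint32_t *dst, uint32_t word)
    {
        /* make the buffer-assembly stores visible before the word below */
        atomic_thread_fence(memory_order_release);
        *dst = word;
    }
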
return -EINVAL;
nvdev->start_remove = true;
- cancel_delayed_work_sync(&ndevctx->dwork);
cancel_work_sync(&ndevctx->work);
netif_tx_disable(ndev);
rndis_filter_device_remove(hdev);
int ret;
int vnet_hdr_len = 0;
int vlan_offset = 0;
- int copied;
+ int copied, total;
if (q->flags & IFF_VNET_HDR) {
struct virtio_net_hdr vnet_hdr;
if (memcpy_toiovecend(iv, (void *)&vnet_hdr, 0, sizeof(vnet_hdr)))
return -EFAULT;
}
- copied = vnet_hdr_len;
+ total = copied = vnet_hdr_len;
+ total += skb->len;
if (!vlan_tx_tag_present(skb))
len = min_t(int, skb->len, len);
vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
len = min_t(int, skb->len + VLAN_HLEN, len);
+ total += VLAN_HLEN;
copy = min_t(int, vlan_offset, len);
ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy);
}
ret = skb_copy_datagram_const_iovec(skb, vlan_offset, iv, copied, len);
- copied += len;
done:
- return ret ? ret : copied;
+ return ret ? ret : total;
}
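
After this change the function reports the packet's full on-wire size (vnet header plus skb->len, plus VLAN_HLEN when a tag is re-inserted) instead of the number of bytes that happened to fit, so userspace can detect truncation. The accounting, restated as a standalone sketch with illustrative names:

    static int reported_length(int vnet_hdr_len, int skb_len, int vlan_tagged)
    {
        int total = vnet_hdr_len + skb_len;

        if (vlan_tagged)
            total += 4;    /* VLAN_HLEN: the tag is materialized in the copy */
        return total;      /* reported even if fewer bytes were copied */
    }
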
static ssize_t macvtap_do_read(struct macvtap_queue *q, struct kiocb *iocb,
}
ret = macvtap_do_read(q, iocb, iv, len, file->f_flags & O_NONBLOCK);
- ret = min_t(ssize_t, ret, len); /* XXX copied from tun.c. Why? */
+ ret = min_t(ssize_t, ret, len);
+ if (ret > 0)
+ iocb->ki_pos = ret;
out:
return ret;
}
.suspend = genphy_suspend,
.resume = genphy_resume,
.driver = { .owner = THIS_MODULE,},
+}, {
+ .phy_id = PHY_ID_KSZ8041RNLI,
+ .phy_id_mask = 0x00fffff0,
+ .name = "Micrel KSZ8041RNLI",
+ .features = PHY_BASIC_FEATURES |
+ SUPPORTED_Pause | SUPPORTED_Asym_Pause,
+ .flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
+ .config_init = kszphy_config_init,
+ .config_aneg = genphy_config_aneg,
+ .read_status = genphy_read_status,
+ .ack_interrupt = kszphy_ack_interrupt,
+ .config_intr = kszphy_config_intr,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ .driver = { .owner = THIS_MODULE,},
}, {
.phy_id = PHY_ID_KSZ8051,
.phy_id_mask = 0x00fffff0,
{
struct tun_pi pi = { 0, skb->protocol };
ssize_t total = 0;
- int vlan_offset = 0;
+ int vlan_offset = 0, copied;
if (!(tun->flags & TUN_NO_PI)) {
if ((len -= sizeof(pi)) < 0)
total += tun->vnet_hdr_sz;
}
+ copied = total;
+ total += skb->len;
if (!vlan_tx_tag_present(skb)) {
len = min_t(int, skb->len, len);
} else {
vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
len = min_t(int, skb->len + VLAN_HLEN, len);
+ total += VLAN_HLEN;
copy = min_t(int, vlan_offset, len);
- ret = skb_copy_datagram_const_iovec(skb, 0, iv, total, copy);
+ ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy);
len -= copy;
- total += copy;
+ copied += copy;
if (ret || !len)
goto done;
copy = min_t(int, sizeof(veth), len);
- ret = memcpy_toiovecend(iv, (void *)&veth, total, copy);
+ ret = memcpy_toiovecend(iv, (void *)&veth, copied, copy);
len -= copy;
- total += copy;
+ copied += copy;
if (ret || !len)
goto done;
}
- skb_copy_datagram_const_iovec(skb, vlan_offset, iv, total, len);
- total += len;
+ skb_copy_datagram_const_iovec(skb, vlan_offset, iv, copied, len);
done:
tun->dev->stats.tx_packets++;
ret = tun_do_read(tun, tfile, iocb, iv, len,
file->f_flags & O_NONBLOCK);
ret = min_t(ssize_t, ret, len);
+ if (ret > 0)
+ iocb->ki_pos = ret;
out:
tun_put(tun);
return ret;
if (unlikely(len < sizeof(struct virtio_net_hdr) + ETH_HLEN)) {
pr_debug("%s: short packet %i\n", dev->name, len);
dev->stats.rx_length_errors++;
- if (vi->big_packets)
- give_pages(rq, buf);
- else if (vi->mergeable_rx_bufs)
+ if (vi->mergeable_rx_bufs)
put_page(virt_to_head_page(buf));
+ else if (vi->big_packets)
+ give_pages(rq, buf);
else
dev_kfree_skb(buf);
return;
static void virtnet_free_queues(struct virtnet_info *vi)
{
+ int i;
+
+ for (i = 0; i < vi->max_queue_pairs; i++)
+ netif_napi_del(&vi->rq[i].napi);
+
kfree(vi->rq);
kfree(vi->sq);
}
struct virtqueue *vq = vi->rq[i].vq;
while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
- if (vi->big_packets)
- give_pages(&vi->rq[i], buf);
- else if (vi->mergeable_rx_bufs)
+ if (vi->mergeable_rx_bufs)
put_page(virt_to_head_page(buf));
+ else if (vi->big_packets)
+ give_pages(&vi->rq[i], buf);
else
dev_kfree_skb(buf);
--vi->rq[i].num;
netdev_dbg(dev, "circular route to %pI4\n",
&dst->sin.sin_addr.s_addr);
dev->stats.collisions++;
- goto tx_error;
+ goto rt_tx_error;
}
/* Bypass encapsulation if the destination is local */
int quick_drop;
s32 t[3], f[3] = {5180, 5500, 5785};
- if (!(pBase->miscConfiguration & BIT(1)))
+ if (!(pBase->miscConfiguration & BIT(4)))
return;
- if (freq < 4000)
- quick_drop = eep->modalHeader2G.quick_drop;
- else {
- t[0] = eep->base_ext1.quick_drop_low;
- t[1] = eep->modalHeader5G.quick_drop;
- t[2] = eep->base_ext1.quick_drop_high;
- quick_drop = ar9003_hw_power_interpolate(freq, f, t, 3);
+ if (AR_SREV_9300(ah) || AR_SREV_9580(ah) || AR_SREV_9340(ah)) {
+ if (freq < 4000) {
+ quick_drop = eep->modalHeader2G.quick_drop;
+ } else {
+ t[0] = eep->base_ext1.quick_drop_low;
+ t[1] = eep->modalHeader5G.quick_drop;
+ t[2] = eep->base_ext1.quick_drop_high;
+ quick_drop = ar9003_hw_power_interpolate(freq, f, t, 3);
+ }
+ REG_RMW_FIELD(ah, AR_PHY_AGC, AR_PHY_AGC_QUICK_DROP, quick_drop);
}
- REG_RMW_FIELD(ah, AR_PHY_AGC, AR_PHY_AGC_QUICK_DROP, quick_drop);
}
static void ar9003_hw_txend_to_xpa_off_apply(struct ath_hw *ah, bool is2ghz)
struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
u8 bias;
- if (!(eep->baseEepHeader.featureEnable & 0x40))
+ if (!(eep->baseEepHeader.miscConfiguration & 0x40))
return;
if (!AR_SREV_9300(ah))
else
clockrate = ATH9K_CLOCK_RATE_5GHZ_OFDM;
- if (IS_CHAN_HT40(chan))
- clockrate *= 2;
-
- if (ah->curchan) {
+ if (chan) {
+ if (IS_CHAN_HT40(chan))
+ clockrate *= 2;
if (IS_CHAN_HALF_RATE(chan))
clockrate /= 2;
if (IS_CHAN_QUARTER_RATE(chan))
if (!rts_thresh || (len > rts_thresh))
rts = true;
}
+
+ if (!aggr)
+ len = fi->framelen;
+
ath_buf_set_rate(sc, bf, &info, len, rts);
}
case WCN36XX_HAL_DELETE_STA_CONTEXT_IND:
mutex_lock(&wcn->hal_ind_mutex);
msg_ind = kmalloc(sizeof(*msg_ind), GFP_KERNEL);
- msg_ind->msg_len = len;
- msg_ind->msg = kmalloc(len, GFP_KERNEL);
- memcpy(msg_ind->msg, buf, len);
- list_add_tail(&msg_ind->list, &wcn->hal_ind_queue);
- queue_work(wcn->hal_ind_wq, &wcn->hal_ind_work);
- wcn36xx_dbg(WCN36XX_DBG_HAL, "indication arrived\n");
+ if (msg_ind) {
+ msg_ind->msg_len = len;
+ msg_ind->msg = kmalloc(len, GFP_KERNEL);
+ memcpy(msg_ind->msg, buf, len);
+ list_add_tail(&msg_ind->list, &wcn->hal_ind_queue);
+ queue_work(wcn->hal_ind_wq, &wcn->hal_ind_work);
+ wcn36xx_dbg(WCN36XX_DBG_HAL, "indication arrived\n");
+ }
mutex_unlock(&wcn->hal_ind_mutex);
+ if (msg_ind)
+ break;
+ /* FIXME: Do something smarter than just printing an error. */
+ wcn36xx_err("Ran out of memory while handling SMD_EVENT (%d)\n",
+ msg_header->msg_type);
break;
default:
wcn36xx_err("SMD_EVENT (%d) not supported\n",
tristate "Broadcom IEEE802.11n PCIe SoftMAC WLAN driver"
depends on MAC80211
depends on BCMA
+ select NEW_LEDS if BCMA_DRIVER_GPIO
+ select LEDS_CLASS if BCMA_DRIVER_GPIO
select BRCMUTIL
select FW_LOADER
select CRC_CCITT
brcmf_err("Disable F2 failed:%d\n",
err_ret);
}
+ } else {
+ err_ret = -ENOENT;
}
} else if ((regaddr == SDIO_CCCR_ABORT) ||
(regaddr == SDIO_CCCR_IENx)) {
#include "iwl-agn-hw.h"
/* Highest firmware API version supported */
-#define IWL7260_UCODE_API_MAX 7
-#define IWL3160_UCODE_API_MAX 7
+#define IWL7260_UCODE_API_MAX 8
+#define IWL3160_UCODE_API_MAX 8
/* Oldest version we won't warn about */
#define IWL7260_UCODE_API_OK 7
.ht_params = &iwl7000_ht_params,
.nvm_ver = IWL7260_NVM_VERSION,
.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
+ .host_interrupt_operation_mode = true,
};
const struct iwl_cfg iwl7260_2ac_cfg_high_temp = {
.nvm_ver = IWL7260_NVM_VERSION,
.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
.high_temp = true,
+ .host_interrupt_operation_mode = true,
};
const struct iwl_cfg iwl7260_2n_cfg = {
.ht_params = &iwl7000_ht_params,
.nvm_ver = IWL7260_NVM_VERSION,
.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
+ .host_interrupt_operation_mode = true,
};
const struct iwl_cfg iwl7260_n_cfg = {
.ht_params = &iwl7000_ht_params,
.nvm_ver = IWL7260_NVM_VERSION,
.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
+ .host_interrupt_operation_mode = true,
};
const struct iwl_cfg iwl3160_2ac_cfg = {
.ht_params = &iwl7000_ht_params,
.nvm_ver = IWL3160_NVM_VERSION,
.nvm_calib_ver = IWL3160_TX_POWER_VERSION,
+ .host_interrupt_operation_mode = true,
};
const struct iwl_cfg iwl3160_2n_cfg = {
.ht_params = &iwl7000_ht_params,
.nvm_ver = IWL3160_NVM_VERSION,
.nvm_calib_ver = IWL3160_TX_POWER_VERSION,
+ .host_interrupt_operation_mode = true,
};
const struct iwl_cfg iwl3160_n_cfg = {
.ht_params = &iwl7000_ht_params,
.nvm_ver = IWL3160_NVM_VERSION,
.nvm_calib_ver = IWL3160_TX_POWER_VERSION,
+ .host_interrupt_operation_mode = true,
};
const struct iwl_cfg iwl7265_2ac_cfg = {
.nvm_calib_ver = IWL7265_TX_POWER_VERSION,
};
+const struct iwl_cfg iwl7265_2n_cfg = {
+ .name = "Intel(R) Dual Band Wireless N 7265",
+ .fw_name_pre = IWL7265_FW_PRE,
+ IWL_DEVICE_7000,
+ .ht_params = &iwl7000_ht_params,
+ .nvm_ver = IWL7265_NVM_VERSION,
+ .nvm_calib_ver = IWL7265_TX_POWER_VERSION,
+};
+
+const struct iwl_cfg iwl7265_n_cfg = {
+ .name = "Intel(R) Wireless N 7265",
+ .fw_name_pre = IWL7265_FW_PRE,
+ IWL_DEVICE_7000,
+ .ht_params = &iwl7000_ht_params,
+ .nvm_ver = IWL7265_NVM_VERSION,
+ .nvm_calib_ver = IWL7265_TX_POWER_VERSION,
+};
+
MODULE_FIRMWARE(IWL7260_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
MODULE_FIRMWARE(IWL3160_MODULE_FIRMWARE(IWL3160_UCODE_API_OK));
* @rx_with_siso_diversity: 1x1 device with rx antenna diversity
* @internal_wimax_coex: internal wifi/wimax combo device
* @high_temp: whether this NIC is designated to run at high temperature.
+ * @host_interrupt_operation_mode: device needs host interrupt operation
+ * mode set
*
* We enable the driver to be backward compatible wrt. hardware features.
* API differences in uCode shouldn't be handled here but through TLVs
enum iwl_led_mode led_mode;
const bool rx_with_siso_diversity;
const bool internal_wimax_coex;
+ const bool host_interrupt_operation_mode;
bool high_temp;
};
extern const struct iwl_cfg iwl3160_2n_cfg;
extern const struct iwl_cfg iwl3160_n_cfg;
extern const struct iwl_cfg iwl7265_2ac_cfg;
+extern const struct iwl_cfg iwl7265_2n_cfg;
+extern const struct iwl_cfg iwl7265_n_cfg;
#endif /* CONFIG_IWLMVM */
#endif /* __IWL_CONFIG_H__ */
* the CSR_INT_COALESCING is an 8 bit register in 32-usec unit
*
* default interrupt coalescing timer is 64 x 32 = 2048 usecs
- * default interrupt coalescing calibration timer is 16 x 32 = 512 usecs
*/
#define IWL_HOST_INT_TIMEOUT_MAX (0xFF)
#define IWL_HOST_INT_TIMEOUT_DEF (0x40)
#define IWL_HOST_INT_TIMEOUT_MIN (0x0)
-#define IWL_HOST_INT_CALIB_TIMEOUT_MAX (0xFF)
-#define IWL_HOST_INT_CALIB_TIMEOUT_DEF (0x10)
-#define IWL_HOST_INT_CALIB_TIMEOUT_MIN (0x0)
+#define IWL_HOST_INT_OPER_MODE BIT(31)
/*****************************************************************************
* 7000/3000 series SHR DTS addresses *
BT_VALID_LUT |
BT_VALID_WIFI_RX_SW_PRIO_BOOST |
BT_VALID_WIFI_TX_SW_PRIO_BOOST |
- BT_VALID_MULTI_PRIO_LUT |
BT_VALID_CORUN_LUT_20 |
BT_VALID_CORUN_LUT_40 |
BT_VALID_ANT_ISOLATION |
sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id],
lockdep_is_held(&mvm->mutex));
+
+ /* This can happen if the station has just been removed */
+ if (IS_ERR_OR_NULL(sta))
+ return;
+
mvmsta = (void *)sta->drv_priv;
data->num_bss_ifaces++;
/* new API returns next, not last-used seqno */
if (mvm->fw->ucode_capa.flags &
IWL_UCODE_TLV_FLAGS_D3_CONTINUITY_API)
- err -= 0x10;
+ err = (u16) (err - 0x10);
}
iwl_free_resp(&cmd);
if (gtkdata.unhandled_cipher)
return false;
if (!gtkdata.num_keys)
- return true;
+ goto out;
if (!gtkdata.last_gtk)
return false;
(void *)&replay_ctr, GFP_KERNEL);
}
+out:
mvmvif->seqno_valid = true;
/* +0x10 because the set API expects next-to-use, not last-used */
mvmvif->seqno = le16_to_cpu(status->non_qos_seq_ctr) + 0x10;
if (sscanf(buf, "%d %d", &sta_id, &drain) != 2)
return -EINVAL;
+ if (sta_id < 0 || sta_id >= IWL_MVM_STATION_COUNT)
+ return -EINVAL;
+ if (drain < 0 || drain > 1)
+ return -EINVAL;
mutex_lock(&mvm->mutex);
* P2P Device discoverability, while there are other higher priority
* events in the system).
*/
- if (WARN_ONCE(!le32_to_cpu(notif->status),
- "Failed to schedule time event\n")) {
+ if (!le32_to_cpu(notif->status)) {
+ bool start = le32_to_cpu(notif->action) &
+ TE_V2_NOTIF_HOST_EVENT_START;
+ IWL_WARN(mvm, "Time Event %s notification failure\n",
+ start ? "start" : "end");
if (iwl_mvm_te_check_disconnect(mvm, te_data->vif, NULL)) {
iwl_mvm_te_clear_data(mvm, te_data);
return;
/* 7265 Series */
{IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5110, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x5310, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x5210, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x5012, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x500A, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x5200, iwl7265_2n_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5002, iwl7265_n_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x5202, iwl7265_n_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x9010, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5020, iwl7265_2n_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x502A, iwl7265_2n_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5420, iwl7265_2n_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5090, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x5290, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5490, iwl7265_2ac_cfg)},
#endif /* CONFIG_IWLMVM */
{0}
CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW);
}
+static inline void iwl_nic_error(struct iwl_trans *trans)
+{
+ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+
+ set_bit(STATUS_FW_ERROR, &trans_pcie->status);
+ iwl_op_mode_nic_error(trans->op_mode);
+}
+
#endif /* __iwl_trans_int_pcie_h__ */
/* Set interrupt coalescing timer to default (2048 usecs) */
iwl_write8(trans, CSR_INT_COALESCING, IWL_HOST_INT_TIMEOUT_DEF);
+
+ /* W/A for interrupt coalescing bug in 7260 and 3160 */
+ if (trans->cfg->host_interrupt_operation_mode)
+ iwl_set_bit(trans, CSR_INT_COALESCING, IWL_HOST_INT_OPER_MODE);
}
static void iwl_pcie_rx_init_rxb_lists(struct iwl_rxq *rxq)
iwl_pcie_dump_csr(trans);
iwl_dump_fh(trans, NULL);
+ /* set the ERROR bit before we wake up the caller */
set_bit(STATUS_FW_ERROR, &trans_pcie->status);
clear_bit(STATUS_HCMD_ACTIVE, &trans_pcie->status);
wake_up(&trans_pcie->wait_command_queue);
local_bh_disable();
- iwl_op_mode_nic_error(trans->op_mode);
+ iwl_nic_error(trans);
local_bh_enable();
}
spin_lock_irqsave(&trans_pcie->irq_lock, flags);
iwl_pcie_apm_init(trans);
- /* Set interrupt coalescing calibration timer to default (512 usecs) */
- iwl_write8(trans, CSR_INT_COALESCING, IWL_HOST_INT_CALIB_TIMEOUT_DEF);
-
spin_unlock_irqrestore(&trans_pcie->irq_lock, flags);
iwl_pcie_set_pwr(trans, false);
IWL_ERR(trans, "scratch %d = 0x%08x\n", i,
le32_to_cpu(txq->scratchbufs[i].scratch));
- iwl_op_mode_nic_error(trans->op_mode);
+ iwl_nic_error(trans);
}
/*
if (nfreed++ > 0) {
IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n",
idx, q->write_ptr, q->read_ptr);
- iwl_op_mode_nic_error(trans->op_mode);
+ iwl_nic_error(trans);
}
}
get_cmd_string(trans_pcie, cmd->id));
ret = -ETIMEDOUT;
- iwl_op_mode_nic_error(trans->op_mode);
+ iwl_nic_error(trans);
goto cancel;
}
__le16 rt_chbitmask;
} __packed;
+struct hwsim_radiotap_ack_hdr {
+ struct ieee80211_radiotap_header hdr;
+ u8 rt_flags;
+ u8 pad;
+ __le16 rt_channel;
+ __le16 rt_chbitmask;
+} __packed;
+
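The replacement header drops the old rt_rate byte in favour of an explicit pad byte, keeping the two little-endian 16-bit fields on even offsets as radiotap expects. An illustrative compile-time check (not in the driver):

    #include <stddef.h>

    _Static_assert(offsetof(struct hwsim_radiotap_ack_hdr, rt_channel) % 2 == 0,
                   "rt_channel must sit on a 16-bit boundary");
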
/* MAC80211_HWSIM netlink family */
static struct genl_family hwsim_genl_family = {
.id = GENL_ID_GENERATE,
const u8 *addr)
{
struct sk_buff *skb;
- struct hwsim_radiotap_hdr *hdr;
+ struct hwsim_radiotap_ack_hdr *hdr;
u16 flags;
struct ieee80211_hdr *hdr11;
if (skb == NULL)
return;
- hdr = (struct hwsim_radiotap_hdr *) skb_put(skb, sizeof(*hdr));
+ hdr = (struct hwsim_radiotap_ack_hdr *) skb_put(skb, sizeof(*hdr));
hdr->hdr.it_version = PKTHDR_RADIOTAP_VERSION;
hdr->hdr.it_pad = 0;
hdr->hdr.it_len = cpu_to_le16(sizeof(*hdr));
hdr->hdr.it_present = cpu_to_le32((1 << IEEE80211_RADIOTAP_FLAGS) |
(1 << IEEE80211_RADIOTAP_CHANNEL));
hdr->rt_flags = 0;
- hdr->rt_rate = 0;
+ hdr->pad = 0;
hdr->rt_channel = cpu_to_le16(chan->center_freq);
flags = IEEE80211_CHAN_2GHZ;
hdr->rt_chbitmask = cpu_to_le16(flags);
HRTIMER_MODE_REL);
} else if (!info->enable_beacon) {
unsigned int count = 0;
- ieee80211_iterate_active_interfaces(
+ ieee80211_iterate_active_interfaces_atomic(
data->hw, IEEE80211_IFACE_ITER_NORMAL,
mac80211_hwsim_bcn_en_iter, &count);
wiphy_debug(hw->wiphy, " beaconing vifs remaining: %u",
if (bss_desc && bss_desc->ssid.ssid_len &&
(!mwifiex_ssid_cmp(&priv->curr_bss_params.bss_descriptor.
ssid, &bss_desc->ssid))) {
- kfree(bss_desc);
- return 0;
+ ret = 0;
+ goto done;
}
/* Exit Adhoc mode first */
unsigned long rx_ring_ref, unsigned int tx_evtchn,
unsigned int rx_evtchn)
{
+ struct task_struct *task;
int err = -ENOMEM;
- /* Already connected through? */
- if (vif->tx_irq)
- return 0;
+ BUG_ON(vif->tx_irq);
+ BUG_ON(vif->task);
err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
if (err < 0)
}
init_waitqueue_head(&vif->wq);
- vif->task = kthread_create(xenvif_kthread,
- (void *)vif, "%s", vif->dev->name);
- if (IS_ERR(vif->task)) {
+ task = kthread_create(xenvif_kthread,
+ (void *)vif, "%s", vif->dev->name);
+ if (IS_ERR(task)) {
pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
- err = PTR_ERR(vif->task);
+ err = PTR_ERR(task);
goto err_rx_unbind;
}
+ vif->task = task;
+
rtnl_lock();
if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
dev_set_mtu(vif->dev, ETH_DATA_LEN);
if (netif_carrier_ok(vif->dev))
xenvif_carrier_off(vif);
- if (vif->task)
+ if (vif->task) {
kthread_stop(vif->task);
+ vif->task = NULL;
+ }
if (vif->tx_irq) {
if (vif->tx_irq == vif->rx_irq)
}
/* Set up a GSO prefix descriptor, if necessary */
- if ((1 << skb_shinfo(skb)->gso_type) & vif->gso_prefix_mask) {
+ if ((1 << gso_type) & vif->gso_prefix_mask) {
req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
meta = npo->meta + npo->meta_prod++;
meta->gso_type = gso_type;
return 0;
}
-static inline void maybe_pull_tail(struct sk_buff *skb, unsigned int len)
+static inline int maybe_pull_tail(struct sk_buff *skb, unsigned int len,
+ unsigned int max)
{
- if (skb_is_nonlinear(skb) && skb_headlen(skb) < len) {
- /* If we need to pullup then pullup to the max, so we
- * won't need to do it again.
- */
- int target = min_t(int, skb->len, MAX_TCP_HEADER);
- __pskb_pull_tail(skb, target - skb_headlen(skb));
- }
+ if (skb_headlen(skb) >= len)
+ return 0;
+
+ /* If we need to pull up then pull up to the max, so we
+ * won't need to do it again.
+ */
+ if (max > skb->len)
+ max = skb->len;
+
+ if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
+ return -ENOMEM;
+
+ if (skb_headlen(skb) < len)
+ return -EPROTO;
+
+ return 0;
}
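
maybe_pull_tail() now has a three-way contract: 0 when at least len linear bytes are available, -ENOMEM when the pull itself fails, and -EPROTO when the packet is simply too short to contain the requested header. Callers bail out on any negative value, as the hunks below do:

    err = maybe_pull_tail(skb, off + sizeof(struct tcphdr), MAX_IP_HDR_LEN);
    if (err < 0)
        goto out;    /* allocation failure or short packet */
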
+/* This value should be large enough to cover a tagged ethernet header plus
+ * maximally sized IP and TCP or UDP headers.
+ */
+#define MAX_IP_HDR_LEN 128
+
static int checksum_setup_ip(struct xenvif *vif, struct sk_buff *skb,
int recalculate_partial_csum)
{
- struct iphdr *iph = (void *)skb->data;
- unsigned int header_size;
unsigned int off;
- int err = -EPROTO;
+ bool fragment;
+ int err;
+
+ fragment = false;
- off = sizeof(struct iphdr);
+ err = maybe_pull_tail(skb,
+ sizeof(struct iphdr),
+ MAX_IP_HDR_LEN);
+ if (err < 0)
+ goto out;
+
+ if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
+ fragment = true;
- header_size = skb->network_header + off + MAX_IPOPTLEN;
- maybe_pull_tail(skb, header_size);
+ off = ip_hdrlen(skb);
- off = iph->ihl * 4;
+ err = -EPROTO;
+
+ if (fragment)
+ goto out;
- switch (iph->protocol) {
+ switch (ip_hdr(skb)->protocol) {
case IPPROTO_TCP:
+ err = maybe_pull_tail(skb,
+ off + sizeof(struct tcphdr),
+ MAX_IP_HDR_LEN);
+ if (err < 0)
+ goto out;
+
if (!skb_partial_csum_set(skb, off,
offsetof(struct tcphdr, check)))
goto out;
- if (recalculate_partial_csum) {
- struct tcphdr *tcph = tcp_hdr(skb);
-
- header_size = skb->network_header +
- off +
- sizeof(struct tcphdr);
- maybe_pull_tail(skb, header_size);
-
- tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
- skb->len - off,
- IPPROTO_TCP, 0);
- }
+ if (recalculate_partial_csum)
+ tcp_hdr(skb)->check =
+ ~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+ ip_hdr(skb)->daddr,
+ skb->len - off,
+ IPPROTO_TCP, 0);
break;
case IPPROTO_UDP:
+ err = maybe_pull_tail(skb,
+ off + sizeof(struct udphdr),
+ MAX_IP_HDR_LEN);
+ if (err < 0)
+ goto out;
+
if (!skb_partial_csum_set(skb, off,
offsetof(struct udphdr, check)))
goto out;
- if (recalculate_partial_csum) {
- struct udphdr *udph = udp_hdr(skb);
-
- header_size = skb->network_header +
- off +
- sizeof(struct udphdr);
- maybe_pull_tail(skb, header_size);
-
- udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
- skb->len - off,
- IPPROTO_UDP, 0);
- }
+ if (recalculate_partial_csum)
+ udp_hdr(skb)->check =
+ ~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+ ip_hdr(skb)->daddr,
+ skb->len - off,
+ IPPROTO_UDP, 0);
break;
default:
- if (net_ratelimit())
- netdev_err(vif->dev,
- "Attempting to checksum a non-TCP/UDP packet, "
- "dropping a protocol %d packet\n",
- iph->protocol);
goto out;
}
return err;
}
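
skb_partial_csum_set() records where the stack or device must finish the checksum, while csum_tcpudp_magic() seeds it with the IPv4 pseudo-header (addresses, protocol, length). For reference, the pseudo-header fold in plain C, with inputs in host byte order for simplicity (not the kernel helper itself):

    #include <stdint.h>

    static uint16_t csum_fold32(uint32_t sum)
    {
        while (sum >> 16)                     /* fold carries back in */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }

    static uint16_t pseudo_hdr_csum(uint32_t saddr, uint32_t daddr,
                                    uint8_t proto, uint16_t len)
    {
        uint32_t sum = 0;

        sum += saddr >> 16;
        sum += saddr & 0xffff;
        sum += daddr >> 16;
        sum += daddr & 0xffff;
        sum += proto;
        sum += len;
        return csum_fold32(sum);
    }
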
+/* This value should be large enough to cover a tagged ethernet header plus
+ * an IPv6 header, all options, and a maximal TCP or UDP header.
+ */
+#define MAX_IPV6_HDR_LEN 256
+
+#define OPT_HDR(type, skb, off) \
+ (type *)(skb_network_header(skb) + (off))
+
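OPT_HDR() recomputes the option pointer from the network header on every use because maybe_pull_tail() may reallocate the skb data, invalidating any earlier pointer. The walk itself is the usual IPv6 chain: read nexthdr, advance by that header's length, stop at the first upper-layer header. Over a flat buffer it reduces to the following sketch (not skb-aware; 0/43/60 are IPPROTO_HOPOPTS/ROUTING/DSTOPTS):

    #include <stddef.h>
    #include <stdint.h>

    struct opt_hdr { uint8_t nexthdr; uint8_t hdrlen; };

    static size_t skip_ext_headers(const uint8_t *pkt, size_t len,
                                   uint8_t *nexthdr)
    {
        size_t off = 40;    /* sizeof(struct ipv6hdr) */

        while (off + sizeof(struct opt_hdr) <= len) {
            const struct opt_hdr *hp = (const void *)(pkt + off);

            if (*nexthdr != 0 && *nexthdr != 43 && *nexthdr != 60)
                break;                       /* upper-layer header reached */
            *nexthdr = hp->nexthdr;
            off += (hp->hdrlen + 1) * 8;     /* same math as ipv6_optlen() */
        }
        return off;    /* offset of the transport header */
    }
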
static int checksum_setup_ipv6(struct xenvif *vif, struct sk_buff *skb,
int recalculate_partial_csum)
{
- int err = -EPROTO;
- struct ipv6hdr *ipv6h = (void *)skb->data;
+ int err;
u8 nexthdr;
- unsigned int header_size;
unsigned int off;
+ unsigned int len;
bool fragment;
bool done;
+ fragment = false;
done = false;
off = sizeof(struct ipv6hdr);
- header_size = skb->network_header + off;
- maybe_pull_tail(skb, header_size);
+ err = maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
- nexthdr = ipv6h->nexthdr;
+ nexthdr = ipv6_hdr(skb)->nexthdr;
- while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len)) &&
- !done) {
+ len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
+ while (off <= len && !done) {
switch (nexthdr) {
case IPPROTO_DSTOPTS:
case IPPROTO_HOPOPTS:
case IPPROTO_ROUTING: {
- struct ipv6_opt_hdr *hp = (void *)(skb->data + off);
+ struct ipv6_opt_hdr *hp;
- header_size = skb->network_header +
- off +
- sizeof(struct ipv6_opt_hdr);
- maybe_pull_tail(skb, header_size);
+ err = maybe_pull_tail(skb,
+ off +
+ sizeof(struct ipv6_opt_hdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
+ hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
nexthdr = hp->nexthdr;
off += ipv6_optlen(hp);
break;
}
case IPPROTO_AH: {
- struct ip_auth_hdr *hp = (void *)(skb->data + off);
+ struct ip_auth_hdr *hp;
+
+ err = maybe_pull_tail(skb,
+ off +
+ sizeof(struct ip_auth_hdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
- header_size = skb->network_header +
- off +
- sizeof(struct ip_auth_hdr);
- maybe_pull_tail(skb, header_size);
+ hp = OPT_HDR(struct ip_auth_hdr, skb, off);
+ nexthdr = hp->nexthdr;
+ off += ipv6_authlen(hp);
+ break;
+ }
+ case IPPROTO_FRAGMENT: {
+ struct frag_hdr *hp;
+
+ err = maybe_pull_tail(skb,
+ off +
+ sizeof(struct frag_hdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
+
+ hp = OPT_HDR(struct frag_hdr, skb, off);
+
+ if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
+ fragment = true;
nexthdr = hp->nexthdr;
- off += (hp->hdrlen+2)<<2;
+ off += sizeof(struct frag_hdr);
break;
}
- case IPPROTO_FRAGMENT:
- fragment = true;
- /* fall through */
default:
done = true;
break;
}
}
- if (!done) {
- if (net_ratelimit())
- netdev_err(vif->dev, "Failed to parse packet header\n");
- goto out;
- }
+ err = -EPROTO;
- if (fragment) {
- if (net_ratelimit())
- netdev_err(vif->dev, "Packet is a fragment!\n");
+ if (!done || fragment)
goto out;
- }
switch (nexthdr) {
case IPPROTO_TCP:
+ err = maybe_pull_tail(skb,
+ off + sizeof(struct tcphdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
+
if (!skb_partial_csum_set(skb, off,
offsetof(struct tcphdr, check)))
goto out;
- if (recalculate_partial_csum) {
- struct tcphdr *tcph = tcp_hdr(skb);
-
- header_size = skb->network_header +
- off +
- sizeof(struct tcphdr);
- maybe_pull_tail(skb, header_size);
-
- tcph->check = ~csum_ipv6_magic(&ipv6h->saddr,
- &ipv6h->daddr,
- skb->len - off,
- IPPROTO_TCP, 0);
- }
+ if (recalculate_partial_csum)
+ tcp_hdr(skb)->check =
+ ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+ &ipv6_hdr(skb)->daddr,
+ skb->len - off,
+ IPPROTO_TCP, 0);
break;
case IPPROTO_UDP:
+ err = maybe_pull_tail(skb,
+ off + sizeof(struct udphdr),
+ MAX_IPV6_HDR_LEN);
+ if (err < 0)
+ goto out;
+
if (!skb_partial_csum_set(skb, off,
offsetof(struct udphdr, check)))
goto out;
- if (recalculate_partial_csum) {
- struct udphdr *udph = udp_hdr(skb);
-
- header_size = skb->network_header +
- off +
- sizeof(struct udphdr);
- maybe_pull_tail(skb, header_size);
-
- udph->check = ~csum_ipv6_magic(&ipv6h->saddr,
- &ipv6h->daddr,
- skb->len - off,
- IPPROTO_UDP, 0);
- }
+ if (recalculate_partial_csum)
+ udp_hdr(skb)->check =
+ ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+ &ipv6_hdr(skb)->daddr,
+ skb->len - off,
+ IPPROTO_UDP, 0);
break;
default:
- if (net_ratelimit())
- netdev_err(vif->dev,
- "Attempting to checksum a non-TCP/UDP packet, "
- "dropping a protocol %d packet\n",
- nexthdr);
goto out;
}
return false;
}
-static unsigned xenvif_tx_build_gops(struct xenvif *vif)
+static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
{
struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
struct sk_buff *skb;
int ret;
while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
- < MAX_PENDING_REQS)) {
+ < MAX_PENDING_REQS) &&
+ (skb_queue_len(&vif->tx_queue) < budget)) {
struct xen_netif_tx_request txreq;
struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
struct page *page;
continue;
}
- RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do);
+ work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
if (!work_to_do)
break;
}
-static int xenvif_tx_submit(struct xenvif *vif, int budget)
+static int xenvif_tx_submit(struct xenvif *vif)
{
struct gnttab_copy *gop = vif->tx_copy_ops;
struct sk_buff *skb;
int work_done = 0;
- while (work_done < budget &&
- (skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+ while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
struct xen_netif_tx_request *txp;
u16 pending_idx;
unsigned data_len;
if (unlikely(!tx_work_todo(vif)))
return 0;
- nr_gops = xenvif_tx_build_gops(vif);
+ nr_gops = xenvif_tx_build_gops(vif, budget);
if (nr_gops == 0)
return 0;
gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
- work_done = xenvif_tx_submit(vif, nr_gops);
+ work_done = xenvif_tx_submit(vif);
return work_done;
}
*value = 0;
break;
+ case PCI_INTERRUPT_LINE:
+ /* LINE PIN MIN_GNT MAX_LAT */
+ *value = 0;
+ break;
+
default:
*value = 0xffffffff;
return PCIBIOS_BAD_REGISTER_NUMBER;
#include <linux/cpu.h>
#include <linux/pm_runtime.h>
#include <linux/suspend.h>
+#include <linux/kexec.h>
#include "pci.h"
struct pci_dynid {
int error, node;
struct drv_dev_and_id ddi = { drv, dev, id };
- /* Execute driver initialization on node where the device's
- bus is attached to. This way the driver likely allocates
- its local memory on the right node without any need to
- change it. */
+ /*
+ * Execute driver initialization on node where the device is
+ * attached. This way the driver likely allocates its local memory
+ * on the right node.
+ */
node = dev_to_node(&dev->dev);
- if (node >= 0) {
+
+ /*
+ * On NUMA systems, we are likely to call a PF probe function using
+ * work_on_cpu(). If that probe calls pci_enable_sriov() (which
+ * adds the VF devices via pci_bus_add_device()), we may re-enter
+ * this function to call the VF probe function. Calling
+ * work_on_cpu() again will cause a lockdep warning. Since VFs are
+ * always on the same node as the PF, we can work around this by
+ * avoiding work_on_cpu() when we're already on the correct node.
+ *
+ * Preemption is enabled, so it's theoretically unsafe to use
+ * numa_node_id(), but even if we run the probe function on the
+ * wrong node, it should be functionally correct.
+ */
+ if (node >= 0 && node != numa_node_id()) {
int cpu;
get_online_cpus();
put_online_cpus();
} else
error = local_pci_probe(&ddi);
+
return error;
}
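
The net effect of the check above: work_on_cpu() is used only when the probing thread is not already on the device's node, which avoids the lockdep-visible recursion when a PF probe spawns VF probes and saves a migration in the common case. Condensed restatement (the node-local cpu selection between get/put_online_cpus() is elided above):

    if (node >= 0 && node != numa_node_id())
        error = work_on_cpu(cpu, local_pci_probe, &ddi);  /* node-local cpu */
    else
        error = local_pci_probe(&ddi);  /* already local, or no node */
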
pci_msi_shutdown(pci_dev);
pci_msix_shutdown(pci_dev);
+#ifdef CONFIG_KEXEC
/*
- * Turn off Bus Master bit on the device to tell it to not
- * continue to do DMA. Don't touch devices in D3cold or unknown states.
+ * If this is a kexec reboot, turn off Bus Master bit on the
+ * device to tell it to not continue to do DMA. Don't touch
+ * devices in D3cold or unknown states.
+ * If it is not a kexec reboot, the firmware will hit the PCI
+ * devices with a big hammer and stop their DMA anyway.
*/
- if (pci_dev->current_state <= PCI_D3hot)
+ if (kexec_in_progress && (pci_dev->current_state <= PCI_D3hot))
pci_clear_master(pci_dev);
+#endif
}
#ifdef CONFIG_PM
return 0;
}
+bool pci_device_is_present(struct pci_dev *pdev)
+{
+ u32 v;
+
+ return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0);
+}
+EXPORT_SYMBOL_GPL(pci_device_is_present);
+
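The new helper gives drivers a cheap liveness test: the vendor-ID config read fails once the device has gone away. An illustrative caller (not part of this patch) guarding teardown against surprise removal:

    if (!pci_device_is_present(pdev))
        return;    /* device hot-unplugged; skip register access */
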
#define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
static DEFINE_SPINLOCK(resource_alignment_lock);
if (dev->is_added) {
pci_proc_detach_device(dev);
pci_remove_sysfs_dev_files(dev);
- device_del(&dev->dev);
+ device_release_driver(&dev->dev);
dev->is_added = 0;
}
static void pci_destroy_dev(struct pci_dev *dev)
{
+ device_del(&dev->dev);
+
down_write(&pci_bus_sem);
list_del(&dev->bus_list);
up_write(&pci_bus_sem);
config OMAP_USB2
tristate "OMAP USB2 PHY Driver"
depends on ARCH_OMAP2PLUS
+ depends on USB_PHY
select GENERIC_PHY
- select USB_PHY
select OMAP_CONTROL_USB
help
Enable this to support the transceiver that is part of SOC. This
config TWL4030_USB
tristate "TWL4030 USB Transceiver Driver"
depends on TWL4030_CORE && REGULATOR_TWL4030 && USB_MUSB_OMAP2PLUS
+ depends on USB_PHY
select GENERIC_PHY
- select USB_PHY
help
Enable this to support the USB OTG transceiver on TWL4030
family chips (including the TWL5030 and TPS659x0 devices).
int id;
struct phy *phy;
- if (!dev) {
- dev_WARN(dev, "no device provided for PHY\n");
- ret = -EINVAL;
- goto err0;
- }
+ if (WARN_ON(!dev))
+ return ERR_PTR(-EINVAL);
phy = kzalloc(sizeof(*phy), GFP_KERNEL);
- if (!phy) {
- ret = -ENOMEM;
- goto err0;
- }
+ if (!phy)
+ return ERR_PTR(-ENOMEM);
id = ida_simple_get(&phy_ida, 0, 0, GFP_KERNEL);
if (id < 0) {
dev_err(dev, "unable to get id\n");
ret = id;
- goto err0;
+ goto free_phy;
}
device_initialize(&phy->dev);
ret = dev_set_name(&phy->dev, "phy-%s.%d", dev_name(dev), id);
if (ret)
- goto err1;
+ goto put_dev;
ret = device_add(&phy->dev);
if (ret)
- goto err1;
+ goto put_dev;
if (pm_runtime_enabled(dev)) {
pm_runtime_enable(&phy->dev);
return phy;
-err1:
- ida_remove(&phy_ida, phy->id);
+put_dev:
put_device(&phy->dev);
+ ida_remove(&phy_ida, phy->id);
+free_phy:
kfree(phy);
-
-err0:
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(phy_create);
#define PINMUX_GPIO(_pin) \
[GPIO_##_pin] = { \
.pin = (u16)-1, \
- .name = __stringify(name), \
+ .name = __stringify(GPIO_##_pin), \
.enum_id = _pin##_DATA, \
}
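
The bug being fixed: __stringify(name) produced the literal string "name" for every pin, because there is no macro parameter called name. Stringifying the pasted token yields the pin's own identifier. The preprocessor behaviour in isolation:

    #define __stringify_1(x)    #x
    #define __stringify(x)      __stringify_1(x)

    #define NAME_OF(_pin)       __stringify(GPIO_##_pin)

    /* NAME_OF(PA3) expands to "GPIO_PA3"; the old code yielded "name" */
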
default:
return -EINVAL;
}
+ ret <<= ffs(mask) - 1;
val = ret & mask;
- val <<= ffs(mask) - 1;
return as3722_update_bits(as3722, reg, mask, val);
}
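
Order matters here: the raw value must be shifted into the field's position before masking, otherwise any value wider than the mask's low bits is silently truncated. Worked through with an illustrative two-bit field at bits 5:4 (mask 0x30, so ffs(mask) - 1 = 4):

    /* ret = 2, mask = 0x30 */
    ret <<= 4;          /* ret = 0x20: value placed at bits 5:4 */
    val = ret & 0x30;   /* val = 0x20: correct field value */

    /* the old order lost the value entirely:
     * val = (2 & 0x30) = 0, then 0 << 4 = 0
     */
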
return "";
}
+static bool have_full_constraints(void)
+{
+ return has_full_constraints || of_have_populated_dt();
+}
+
/**
* of_get_regulator - get a regulator device node based on supply name
* @dev: Device pointer for the consumer (of regulator) device
* Assume that a regulator is physically present and enabled
* even if it isn't hooked up and just provide a dummy.
*/
- if (has_full_constraints && allow_dummy) {
+ if (have_full_constraints() && allow_dummy) {
pr_warn("%s supply %s not found, using dummy regulator\n",
devname, id);
if (error)
ret = error;
} else {
- if (!has_full_constraints)
+ if (!have_full_constraints())
goto unlock;
if (!ops->disable)
goto unlock;
if (!enabled)
goto unlock;
- if (has_full_constraints) {
+ if (have_full_constraints()) {
/* We log since this may kill the system if it
* goes wrong. */
rdev_info(rdev, "disabling\n");
#define PFUZE100_DEVICEID 0x0
#define PFUZE100_REVID 0x3
-#define PFUZE100_FABID 0x3
+#define PFUZE100_FABID 0x4
#define PFUZE100_SW1ABVOL 0x20
#define PFUZE100_SW1CVOL 0x2e
platform_set_drvdata(pdev, s2mps11);
config.dev = &pdev->dev;
- config.regmap = iodev->regmap;
+ config.regmap = iodev->regmap_pmic;
config.driver_data = s2mps11;
for (i = 0; i < S2MPS11_REGULATOR_MAX; i++) {
if (!reg_np) {
config.dev = s5m8767->dev;
config.init_data = pdata->regulators[i].initdata;
config.driver_data = s5m8767;
- config.regmap = iodev->regmap;
+ config.regmap = iodev->regmap_pmic;
config.of_node = pdata->regulators[i].reg_node;
rdev[i] = devm_regulator_register(&pdev->dev, ®ulators[id],
at91_alarm_year = tm.tm_year;
+ tm.tm_mon = alrm->time.tm_mon;
+ tm.tm_mday = alrm->time.tm_mday;
tm.tm_hour = alrm->time.tm_hour;
tm.tm_min = alrm->time.tm_min;
tm.tm_sec = alrm->time.tm_sec;
#include <linux/mfd/samsung/irq.h>
#include <linux/mfd/samsung/rtc.h>
+/*
+ * Maximum number of retries for checking changes in the UDR field
+ * of the SEC_RTC_UDR_CON register (to avoid a possible endless loop).
+ *
+ * After writing to RTC registers (setting time or alarm), read the UDR
+ * field in the SEC_RTC_UDR_CON register; UDR is auto-cleared once the
+ * data has been transferred.
+ */
+#define UDR_READ_RETRY_CNT 5
+
struct s5m_rtc_info {
struct device *dev;
struct sec_pmic_dev *s5m87xx;
- struct regmap *rtc;
+ struct regmap *regmap;
struct rtc_device *rtc_dev;
int irq;
int device_type;
}
}
+/*
+ * Read the RTC_UDR_CON register and wait until the UDR field is cleared.
+ * This indicates that the time/alarm update has completed.
+ */
+static inline int s5m8767_wait_for_udr_update(struct s5m_rtc_info *info)
+{
+ int ret, retry = UDR_READ_RETRY_CNT;
+ unsigned int data;
+
+ do {
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
+ } while (--retry && (data & RTC_UDR_MASK) && !ret);
+
+ if (!retry)
+ dev_err(info->dev, "waiting for UDR update, reached max number of retries\n");
+
+ return ret;
+}
+
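s5m8767_wait_for_udr_update() replaces the callers' unbounded polling with a retry-capped loop. The shape generalizes to any auto-clearing hardware flag; a sketch with a placeholder accessor (read_reg() and the return policy are illustrative, not the driver's):

    static int wait_flag_clear(int (*read_reg)(unsigned int *),
                               unsigned int mask, int retries)
    {
        unsigned int data = mask;
        int ret;

        do {
            ret = read_reg(&data);
        } while (--retries && (data & mask) && !ret);

        if (ret)
            return ret;               /* an I/O error takes precedence */
        return retries ? 0 : -1;      /* -1: flag never cleared */
    }
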
static inline int s5m8767_rtc_set_time_reg(struct s5m_rtc_info *info)
{
int ret;
unsigned int data;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
if (ret < 0) {
dev_err(info->dev, "failed to read update reg(%d)\n", ret);
return ret;
data |= RTC_TIME_EN_MASK;
data |= RTC_UDR_MASK;
- ret = regmap_write(info->rtc, SEC_RTC_UDR_CON, data);
+ ret = regmap_write(info->regmap, SEC_RTC_UDR_CON, data);
if (ret < 0) {
dev_err(info->dev, "failed to write update reg(%d)\n", ret);
return ret;
}
- do {
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
- } while ((data & RTC_UDR_MASK) && !ret);
+ ret = s5m8767_wait_for_udr_update(info);
return ret;
}
int ret;
unsigned int data;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &data);
if (ret < 0) {
dev_err(info->dev, "%s: fail to read update reg(%d)\n",
__func__, ret);
data &= ~RTC_TIME_EN_MASK;
data |= RTC_UDR_MASK;
- ret = regmap_write(info->rtc, SEC_RTC_UDR_CON, data);
+ ret = regmap_write(info->regmap, SEC_RTC_UDR_CON, data);
if (ret < 0) {
dev_err(info->dev, "%s: fail to write update reg(%d)\n",
__func__, ret);
return ret;
}
- do {
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &data);
- } while ((data & RTC_UDR_MASK) && !ret);
+ ret = s5m8767_wait_for_udr_update(info);
return ret;
}
u8 data[8];
int ret;
- ret = regmap_bulk_read(info->rtc, SEC_RTC_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_RTC_SEC, data, 8);
if (ret < 0)
return ret;
1900 + tm->tm_year, 1 + tm->tm_mon, tm->tm_mday,
tm->tm_hour, tm->tm_min, tm->tm_sec, tm->tm_wday);
- ret = regmap_raw_write(info->rtc, SEC_RTC_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_RTC_SEC, data, 8);
if (ret < 0)
return ret;
unsigned int val;
int ret, i;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
s5m8763_data_to_tm(data, &alrm->time);
- ret = regmap_read(info->rtc, SEC_ALARM0_CONF, &val);
+ ret = regmap_read(info->regmap, SEC_ALARM0_CONF, &val);
if (ret < 0)
return ret;
alrm->enabled = !!val;
- ret = regmap_read(info->rtc, SEC_RTC_STATUS, &val);
+ ret = regmap_read(info->regmap, SEC_RTC_STATUS, &val);
if (ret < 0)
return ret;
}
alrm->pending = 0;
- ret = regmap_read(info->rtc, SEC_RTC_STATUS, &val);
+ ret = regmap_read(info->regmap, SEC_RTC_STATUS, &val);
if (ret < 0)
return ret;
break;
int ret, i;
struct rtc_time tm;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
- ret = regmap_write(info->rtc, SEC_ALARM0_CONF, 0);
+ ret = regmap_write(info->regmap, SEC_ALARM0_CONF, 0);
break;
case S5M8767X:
for (i = 0; i < 7; i++)
data[i] &= ~ALARM_ENABLE_MASK;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
u8 alarm0_conf;
struct rtc_time tm;
- ret = regmap_bulk_read(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_bulk_read(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
switch (info->device_type) {
case S5M8763X:
alarm0_conf = 0x77;
- ret = regmap_write(info->rtc, SEC_ALARM0_CONF, alarm0_conf);
+ ret = regmap_write(info->regmap, SEC_ALARM0_CONF, alarm0_conf);
break;
case S5M8767X:
if (data[RTC_YEAR1] & 0x7f)
data[RTC_YEAR1] |= ALARM_ENABLE_MASK;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
ret = s5m8767_rtc_set_alarm_reg(info);
if (ret < 0)
return ret;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_SEC, data, 8);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_SEC, data, 8);
if (ret < 0)
return ret;
static void s5m_rtc_enable_wtsr(struct s5m_rtc_info *info, bool enable)
{
int ret;
- ret = regmap_update_bits(info->rtc, SEC_WTSR_SMPL_CNTL,
+ ret = regmap_update_bits(info->regmap, SEC_WTSR_SMPL_CNTL,
WTSR_ENABLE_MASK,
enable ? WTSR_ENABLE_MASK : 0);
if (ret < 0)
static void s5m_rtc_enable_smpl(struct s5m_rtc_info *info, bool enable)
{
int ret;
- ret = regmap_update_bits(info->rtc, SEC_WTSR_SMPL_CNTL,
+ ret = regmap_update_bits(info->regmap, SEC_WTSR_SMPL_CNTL,
SMPL_ENABLE_MASK,
enable ? SMPL_ENABLE_MASK : 0);
if (ret < 0)
int ret;
struct rtc_time tm;
- ret = regmap_read(info->rtc, SEC_RTC_UDR_CON, &tp_read);
+ ret = regmap_read(info->regmap, SEC_RTC_UDR_CON, &tp_read);
if (ret < 0) {
dev_err(info->dev, "%s: fail to read control reg(%d)\n",
__func__, ret);
data[1] = (0 << BCD_EN_SHIFT) | (1 << MODEL24_SHIFT);
info->rtc_24hr_mode = 1;
- ret = regmap_raw_write(info->rtc, SEC_ALARM0_CONF, data, 2);
+ ret = regmap_raw_write(info->regmap, SEC_ALARM0_CONF, data, 2);
if (ret < 0) {
dev_err(info->dev, "%s: fail to write control reg(%d)\n",
__func__, ret);
ret = s5m_rtc_set_time(info->dev, &tm);
}
- ret = regmap_update_bits(info->rtc, SEC_RTC_UDR_CON,
+ ret = regmap_update_bits(info->regmap, SEC_RTC_UDR_CON,
RTC_TCON_MASK, tp_read | RTC_TCON_MASK);
if (ret < 0)
dev_err(info->dev, "%s: fail to update TCON reg(%d)\n",
info->dev = &pdev->dev;
info->s5m87xx = s5m87xx;
- info->rtc = s5m87xx->rtc;
+ info->regmap = s5m87xx->regmap_rtc;
info->device_type = s5m87xx->device_type;
info->wtsr_smpl = s5m87xx->wtsr_smpl;
switch (pdata->device_type) {
case S5M8763X:
- info->irq = s5m87xx->irq_base + S5M8763_IRQ_ALARM0;
+ info->irq = regmap_irq_get_virq(s5m87xx->irq_data,
+ S5M8763_IRQ_ALARM0);
break;
case S5M8767X:
- info->irq = s5m87xx->irq_base + S5M8767_IRQ_RTCA1;
+ info->irq = regmap_irq_get_virq(s5m87xx->irq_data,
+ S5M8767_IRQ_RTCA1);
break;
default:
if (info->wtsr_smpl) {
for (i = 0; i < 3; i++) {
s5m_rtc_enable_wtsr(info, false);
- regmap_read(info->rtc, SEC_WTSR_SMPL_CNTL, &val);
+ regmap_read(info->regmap, SEC_WTSR_SMPL_CNTL, &val);
pr_debug("%s: WTSR_SMPL reg(0x%02x)\n", __func__, val);
if (val & WTSR_ENABLE_MASK)
pr_emerg("%s: fail to disable WTSR\n",
s5m_rtc_enable_smpl(info, false);
}
+static int s5m_rtc_resume(struct device *dev)
+{
+ struct s5m_rtc_info *info = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (device_may_wakeup(dev))
+ ret = disable_irq_wake(info->irq);
+
+ return ret;
+}
+
+static int s5m_rtc_suspend(struct device *dev)
+{
+ struct s5m_rtc_info *info = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (device_may_wakeup(dev))
+ ret = enable_irq_wake(info->irq);
+
+ return ret;
+}
+
+static SIMPLE_DEV_PM_OPS(s5m_rtc_pm_ops, s5m_rtc_suspend, s5m_rtc_resume);
+
static const struct platform_device_id s5m_rtc_id[] = {
{ "s5m-rtc", 0 },
};
.driver = {
.name = "s5m-rtc",
.owner = THIS_MODULE,
+ .pm = &s5m_rtc_pm_ops,
},
.probe = s5m_rtc_probe,
.shutdown = s5m_rtc_shutdown,
{
if (block->gdp) {
del_gendisk(block->gdp);
- block->gdp->queue = NULL;
block->gdp->private_data = NULL;
put_disk(block->gdp);
block->gdp = NULL;
u8 _reserved5[4096 - 112]; /* 112-4095 */
} __packed __aligned(PAGE_SIZE);
-static __initdata struct init_sccb early_event_mask_sccb __aligned(PAGE_SIZE);
static __initdata struct read_info_sccb early_read_info_sccb;
static __initdata char sccb_early[PAGE_SIZE] __aligned(PAGE_SIZE);
static unsigned long sclp_hsa_size;
bool __init sclp_has_linemode(void)
{
- struct init_sccb *sccb = &early_event_mask_sccb;
+ struct init_sccb *sccb = (void *) &sccb_early;
if (sccb->header.response_code != 0x20)
return 0;
bool __init sclp_has_vt220(void)
{
- struct init_sccb *sccb = &early_event_mask_sccb;
+ struct init_sccb *sccb = (void *) &sccb_early;
if (sccb->header.response_code != 0x20)
return 0;
release_firmware(fw);
}
- return ret;
+ return ret < 0 ? ret : 0;
}
EXPORT_SYMBOL_GPL(comedi_load_firmware);
BOARD_ADLINK_PCI7296,
BOARD_CB_PCIDIO24,
BOARD_CB_PCIDIO24H,
- BOARD_CB_PCIDIO48H,
+ BOARD_CB_PCIDIO48H_OLD,
+ BOARD_CB_PCIDIO48H_NEW,
BOARD_CB_PCIDIO96H,
BOARD_NI_PCIDIO96,
BOARD_NI_PCIDIO96B,
.dio_badr = 2,
.n_8255 = 1,
},
- [BOARD_CB_PCIDIO48H] = {
+ [BOARD_CB_PCIDIO48H_OLD] = {
.name = "cb_pci-dio48h",
.dio_badr = 1,
.n_8255 = 2,
},
+ [BOARD_CB_PCIDIO48H_NEW] = {
+ .name = "cb_pci-dio48h",
+ .dio_badr = 2,
+ .n_8255 = 2,
+ },
[BOARD_CB_PCIDIO96H] = {
.name = "cb_pci-dio96h",
.dio_badr = 2,
{ PCI_VDEVICE(ADLINK, 0x7296), BOARD_ADLINK_PCI7296 },
{ PCI_VDEVICE(CB, 0x0028), BOARD_CB_PCIDIO24 },
{ PCI_VDEVICE(CB, 0x0014), BOARD_CB_PCIDIO24H },
- { PCI_VDEVICE(CB, 0x000b), BOARD_CB_PCIDIO48H },
+ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CB, 0x000b, 0x0000, 0x0000),
+ .driver_data = BOARD_CB_PCIDIO48H_OLD },
+ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CB, 0x000b, PCI_VENDOR_ID_CB, 0x000b),
+ .driver_data = BOARD_CB_PCIDIO48H_NEW },
{ PCI_VDEVICE(CB, 0x0017), BOARD_CB_PCIDIO96H },
{ PCI_VDEVICE(NI, 0x0160), BOARD_NI_PCIDIO96 },
{ PCI_VDEVICE(NI, 0x1630), BOARD_NI_PCIDIO96B },
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) | \
BIT(IIO_CHAN_INFO_SAMP_FREQ), \
.scan_index = idx, \
- .scan_type = IIO_ST('s', 16, 16, IIO_BE), \
+ .scan_type = { \
+ .sign = 's', \
+ .realbits = 16, \
+ .storagebits = 16, \
+ .endianness = IIO_BE, \
+ }, \
}
static const struct iio_chan_spec hmc5843_channels[] = {
imx_drm_device_put();
- drm_mode_config_cleanup(imxdrm->drm);
+ drm_vblank_cleanup(imxdrm->drm);
drm_kms_helper_poll_fini(imxdrm->drm);
+ drm_mode_config_cleanup(imxdrm->drm);
return 0;
}
if (!file->is_master)
return;
- for (i = 0; i < 4; i++)
- imx_drm_disable_vblank(drm , i);
+ for (i = 0; i < MAX_CRTC; i++)
+ imx_drm_disable_vblank(drm, i);
}
static const struct file_operations imx_drm_driver_fops = {
struct imx_drm_device *imxdrm = __imx_drm_device();
int ret;
- drm_crtc_init(imxdrm->drm, imx_drm_crtc->crtc,
- imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs);
ret = drm_mode_crtc_set_gamma_size(imx_drm_crtc->crtc, 256);
if (ret)
return ret;
drm_crtc_helper_add(imx_drm_crtc->crtc,
imx_drm_crtc->imx_drm_helper_funcs.crtc_helper_funcs);
+ drm_crtc_init(imxdrm->drm, imx_drm_crtc->crtc,
+ imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs);
+
drm_mode_group_reinit(imxdrm->drm);
return 0;
ret = drm_mode_group_init_legacy_group(imxdrm->drm,
&imxdrm->drm->primary->mode_group);
if (ret)
- goto err_init;
+ goto err_kms;
ret = drm_vblank_init(imxdrm->drm, MAX_CRTC);
if (ret)
- goto err_init;
+ goto err_kms;
/*
* with vblank_disable_allowed = true, vblank interrupt will be disabled
*/
imxdrm->drm->vblank_disable_allowed = true;
- if (!imx_drm_device_get())
+ if (!imx_drm_device_get()) {
ret = -EINVAL;
+ goto err_vblank;
+ }
- ret = 0;
+ mutex_unlock(&imxdrm->mutex);
+ return 0;
-err_init:
+err_vblank:
+ drm_vblank_cleanup(drm);
+err_kms:
+ drm_kms_helper_poll_fini(drm);
+ drm_mode_config_cleanup(drm);
mutex_unlock(&imxdrm->mutex);
return ret;
mutex_lock(&imxdrm->mutex);
+ /*
+ * The vblank arrays are dimensioned by MAX_CRTC - we can't
+ * pass IDs of MAX_CRTC or above to those functions.
+ */
+ if (imxdrm->pipes >= MAX_CRTC) {
+ ret = -EINVAL;
+ goto err_busy;
+ }
+
if (imxdrm->drm->open_count) {
ret = -EBUSY;
goto err_busy;
return 0;
err_register:
+ list_del(&imx_drm_crtc->list);
kfree(imx_drm_crtc);
err_alloc:
err_busy:
struct drm_encoder encoder;
struct imx_drm_encoder *imx_drm_encoder;
struct device *dev;
- spinlock_t enable_lock; /* serializes tve_enable/disable */
spinlock_t lock; /* register lock */
bool enabled;
int mode;
static void tve_enable(struct imx_tve *tve)
{
- unsigned long flags;
int ret;
- spin_lock_irqsave(&tve->enable_lock, flags);
if (!tve->enabled) {
tve->enabled = true;
clk_prepare_enable(tve->clk);
TVE_CD_SM_IEN |
TVE_CD_LM_IEN |
TVE_CD_MON_END_IEN);
-
- spin_unlock_irqrestore(&tve->enable_lock, flags);
}
static void tve_disable(struct imx_tve *tve)
{
- unsigned long flags;
int ret;
- spin_lock_irqsave(&tve->enable_lock, flags);
if (tve->enabled) {
tve->enabled = false;
ret = regmap_update_bits(tve->regmap, TVE_COM_CONF_REG,
TVE_IPU_CLK_EN | TVE_EN, 0);
clk_disable_unprepare(tve->clk);
}
- spin_unlock_irqrestore(&tve->enable_lock, flags);
}
static int tve_setup_tvout(struct imx_tve *tve)
tve->dev = &pdev->dev;
spin_lock_init(&tve->lock);
- spin_lock_init(&tve->enable_lock);
ddc_node = of_parse_phandle(np, "ddc", 0);
if (ddc_node) {
},
};
+static DEFINE_MUTEX(ipu_client_id_mutex);
static int ipu_client_id;
-static int ipu_add_subdevice_pdata(struct device *dev,
- const struct ipu_platform_reg *reg)
-{
- struct platform_device *pdev;
-
- pdev = platform_device_register_data(dev, reg->name, ipu_client_id++,
- ®->pdata, sizeof(struct ipu_platform_reg));
-
- return PTR_ERR_OR_ZERO(pdev);
-}
-
static int ipu_add_client_devices(struct ipu_soc *ipu)
{
- int ret;
- int i;
+ struct device *dev = ipu->dev;
+ unsigned i;
+ int id, ret;
+
+ mutex_lock(&ipu_client_id_mutex);
+ id = ipu_client_id;
+ ipu_client_id += ARRAY_SIZE(client_reg);
+ mutex_unlock(&ipu_client_id_mutex);
for (i = 0; i < ARRAY_SIZE(client_reg); i++) {
const struct ipu_platform_reg *reg = &client_reg[i];
- ret = ipu_add_subdevice_pdata(ipu->dev, reg);
- if (ret)
+ struct platform_device *pdev;
+
+ pdev = platform_device_register_data(dev, reg->name,
+ id++, ®->pdata, sizeof(reg->pdata));
+
+ if (IS_ERR(pdev))
goto err_register;
}
return 0;
err_register:
- platform_device_unregister_children(to_platform_device(ipu->dev));
+ platform_device_unregister_children(to_platform_device(dev));
return ret;
}
size_t canon_head;
size_t echo_head;
size_t echo_commit;
+ size_t echo_mark;
DECLARE_BITMAP(char_map, 256);
/* private to n_tty_receive_overrun (single-threaded) */
{
ldata->read_head = ldata->canon_head = ldata->read_tail = 0;
ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0;
+ ldata->echo_mark = 0;
ldata->line_start = 0;
ldata->erasing = 0;
size_t head;
head = ldata->echo_head;
+ ldata->echo_mark = head;
old = ldata->echo_commit - ldata->echo_tail;
/* Process committed echoes if the accumulated # of bytes
size_t echoed;
if ((!L_ECHO(tty) && !L_ECHONL(tty)) ||
- ldata->echo_commit == ldata->echo_tail)
+ ldata->echo_mark == ldata->echo_tail)
return;
mutex_lock(&ldata->output_lock);
+ ldata->echo_commit = ldata->echo_mark;
echoed = __process_echoes(tty);
mutex_unlock(&ldata->output_lock);
tty->ops->flush_chars(tty);
}
+/* NB: echo_mark and echo_head should be equivalent here */
static void flush_echoes(struct tty_struct *tty)
{
struct n_tty_data *ldata = tty->disc_data;
if (offset == UART_LCR) {
int tries = 1000;
while (tries--) {
- if (value == p->serial_in(p, UART_LCR))
+ unsigned int lcr = p->serial_in(p, UART_LCR);
+ if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR))
return;
dw8250_force_idle(p);
writeb(value, p->membase + (UART_LCR << p->regshift));
if (offset == UART_LCR) {
int tries = 1000;
while (tries--) {
- if (value == p->serial_in(p, UART_LCR))
+ unsigned int lcr = p->serial_in(p, UART_LCR);
+ if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR))
return;
dw8250_force_idle(p);
writel(value, p->membase + (UART_LCR << p->regshift));
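Editor's note on the two hunks above: the driver verifies LCR writes by reading the register back and retrying, and the fix masks UART_LCR_SPAR (stick parity, bit 5, value 0x20) out of both sides of the comparison. For example, a read-back of 0x23 now matches a written 0x03 instead of forcing another force-idle/rewrite cycle through all 1000 tries.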
static const struct acpi_device_id dw8250_acpi_match[] = {
{ "INT33C4", 0 },
{ "INT33C5", 0 },
+ { "INT3434", 0 },
+ { "INT3435", 0 },
{ "80860F0A", 0 },
{ },
};
continue;
}
+#ifdef SUPPORT_SYSRQ
/*
* uart_handle_sysrq_char() doesn't work if
* spinlocked, for some reason
}
spin_lock(&port->lock);
}
+#endif
port->icount.rx++;
return atomic_long_add_return(delta, (atomic_long_t *)&sem->count);
}
+/*
+ * ldsem_cmpxchg() updates @*old with the last-known sem->count value.
+ * Returns 1 if count was successfully changed; @*old will have @new value.
+ * Returns 0 if count was not changed; @*old will have most recent sem->count
+ */
static inline int ldsem_cmpxchg(long *old, long new, struct ld_semaphore *sem)
{
- long tmp = *old;
- *old = atomic_long_cmpxchg(&sem->count, *old, new);
- return *old == tmp;
+ long tmp = atomic_long_cmpxchg(&sem->count, *old, new);
+ if (tmp == *old) {
+ *old = new;
+ return 1;
+ } else {
+ *old = tmp;
+ return 0;
+ }
}
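A hypothetical caller, sketched by the editor to show why the new contract is convenient: because a failed attempt refreshes *old with the latest sem->count, the usual cmpxchg retry loop needs no separate re-read. ldsem_try_read_sketch and the "count >= 0 means no writer" test are illustrative, not part of the patch:

	static inline int ldsem_try_read_sketch(struct ld_semaphore *sem)
	{
		long count = sem->count;	/* racy read; cmpxchg validates */

		while (count >= 0) {
			/* on failure, count already holds the most
			 * recent sem->count, so just loop */
			if (ldsem_cmpxchg(&count, count + 1, sem))
				return 1;	/* won the race */
		}
		return 0;	/* a writer holds the semaphore */
	}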
/*
: CI_ROLE_GADGET;
}
+ /* only update vbus status for peripheral */
+ if (ci->role == CI_ROLE_GADGET)
+ ci_handle_vbus_change(ci);
+
ret = ci_role_start(ci, ci->role);
if (ret) {
dev_err(dev, "can't start %s role\n", ci_role(ci)->name);
return ret;
disable_reg:
- regulator_disable(ci->platdata->reg_vbus);
+ if (ci->platdata->reg_vbus)
+ regulator_disable(ci->platdata->reg_vbus);
put_hcd:
usb_put_hcd(hcd);
pm_runtime_no_callbacks(&ci->gadget.dev);
pm_runtime_enable(&ci->gadget.dev);
- /* Update ci->vbus_active */
- ci_handle_vbus_change(ci);
-
return retval;
destroy_eps:
{
/* need autopm_get/put here to ensure the usbcore sees the new value */
int rv = usb_autopm_get_interface(intf);
- if (rv < 0)
- goto err;
intf->needs_remote_wakeup = on;
- usb_autopm_put_interface(intf);
-err:
- return rv;
+ if (!rv)
+ usb_autopm_put_interface(intf);
+ return 0;
}
static int wdm_probe(struct usb_interface *intf, const struct usb_device_id *id)
if (IS_ERR(regs))
return PTR_ERR(regs);
- usb_phy_set_suspend(dwc->usb2_phy, 0);
- usb_phy_set_suspend(dwc->usb3_phy, 0);
-
spin_lock_init(&dwc->lock);
platform_set_drvdata(pdev, dwc);
goto err0;
}
+ usb_phy_set_suspend(dwc->usb2_phy, 0);
+ usb_phy_set_suspend(dwc->usb3_phy, 0);
+
ret = dwc3_event_buffers_setup(dwc);
if (ret) {
dev_err(dwc->dev, "failed to setup event buffers\n");
dwc3_event_buffers_cleanup(dwc);
err1:
+ usb_phy_set_suspend(dwc->usb2_phy, 1);
+ usb_phy_set_suspend(dwc->usb3_phy, 1);
dwc3_core_exit(dwc);
err0:
struct ohci_hcd *ohci;
int retval;
struct usb_hcd *hcd = NULL;
-
- if (pdev->num_resources != 2) {
- pr_debug("hcd probe: invalid num_resources");
- return -ENODEV;
+ struct device *dev = &pdev->dev;
+ struct resource *res;
+ int irq;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_dbg(dev, "hcd probe: missing memory resource\n");
+ return -ENXIO;
}
- if ((pdev->resource[0].flags != IORESOURCE_MEM)
- || (pdev->resource[1].flags != IORESOURCE_IRQ)) {
- pr_debug("hcd probe: invalid resource type\n");
- return -ENODEV;
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_dbg(dev, "hcd probe: missing irq resource\n");
+ return irq;
}
hcd = usb_create_hcd(driver, &pdev->dev, "at91");
if (!hcd)
return -ENOMEM;
- hcd->rsrc_start = pdev->resource[0].start;
- hcd->rsrc_len = resource_size(&pdev->resource[0]);
+ hcd->rsrc_start = res->start;
+ hcd->rsrc_len = resource_size(res);
if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, hcd_name)) {
pr_debug("request_mem_region failed\n");
ohci->num_ports = board->ports;
at91_start_hc(pdev);
- retval = usb_add_hcd(hcd, pdev->resource[1].start, IRQF_SHARED);
+ retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (retval == 0)
return retval;
* any other sleep) on Haswell machines with LPT and LPT-LP
* with the new Intel BIOS
*/
- xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
+ /* Limit the quirk to only known vendors, as this triggers
+ * yet another BIOS bug on some other machines
+ * https://bugzilla.kernel.org/show_bug.cgi?id=66171
+ */
+ if (pdev->subsystem_vendor == PCI_VENDOR_ID_HP)
+ xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
}
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
in host mode, low speed.
config FSL_USB2_OTG
- bool "Freescale USB OTG Transceiver Driver"
+ tristate "Freescale USB OTG Transceiver Driver"
depends on USB_EHCI_FSL && USB_FSL_USB2 && PM_RUNTIME
+ depends on USB
select USB_OTG
select USB_PHY
help
config ISP1301_OMAP
tristate "Philips ISP1301 with OMAP OTG"
depends on I2C && ARCH_OMAP_OTG
+ depends on USB
select USB_PHY
help
If you say yes here you get support for the Philips ISP1301
tegra_phy->pad_regs = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
- if (!tegra_phy->regs) {
+ if (!tegra_phy->pad_regs) {
dev_err(&pdev->dev, "Failed to remap UTMI Pad regs\n");
return -ENOMEM;
}
static inline u8 twl6030_readb(struct twl6030_usb *twl, u8 module, u8 address)
{
- u8 data, ret = 0;
+ u8 data;
+ int ret;
ret = twl_i2c_read_u8(module, &data, address);
if (ret >= 0)
#define ZTE_PRODUCT_MF628 0x0015
#define ZTE_PRODUCT_MF626 0x0031
#define ZTE_PRODUCT_MC2718 0xffe8
+#define ZTE_PRODUCT_AC2726 0xfff1
#define BENQ_VENDOR_ID 0x04a5
#define BENQ_PRODUCT_H10 0x4068
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) },
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) },
{ USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) },
{ USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) },
{ USB_DEVICE(DLINK_VENDOR_ID, DLINK_PRODUCT_DWM_652) },
{ USB_DEVICE(0x19d2, 0xfffd) },
{ USB_DEVICE(0x19d2, 0xfffc) },
{ USB_DEVICE(0x19d2, 0xfffb) },
- /* AC2726, AC8710_V3 */
- { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xfff1, 0xff, 0xff, 0xff) },
+ /* AC8710_V3 */
{ USB_DEVICE(0x19d2, 0xfff6) },
{ USB_DEVICE(0x19d2, 0xfff7) },
{ USB_DEVICE(0x19d2, 0xfff8) },
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/of_address.h>
-#include <linux/miscdevice.h>
#define PM_RSTC 0x1c
#define PM_WDOG 0x24
#include <linux/platform_device.h>
#include <linux/module.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/timer.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/watchdog.h>
-#include <linux/miscdevice.h>
#include <linux/seq_file.h>
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/miscdevice.h>
#include <linux/uaccess.h>
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/bitops.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/platform_device.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/kernel.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/watchdog.h>
-#include <linux/miscdevice.h>
#include <linux/moduleparam.h>
#include <linux/platform_device.h>
#if defined CONFIG_PNP
/* now that the user has specified an IO port and we haven't detected
* any devices, disable pnp support */
+ if (isapnp)
+ pnp_unregister_driver(&scl200wdt_pnp_driver);
isapnp = 0;
- pnp_unregister_driver(&scl200wdt_pnp_driver);
#endif
if (!request_region(io, io_len, SC1200_MODULE_NAME)) {
#include <linux/init.h>
#include <linux/types.h>
#include <linux/spinlock.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/pm_runtime.h>
#include <linux/fs.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/timer.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/notifier.h>
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/platform_device.h>
#include <linux/stmp3xxx_rtc_wdt.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
-#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/moduleparam.h>
-#include <linux/miscdevice.h>
#include <linux/err.h>
#include <linux/uaccess.h>
#include <linux/watchdog.h>
if (!path)
return -ENOMEM;
- if (metadata) {
- key.objectid = bytenr;
- key.type = BTRFS_METADATA_ITEM_KEY;
- key.offset = offset;
- } else {
- key.objectid = bytenr;
- key.type = BTRFS_EXTENT_ITEM_KEY;
- key.offset = offset;
- }
-
if (!trans) {
path->skip_locking = 1;
path->search_commit_root = 1;
}
+
+search_again:
+ key.objectid = bytenr;
+ key.offset = offset;
+ if (metadata)
+ key.type = BTRFS_METADATA_ITEM_KEY;
+ else
+ key.type = BTRFS_EXTENT_ITEM_KEY;
+
again:
ret = btrfs_search_slot(trans, root->fs_info->extent_root,
&key, path, 0, 0);
goto out_free;
if (ret > 0 && metadata && key.type == BTRFS_METADATA_ITEM_KEY) {
- metadata = 0;
if (path->slots[0]) {
path->slots[0]--;
btrfs_item_key_to_cpu(path->nodes[0], &key,
mutex_lock(&head->mutex);
mutex_unlock(&head->mutex);
btrfs_put_delayed_ref(&head->node);
- goto again;
+ goto search_again;
}
if (head->extent_op && head->extent_op->update_flags)
extent_flags |= head->extent_op->flags_to_set;
err = mutex_lock_killable_nested(&dir->i_mutex, I_MUTEX_PARENT);
if (err == -EINTR)
- goto out;
+ goto out_drop_write;
dentry = lookup_one_len(vol_args->name, parent, namelen);
if (IS_ERR(dentry)) {
err = PTR_ERR(dentry);
dput(dentry);
out_unlock_dir:
mutex_unlock(&dir->i_mutex);
+out_drop_write:
mnt_drop_write_file(file);
out:
kfree(vol_args);
root_objectid == BTRFS_CHUNK_TREE_OBJECTID ||
root_objectid == BTRFS_DEV_TREE_OBJECTID ||
root_objectid == BTRFS_TREE_LOG_OBJECTID ||
- root_objectid == BTRFS_CSUM_TREE_OBJECTID)
+ root_objectid == BTRFS_CSUM_TREE_OBJECTID ||
+ root_objectid == BTRFS_UUID_TREE_OBJECTID ||
+ root_objectid == BTRFS_QUOTA_TREE_OBJECTID)
return 1;
return 0;
}
}
/*
- * helper to update/delete the 'address of tree root -> reloc tree'
+ * helper to delete the 'address of tree root -> reloc tree'
* mapping
*/
-static int __update_reloc_root(struct btrfs_root *root, int del)
+static void __del_reloc_root(struct btrfs_root *root)
{
struct rb_node *rb_node;
struct mapping_node *node = NULL;
spin_lock(&rc->reloc_root_tree.lock);
rb_node = tree_search(&rc->reloc_root_tree.rb_root,
- root->commit_root->start);
+ root->node->start);
if (rb_node) {
node = rb_entry(rb_node, struct mapping_node, rb_node);
rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
spin_unlock(&rc->reloc_root_tree.lock);
if (!node)
- return 0;
+ return;
BUG_ON((struct btrfs_root *)node->data != root);
- if (!del) {
- spin_lock(&rc->reloc_root_tree.lock);
- node->bytenr = root->node->start;
- rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
- node->bytenr, &node->rb_node);
- spin_unlock(&rc->reloc_root_tree.lock);
- if (rb_node)
- backref_tree_panic(rb_node, -EEXIST, node->bytenr);
- } else {
- spin_lock(&root->fs_info->trans_lock);
- list_del_init(&root->root_list);
- spin_unlock(&root->fs_info->trans_lock);
- kfree(node);
+ spin_lock(&root->fs_info->trans_lock);
+ list_del_init(&root->root_list);
+ spin_unlock(&root->fs_info->trans_lock);
+ kfree(node);
+}
+
+/*
+ * helper to update the 'address of tree root -> reloc tree'
+ * mapping
+ */
+static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr)
+{
+ struct rb_node *rb_node;
+ struct mapping_node *node = NULL;
+ struct reloc_control *rc = root->fs_info->reloc_ctl;
+
+ spin_lock(&rc->reloc_root_tree.lock);
+ rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+ root->node->start);
+ if (rb_node) {
+ node = rb_entry(rb_node, struct mapping_node, rb_node);
+ rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
}
+ spin_unlock(&rc->reloc_root_tree.lock);
+
+ if (!node)
+ return 0;
+ BUG_ON((struct btrfs_root *)node->data != root);
+
+ spin_lock(&rc->reloc_root_tree.lock);
+ node->bytenr = new_bytenr;
+ rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
+ node->bytenr, &node->rb_node);
+ spin_unlock(&rc->reloc_root_tree.lock);
+ if (rb_node)
+ backref_tree_panic(rb_node, -EEXIST, node->bytenr);
return 0;
}
{
struct btrfs_root *reloc_root;
struct btrfs_root_item *root_item;
- int del = 0;
int ret;
if (!root->reloc_root)
if (root->fs_info->reloc_ctl->merge_reloc_tree &&
btrfs_root_refs(root_item) == 0) {
root->reloc_root = NULL;
- del = 1;
+ __del_reloc_root(reloc_root);
}
- __update_reloc_root(reloc_root, del);
-
if (reloc_root->commit_root != reloc_root->node) {
btrfs_set_root_node(root_item, reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
while (!list_empty(list)) {
reloc_root = list_entry(list->next, struct btrfs_root,
root_list);
- __update_reloc_root(reloc_root, 1);
+ __del_reloc_root(reloc_root);
free_extent_buffer(reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
kfree(reloc_root);
ret = merge_reloc_root(rc, root);
if (ret) {
- __update_reloc_root(reloc_root, 1);
+ __del_reloc_root(reloc_root);
free_extent_buffer(reloc_root->node);
free_extent_buffer(reloc_root->commit_root);
kfree(reloc_root);
btrfs_std_error(root->fs_info, ret);
if (!list_empty(&reloc_roots))
free_reloc_roots(&reloc_roots);
+
+ /* new reloc root may be added */
+ mutex_lock(&root->fs_info->reloc_mutex);
+ list_splice_init(&rc->reloc_roots, &reloc_roots);
+ mutex_unlock(&root->fs_info->reloc_mutex);
+ if (!list_empty(&reloc_roots))
+ free_reloc_roots(&reloc_roots);
}
BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root));
BUG_ON(rc->stage == UPDATE_DATA_PTRS &&
root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID);
+ if (root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) {
+ if (buf == root->node)
+ __update_reloc_root(root, cow->start);
+ }
+
level = btrfs_header_level(buf);
if (btrfs_header_generation(buf) <=
btrfs_root_last_snapshot(&root->root_item))
}
if (!access_ok(VERIFY_READ, arg->clone_sources,
- sizeof(*arg->clone_sources *
- arg->clone_sources_count))) {
+ sizeof(*arg->clone_sources) *
+ arg->clone_sources_count)) {
ret = -EFAULT;
goto out;
}
} else {
printk(KERN_INFO "btrfs: setting nodatacow\n");
}
- info->compress_type = BTRFS_COMPRESS_NONE;
btrfs_clear_opt(info->mount_opt, COMPRESS);
btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
btrfs_set_opt(info->mount_opt, NODATACOW);
btrfs_set_fs_incompat(info, COMPRESS_LZO);
} else if (strncmp(args[0].from, "no", 2) == 0) {
compress_type = "no";
- info->compress_type = BTRFS_COMPRESS_NONE;
btrfs_clear_opt(info->mount_opt, COMPRESS);
btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
compress_force = false;
btrfs_set_opt(info->mount_opt, FORCE_COMPRESS);
pr_info("btrfs: force %s compression\n",
compress_type);
- } else
+ } else if (btrfs_test_opt(root, COMPRESS)) {
pr_info("btrfs: use %s compression\n",
compress_type);
+ }
break;
case Opt_ssd:
printk(KERN_INFO "btrfs: use ssd allocation scheme\n");
if (err < 0) {
SetPageError(page);
goto out;
- } else if (err < PAGE_CACHE_SIZE) {
+ } else {
+ if (err < PAGE_CACHE_SIZE) {
/* zero fill remainder of page */
- zero_user_segment(page, err, PAGE_CACHE_SIZE);
+ zero_user_segment(page, err, PAGE_CACHE_SIZE);
+ } else {
+ flush_dcache_page(page);
+ }
}
SetPageUptodate(page);
struct ceph_mds_reply_inode *ininfo;
struct ceph_vino vino;
struct ceph_fs_client *fsc = ceph_sb_to_client(sb);
- int i = 0;
int err = 0;
dout("fill_trace %p is_dentry %d is_target %d\n", req,
}
}
+ if (rinfo->head->is_target) {
+ vino.ino = le64_to_cpu(rinfo->targeti.in->ino);
+ vino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
+
+ in = ceph_get_inode(sb, vino);
+ if (IS_ERR(in)) {
+ err = PTR_ERR(in);
+ goto done;
+ }
+ req->r_target_inode = in;
+
+ err = fill_inode(in, &rinfo->targeti, NULL,
+ session, req->r_request_started,
+ (le32_to_cpu(rinfo->head->result) == 0) ?
+ req->r_fmode : -1,
+ &req->r_caps_reservation);
+ if (err < 0) {
+ pr_err("fill_inode badness %p %llx.%llx\n",
+ in, ceph_vinop(in));
+ goto done;
+ }
+ }
+
/*
* ignore null lease/binding on snapdir ENOENT, or else we
* will have trouble splicing in the virtual snapdir later
ceph_dentry(req->r_old_dentry)->offset);
dn = req->r_old_dentry; /* use old_dentry */
- in = dn->d_inode;
}
/* null dentry? */
}
/* attach proper inode */
- ininfo = rinfo->targeti.in;
- vino.ino = le64_to_cpu(ininfo->ino);
- vino.snap = le64_to_cpu(ininfo->snapid);
- in = dn->d_inode;
- if (!in) {
- in = ceph_get_inode(sb, vino);
- if (IS_ERR(in)) {
- pr_err("fill_trace bad get_inode "
- "%llx.%llx\n", vino.ino, vino.snap);
- err = PTR_ERR(in);
- d_drop(dn);
- goto done;
- }
+ if (!dn->d_inode) {
+ ihold(in);
dn = splice_dentry(dn, in, &have_lease, true);
if (IS_ERR(dn)) {
err = PTR_ERR(dn);
goto done;
}
req->r_dentry = dn; /* may have spliced */
- ihold(in);
- } else if (ceph_ino(in) == vino.ino &&
- ceph_snap(in) == vino.snap) {
- ihold(in);
- } else {
+ } else if (dn->d_inode && dn->d_inode != in) {
dout(" %p links to %p %llx.%llx, not %llx.%llx\n",
- dn, in, ceph_ino(in), ceph_snap(in),
- vino.ino, vino.snap);
+ dn, dn->d_inode, ceph_vinop(dn->d_inode),
+ ceph_vinop(in));
have_lease = false;
- in = NULL;
}
if (have_lease)
update_dentry_lease(dn, rinfo->dlease, session,
req->r_request_started);
dout(" final dn %p\n", dn);
- i++;
- } else if ((req->r_op == CEPH_MDS_OP_LOOKUPSNAP ||
- req->r_op == CEPH_MDS_OP_MKSNAP) && !req->r_aborted) {
+ } else if (!req->r_aborted &&
+ (req->r_op == CEPH_MDS_OP_LOOKUPSNAP ||
+ req->r_op == CEPH_MDS_OP_MKSNAP)) {
struct dentry *dn = req->r_dentry;
/* fill out a snapdir LOOKUPSNAP dentry */
ininfo = rinfo->targeti.in;
vino.ino = le64_to_cpu(ininfo->ino);
vino.snap = le64_to_cpu(ininfo->snapid);
- in = ceph_get_inode(sb, vino);
- if (IS_ERR(in)) {
- pr_err("fill_inode get_inode badness %llx.%llx\n",
- vino.ino, vino.snap);
- err = PTR_ERR(in);
- d_delete(dn);
- goto done;
- }
dout(" linking snapped dir %p to dn %p\n", in, dn);
+ ihold(in);
dn = splice_dentry(dn, in, NULL, true);
if (IS_ERR(dn)) {
err = PTR_ERR(dn);
goto done;
}
req->r_dentry = dn; /* may have spliced */
- ihold(in);
- rinfo->head->is_dentry = 1; /* fool notrace handlers */
- }
-
- if (rinfo->head->is_target) {
- vino.ino = le64_to_cpu(rinfo->targeti.in->ino);
- vino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
-
- if (in == NULL || ceph_ino(in) != vino.ino ||
- ceph_snap(in) != vino.snap) {
- in = ceph_get_inode(sb, vino);
- if (IS_ERR(in)) {
- err = PTR_ERR(in);
- goto done;
- }
- }
- req->r_target_inode = in;
-
- err = fill_inode(in,
- &rinfo->targeti, NULL,
- session, req->r_request_started,
- (le32_to_cpu(rinfo->head->result) == 0) ?
- req->r_fmode : -1,
- &req->r_caps_reservation);
- if (err < 0) {
- pr_err("fill_inode badness %p %llx.%llx\n",
- in, ceph_vinop(in));
- goto done;
- }
}
-
done:
dout("fill_trace done err=%d\n", err);
return err;
struct qstr dname;
struct dentry *dn;
struct inode *in;
- int err = 0, i;
+ int err = 0, ret, i;
struct inode *snapdir = NULL;
struct ceph_mds_request_head *rhead = req->r_request->front.iov_base;
struct ceph_dentry_info *di;
ceph_fill_dirfrag(parent->d_inode, rinfo->dir_dir);
}
+ /* FIXME: release caps/leases if error occurs */
for (i = 0; i < rinfo->dir_nr; i++) {
struct ceph_vino vino;
err = -ENOMEM;
goto out;
}
- err = ceph_init_dentry(dn);
- if (err < 0) {
+ ret = ceph_init_dentry(dn);
+ if (ret < 0) {
dput(dn);
+ err = ret;
goto out;
}
} else if (dn->d_inode &&
spin_unlock(&parent->d_lock);
}
- di = dn->d_fsdata;
- di->offset = ceph_make_fpos(frag, i + r_readdir_offset);
-
/* inode */
if (dn->d_inode) {
in = dn->d_inode;
err = PTR_ERR(in);
goto out;
}
- dn = splice_dentry(dn, in, NULL, false);
- if (IS_ERR(dn))
- dn = NULL;
}
if (fill_inode(in, &rinfo->dir_in[i], NULL, session,
req->r_request_started, -1,
&req->r_caps_reservation) < 0) {
pr_err("fill_inode badness on %p\n", in);
+ if (!dn->d_inode)
+ iput(in);
+ d_drop(dn);
goto next_item;
}
- if (dn)
- update_dentry_lease(dn, rinfo->dir_dlease[i],
- req->r_session,
- req->r_request_started);
+
+ if (!dn->d_inode) {
+ dn = splice_dentry(dn, in, NULL, false);
+ if (IS_ERR(dn)) {
+ err = PTR_ERR(dn);
+ dn = NULL;
+ goto next_item;
+ }
+ }
+
+ di = dn->d_fsdata;
+ di->offset = ceph_make_fpos(frag, i + r_readdir_offset);
+
+ update_dentry_lease(dn, rinfo->dir_dlease[i],
+ req->r_session,
+ req->r_request_started);
next_item:
if (dn)
dput(dn);
}
- req->r_did_prepopulate = true;
+ if (err == 0)
+ req->r_did_prepopulate = true;
out:
if (snapdir) {
if (!tcount)
return 0;
}
- mask = ~(~0ul << tcount*8);
+ mask = bytemask_from_count(tcount);
return unlikely(!!((a ^ b) & mask));
}
* do a "get_unaligned()" if this helps and is sufficiently
* fast.
*
- * - Little-endian machines (so that we can generate the mask
- * of low bytes efficiently). Again, we *could* do a byte
- * swapping load on big-endian architectures if that is not
- * expensive enough to make the optimization worthless.
- *
* - non-CONFIG_DEBUG_PAGEALLOC configurations (so that we
* do not trap on the (extremely unlikely) case of a page
* crossing operation.
if (!len)
goto done;
}
- mask = ~(~0ul << len*8);
+ mask = bytemask_from_count(len);
hash += mask & a;
done:
return fold_hash(hash);
return rp;
}
+static void
+nfsd_reply_cache_unhash(struct svc_cacherep *rp)
+{
+ hlist_del_init(&rp->c_hash);
+ list_del_init(&rp->c_lru);
+}
+
static void
nfsd_reply_cache_free_locked(struct svc_cacherep *rp)
{
rp = list_first_entry(&lru_head, struct svc_cacherep, c_lru);
if (nfsd_cache_entry_expired(rp) ||
num_drc_entries >= max_drc_entries) {
- lru_put_end(rp);
+ nfsd_reply_cache_unhash(rp);
prune_cache_entries();
goto search_cache;
}
{
struct proc_dir_entry *pde = PDE(file_inode(file));
unsigned long rv = -EIO;
- unsigned long (*get_area)(struct file *, unsigned long, unsigned long,
- unsigned long, unsigned long) = NULL;
+
if (use_pde(pde)) {
+ typeof(proc_reg_get_unmapped_area) *get_area;
+
+ get_area = pde->proc_fops->get_unmapped_area;
#ifdef CONFIG_MMU
- get_area = current->mm->get_unmapped_area;
+ if (!get_area)
+ get_area = current->mm->get_unmapped_area;
#endif
- if (pde->proc_fops->get_unmapped_area)
- get_area = pde->proc_fops->get_unmapped_area;
+
if (get_area)
rv = get_area(file, orig_addr, len, pgoff, flags);
+ else
+ rv = orig_addr;
unuse_pde(pde);
}
return rv;
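Editor's note: "typeof(proc_reg_get_unmapped_area) *get_area" declares the local as a pointer to a function with the same prototype as the enclosing function, replacing the hand-written five-argument function-pointer declaration removed above.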
struct sysfs_dirent *attr_sd = file->f_path.dentry->d_fsdata;
struct kobject *kobj = attr_sd->s_parent->s_dir.kobj;
struct sysfs_open_file *of;
- bool has_read, has_write, has_mmap;
+ bool has_read, has_write;
int error = -EACCES;
/* need attr_sd for attr and ops, its parent for kobj */
has_read = battr->read || battr->mmap;
has_write = battr->write || battr->mmap;
- has_mmap = battr->mmap;
} else {
const struct sysfs_ops *ops = sysfs_file_ops(attr_sd);
has_read = ops->show;
has_write = ops->store;
- has_mmap = false;
}
/* check perms and supported operations */
* open file has a separate mutex, it's okay as long as those don't
* happen on the same file. At this point, we can't easily give
* each file a separate locking class. Let's differentiate on
- * whether the file has mmap or not for now.
+ * whether the file is bin or not for now.
*/
- if (has_mmap)
+ if (sysfs_is_bin(attr_sd))
mutex_init(&of->mutex);
else
mutex_init(&of->mutex);
struct xfs_mount *mp,
struct fstrim_range __user *urange)
{
- struct request_queue *q = mp->m_ddev_targp->bt_bdev->bd_disk->queue;
+ struct request_queue *q = bdev_get_queue(mp->m_ddev_targp->bt_bdev);
unsigned int granularity = q->limits.discard_granularity;
struct fstrim_range range;
xfs_daddr_t start, end, minlen;
* matter as trimming blocks is an advisory interface.
*/
if (range.start >= XFS_FSB_TO_B(mp, mp->m_sb.sb_dblocks) ||
- range.minlen > XFS_FSB_TO_B(mp, XFS_ALLOC_AG_MAX_USABLE(mp)))
+ range.minlen > XFS_FSB_TO_B(mp, XFS_ALLOC_AG_MAX_USABLE(mp)) ||
+ range.len < mp->m_sb.sb_blocksize)
return -XFS_ERROR(EINVAL);
start = BTOBB(range.start);
*/
nfree = 0;
for (agno = nagcount - 1; agno >= oagcount; agno--, new -= agsize) {
+ __be32 *agfl_bno;
+
/*
* AG freespace header block
*/
agfl->agfl_seqno = cpu_to_be32(agno);
uuid_copy(&agfl->agfl_uuid, &mp->m_sb.sb_uuid);
}
+
+ agfl_bno = XFS_BUF_TO_AGFL_BNO(mp, bp);
for (bucket = 0; bucket < XFS_AGFL_SIZE(mp); bucket++)
- agfl->agfl_bno[bucket] = cpu_to_be32(NULLAGBLOCK);
+ agfl_bno[bucket] = cpu_to_be32(NULLAGBLOCK);
error = xfs_bwrite(bp);
xfs_buf_relse(bp);
return -XFS_ERROR(EPERM);
if (copy_from_user(&al_hreq, arg, sizeof(xfs_fsop_attrlist_handlereq_t)))
return -XFS_ERROR(EFAULT);
- if (al_hreq.buflen > XATTR_LIST_MAX)
+ if (al_hreq.buflen < sizeof(struct attrlist) ||
+ al_hreq.buflen > XATTR_LIST_MAX)
return -XFS_ERROR(EINVAL);
/*
if (copy_from_user(&al_hreq, arg,
sizeof(compat_xfs_fsop_attrlist_handlereq_t)))
return -XFS_ERROR(EFAULT);
- if (al_hreq.buflen > XATTR_LIST_MAX)
+ if (al_hreq.buflen < sizeof(struct attrlist) ||
+ al_hreq.buflen > XATTR_LIST_MAX)
return -XFS_ERROR(EINVAL);
/*
#endif
#ifndef pte_accessible
-# define pte_accessible(pte) ((void)(pte),1)
+# define pte_accessible(mm, pte) ((void)(pte), 1)
#endif
#ifndef flush_tlb_fix_spurious_fault
return (val + c->high_bits) & ~rhs;
}
+#ifndef zero_bytemask
+#ifdef CONFIG_64BIT
+#define zero_bytemask(mask) (~0ul << fls64(mask))
+#else
+#define zero_bytemask(mask) (~0ul << fls(mask))
+#endif /* CONFIG_64BIT */
+#endif /* zero_bytemask */
+
#endif /* _ASM_WORD_AT_A_TIME_H */
/* Is this the object we're looking for? */
bool (*compare_object)(const void *object, const void *index_key);
- /* How different are two objects, to a bit position in their keys? (or
- * -1 if they're the same)
+ /* How different is an object from an index key, to a bit position in
+ * their keys? (or -1 if they're the same)
*/
- int (*diff_objects)(const void *a, const void *b);
+ int (*diff_objects)(const void *object, const void *index_key);
/* Method to free an object. */
void (*free_object)(void *object);
#endif
-#define uninitialized_var(x) x
-
#ifndef __HAVE_BUILTIN_BSWAP16__
/* icc has this, but it's called _bswap16 */
#define __HAVE_BUILTIN_BSWAP16__
/* The hash is always the low bits of hash_len */
#ifdef __LITTLE_ENDIAN
#define HASH_LEN_DECLARE u32 hash; u32 len;
+ #define bytemask_from_count(cnt) (~(~0ul << (cnt)*8))
#else
#define HASH_LEN_DECLARE u32 len; u32 hash;
+ #define bytemask_from_count(cnt) (~(~0ul >> (cnt)*8))
#endif
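A worked example of the new helper, for a 64-bit word: bytemask_from_count(3) expands to ~(~0ul << 24) == 0x0000000000ffffff on little-endian, selecting the three low-order bytes that hold the first three characters of the word, and to ~(~0ul >> 24) == 0xffffff0000000000 on big-endian, selecting the three high-order bytes, which is where the first characters live there.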
/*
return 0;
}
-#define isolate_huge_page(p, l) false
+static inline bool isolate_huge_page(struct page *page, struct list_head *list)
+{
+ return false;
+}
#define putback_active_hugepage(p) do {} while (0)
#define is_hugepage_active(x) false
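Editor's note: converting the !CONFIG_HUGETLB_PAGE stub from "#define isolate_huge_page(p, l) false" to a static inline means the arguments are still type-checked and evaluated in such builds, so misuse that the macro silently swallowed now fails to compile.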
#include <uapi/linux/ipv6.h>
#define ipv6_optlen(p) (((p)->hdrlen+1) << 3)
+#define ipv6_authlen(p) (((p)->hdrlen+2) << 2)
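A worked comparison (editor's illustration, per RFC 4302): an AH header whose hdrlen field is 4 occupies ipv6_authlen == (4 + 2) << 2 == 24 bytes, while any other extension header with hdrlen 4 occupies ipv6_optlen == (4 + 1) << 3 == 40 bytes. AH is the one extension header measured in 32-bit rather than 64-bit units, which is why it needs its own macro.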
/*
* This structure contains configuration options per IPv6 link.
*/
(__x < 0) ? -__x : __x; \
})
-#if defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_DEBUG_ATOMIC_SLEEP)
+#if defined(CONFIG_MMU) && \
+ (defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_DEBUG_ATOMIC_SLEEP))
void might_fault(void);
#else
static inline void might_fault(void) { }
extern size_t vmcoreinfo_size;
extern size_t vmcoreinfo_max_size;
+/* flag to track if kexec reboot is in progress */
+extern bool kexec_in_progress;
+
int __init parse_crashkernel(char *cmdline, unsigned long long system_ram,
unsigned long long *crash_size, unsigned long long *crash_base);
int parse_crashkernel_high(char *cmdline, unsigned long long system_ram,
struct sec_pmic_dev {
struct device *dev;
struct sec_platform_data *pdata;
- struct regmap *regmap;
+ struct regmap *regmap_pmic;
+ struct regmap *regmap_rtc;
struct i2c_client *i2c;
struct i2c_client *rtc;
#define PHY_ID_KSZ8021 0x00221555
#define PHY_ID_KSZ8031 0x00221556
#define PHY_ID_KSZ8041 0x00221510
+/* undocumented */
+#define PHY_ID_KSZ8041RNLI 0x00221537
#define PHY_ID_KSZ8051 0x00221550
/* same id: ks8001 Rev. A/B, and ks8721 Rev 3. */
#define PHY_ID_KSZ8001 0x0022161A
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_NUMA_BALANCING
+extern bool pmd_trans_migrating(pmd_t pmd);
+extern void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd);
extern int migrate_misplaced_page(struct page *page,
struct vm_area_struct *vma, int node);
extern bool migrate_ratelimited(int node);
#else
+static inline bool pmd_trans_migrating(pmd_t pmd)
+{
+ return false;
+}
+static inline void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd)
+{
+}
static inline int migrate_misplaced_page(struct page *page,
struct vm_area_struct *vma, int node)
{
/* numa_scan_seq prevents two threads setting pte_numa */
int numa_scan_seq;
+#endif
+#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
+ /*
+ * An operation with batched TLB flushing is going on. Anything that
+ * can move process memory needs to flush the TLB when moving a
+ * PROT_NONE or PROT_NUMA mapped page.
+ */
+ bool tlb_flush_pending;
#endif
struct uprobes_state uprobes_state;
};
return mm->cpu_vm_mask_var;
}
+#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
+/*
+ * Memory barriers to keep this state in sync are graciously provided by
+ * the page table locks, outside of which no page table modifications happen.
+ * The barriers below prevent the compiler from re-ordering the instructions
+ * around the memory barriers that are already present in the code.
+ */
+static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
+{
+ barrier();
+ return mm->tlb_flush_pending;
+}
+static inline void set_tlb_flush_pending(struct mm_struct *mm)
+{
+ mm->tlb_flush_pending = true;
+
+ /*
+ * Guarantee that the tlb_flush_pending store does not leak into the
+ * critical section updating the page tables
+ */
+ smp_mb__before_spinlock();
+}
+/* Clearing is done after a TLB flush, which also provides a barrier. */
+static inline void clear_tlb_flush_pending(struct mm_struct *mm)
+{
+ barrier();
+ mm->tlb_flush_pending = false;
+}
+#else
+static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
+{
+ return false;
+}
+static inline void set_tlb_flush_pending(struct mm_struct *mm)
+{
+}
+static inline void clear_tlb_flush_pending(struct mm_struct *mm)
+{
+}
+#endif
+
#endif /* _LINUX_MM_TYPES_H */
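The intended pairing, sketched by the editor as a skeleton (the real callers are the change_protection() and THP-migration hunks later in this patch): a page-table updater brackets its work with the new helpers, while a racing path checks the flag before trusting the entries it sees.

	set_tlb_flush_pending(mm);		/* before touching PTEs */
	/* ... modify page tables under the page table lock ... */
	flush_tlb_range(vma, start, end);
	clear_tlb_flush_pending(mm);		/* after the flush */

	/* racing path, e.g. migration: */
	if (mm_tlb_flush_pending(mm))
		flush_tlb_range(vma, start, end);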
int offset, size_t size, int flags);
ssize_t (*splice_read)(struct socket *sock, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len, unsigned int flags);
- void (*set_peek_off)(struct sock *sk, int val);
+ int (*set_peek_off)(struct sock *sk, int val);
};
#define DECLARE_SOCKADDR(type, dst, src) \
unsigned char perm_addr[MAX_ADDR_LEN]; /* permanent hw address */
unsigned char addr_assign_type; /* hw address assignment type */
unsigned char addr_len; /* hardware address length */
- unsigned char neigh_priv_len;
+ unsigned short neigh_priv_len;
unsigned short dev_id; /* Used to differentiate devices
* that share the same link
* layer address
int __must_check pci_assign_resource(struct pci_dev *dev, int i);
int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
int pci_select_bars(struct pci_dev *dev, unsigned long flags);
+bool pci_device_is_present(struct pci_dev *pdev);
/* ROM control related routines */
int pci_enable_rom(struct pci_dev *pdev);
/* Anonymous variables would be nice... */
#define DECLARE_PCI_FIXUP_SECTION(section, name, vendor, device, class, \
class_shift, hook) \
- static const struct pci_fixup __pci_fixup_##name __used \
+ static const struct pci_fixup __PASTE(__pci_fixup_##name,__LINE__) __used \
__attribute__((__section__(#section), aligned((sizeof(void *))))) \
= { vendor, device, class, class_shift, hook };
#define DECLARE_PCI_FIXUP_CLASS_EARLY(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_early, \
- vendor##device##hook, vendor, device, class, class_shift, hook)
+ hook, vendor, device, class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_HEADER(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_header, \
- vendor##device##hook, vendor, device, class, class_shift, hook)
+ hook, vendor, device, class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_FINAL(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_final, \
- vendor##device##hook, vendor, device, class, class_shift, hook)
+ hook, vendor, device, class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_ENABLE(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_enable, \
- vendor##device##hook, vendor, device, class, class_shift, hook)
+ hook, vendor, device, class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_RESUME(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_resume, \
- resume##vendor##device##hook, vendor, device, class, \
+ resume##hook, vendor, device, class, \
class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_RESUME_EARLY(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_resume_early, \
- resume_early##vendor##device##hook, vendor, device, \
+ resume_early##hook, vendor, device, \
class, class_shift, hook)
#define DECLARE_PCI_FIXUP_CLASS_SUSPEND(vendor, device, class, \
class_shift, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_suspend, \
- suspend##vendor##device##hook, vendor, device, class, \
+ suspend##hook, vendor, device, class, \
class_shift, hook)
#define DECLARE_PCI_FIXUP_EARLY(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_early, \
- vendor##device##hook, vendor, device, PCI_ANY_ID, 0, hook)
+ hook, vendor, device, PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_HEADER(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_header, \
- vendor##device##hook, vendor, device, PCI_ANY_ID, 0, hook)
+ hook, vendor, device, PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_FINAL(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_final, \
- vendor##device##hook, vendor, device, PCI_ANY_ID, 0, hook)
+ hook, vendor, device, PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_ENABLE(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_enable, \
- vendor##device##hook, vendor, device, PCI_ANY_ID, 0, hook)
+ hook, vendor, device, PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_RESUME(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_resume, \
- resume##vendor##device##hook, vendor, device, \
+ resume##hook, vendor, device, \
PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_RESUME_EARLY(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_resume_early, \
- resume_early##vendor##device##hook, vendor, device, \
+ resume_early##hook, vendor, device, \
PCI_ANY_ID, 0, hook)
#define DECLARE_PCI_FIXUP_SUSPEND(vendor, device, hook) \
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_suspend, \
- suspend##vendor##device##hook, vendor, device, \
+ suspend##hook, vendor, device, \
PCI_ANY_ID, 0, hook)
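Editor's reading of the fixup-macro rework, offered tentatively: uniqueness of the generated variable now comes from pasting __LINE__ onto the name via __PASTE, so the vendor and device arguments no longer have to be pasteable into an identifier. That lets the same hook be declared for several IDs, or with IDs supplied through macros such as PCI_ANY_ID, without colliding symbol names.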
#ifdef CONFIG_PCI_QUIRKS
* Architecture-specific implementations of sys_reboot commands.
*/
+extern void migrate_to_reboot_cpu(void);
extern void machine_restart(char *cmd);
extern void machine_halt(void);
extern void machine_power_off(void);
extern int shmem_fill_super(struct super_block *sb, void *data, int silent);
extern struct file *shmem_file_setup(const char *name,
loff_t size, unsigned long flags);
+extern struct file *shmem_kernel_file_setup(const char *name, loff_t size,
+ unsigned long flags);
extern int shmem_zero_setup(struct vm_area_struct *);
extern int shmem_lock(struct file *file, int lock, struct user_struct *user);
extern void shmem_unlock_mapping(struct address_space *mapping);
unsigned char *skb_pull_rcsum(struct sk_buff *skb, unsigned int len);
+/**
+ * pskb_trim_rcsum - trim received skb and update checksum
+ * @skb: buffer to trim
+ * @len: new length
+ *
+ * This is exactly the same as pskb_trim except that it ensures the
+ * checksum of received packets is still valid after the operation.
+ */
+
+static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
+{
+ if (likely(len >= skb->len))
+ return 0;
+ if (skb->ip_summed == CHECKSUM_COMPLETE)
+ skb->ip_summed = CHECKSUM_NONE;
+ return __pskb_trim(skb, len);
+}
+
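A hypothetical receive-path caller, sketched by the editor (rx_strip_padding_sketch and datalen are illustrative names): trailing padding is trimmed through pskb_trim_rcsum() so that a CHECKSUM_COMPLETE value computed over the full frame is not left covering the removed bytes. Note that this relocated version simply downgrades the checksum state to CHECKSUM_NONE rather than adjusting skb->csum.

	static int rx_strip_padding_sketch(struct sk_buff *skb,
					   unsigned int datalen)
	{
		/* no-op when datalen >= skb->len; otherwise trims and
		 * downgrades CHECKSUM_COMPLETE to CHECKSUM_NONE */
		return pskb_trim_rcsum(skb, datalen);
	}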
#define skb_queue_walk(queue, skb) \
for (skb = (queue)->next; \
skb != (struct sk_buff *)(queue); \
__wsum skb_checksum(const struct sk_buff *skb, int offset, int len,
__wsum csum);
-/**
- * pskb_trim_rcsum - trim received skb and update checksum
- * @skb: buffer to trim
- * @len: new length
- *
- * This is exactly the same as pskb_trim except that it ensures the
- * checksum of received packets is still valid after the operation.
- */
-
-static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
-{
- if (likely(len >= skb->len))
- return 0;
- if (skb->ip_summed == CHECKSUM_COMPLETE) {
- __wsum adj = skb_checksum(skb, len, skb->len - len, 0);
-
- skb->csum = csum_sub(skb->csum, adj);
- }
- return __pskb_trim(skb, len);
-}
-
static inline void *skb_header_pointer(const struct sk_buff *skb, int offset,
int len, void *buffer)
{
struct vb2_mem_ops {
void *(*alloc)(void *alloc_ctx, unsigned long size, gfp_t gfp_flags);
void (*put)(void *buf_priv);
- struct dma_buf *(*get_dmabuf)(void *buf_priv);
+ struct dma_buf *(*get_dmabuf)(void *buf_priv, unsigned long flags);
void *(*get_userptr)(void *alloc_ctx, unsigned long vaddr,
unsigned long size, int write);
__be32 identification;
};
-#define IP6_MF 0x0001
+#define IP6_MF 0x0001
+#define IP6_OFFSET 0xFFF8
#include <net/sock.h>
/* How many duplicated TSNs have we seen? */
int numduptsns;
- /* Number of seconds of idle time before an association is closed.
- * In the association context, this is really used as a boolean
- * since the real timeout is stored in the timeouts array
- */
- __u32 autoclose;
-
/* These are to support
* "SCTP Extensions for Dynamic Reconfiguration of IP Addresses
* and Enforcement of Flow and Message Limits"
};
struct cg_proto {
- void (*enter_memory_pressure)(struct sock *sk);
struct res_counter memory_allocated; /* Current allocated memory. */
struct percpu_counter sockets_allocated; /* Current number of sockets. */
int memory_pressure;
struct proto *prot = sk->sk_prot;
for (; cg_proto; cg_proto = parent_cg_proto(prot, cg_proto))
- if (cg_proto->memory_pressure)
- cg_proto->memory_pressure = 0;
+ cg_proto->memory_pressure = 0;
}
}
struct proto *prot = sk->sk_prot;
for (; cg_proto; cg_proto = parent_cg_proto(prot, cg_proto))
- cg_proto->enter_memory_pressure(sk);
+ cg_proto->memory_pressure = 1;
}
sk->sk_prot->enter_memory_pressure(sk);
{
struct snd_sg_buf *sgbuf = dmab->private_data;
dma_addr_t addr = sgbuf->table[offset >> PAGE_SHIFT].addr;
- addr &= PAGE_MASK;
+ addr &= ~((dma_addr_t)PAGE_SIZE - 1);
return addr + offset % PAGE_SIZE;
}
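Editor's note with the arithmetic spelled out: PAGE_MASK has type unsigned long, so on a 32-bit kernel with 4 KiB pages it is 0xfffff000, and the old "addr &= PAGE_MASK" truncated a 64-bit dma_addr_t such as 0x123456000 to 0x23456000. The replacement mask ~((dma_addr_t)PAGE_SIZE - 1) is computed at full dma_addr_t width (0xfffffffffffff000) and preserves the high bits.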
#define DRM_VMW_PARAM_FIFO_CAPS 4
#define DRM_VMW_PARAM_MAX_FB_SIZE 5
#define DRM_VMW_PARAM_FIFO_HW_VERSION 6
+#define DRM_VMW_PARAM_MAX_SURF_MEMORY 7
/**
* struct drm_vmw_getparam_arg
*
* { u64 weight; } && PERF_SAMPLE_WEIGHT
* { u64 data_src; } && PERF_SAMPLE_DATA_SRC
+ * { u64 transaction; } && PERF_SAMPLE_TRANSACTION
* };
*/
PERF_RECORD_SAMPLE = 9,
#include <sound/compress_params.h>
-#define SNDRV_COMPRESS_VERSION SNDRV_PROTOCOL_VERSION(0, 1, 1)
+#define SNDRV_COMPRESS_VERSION SNDRV_PROTOCOL_VERSION(0, 1, 2)
/**
* struct snd_compressed_buffer: compressed buffer
* @fragment_size: size of buffer fragment in bytes
struct snd_compr_tstamp {
__u32 byte_offset;
__u32 copied_total;
- snd_pcm_uframes_t pcm_frames;
- snd_pcm_uframes_t pcm_io_frames;
+ __u32 pcm_frames;
+ __u32 pcm_io_frames;
__u32 sampling_rate;
};
config_data.gz
timeconst.h
hz.bc
+x509_certificate_list
###############################################################################
ifeq ($(CONFIG_SYSTEM_TRUSTED_KEYRING),y)
X509_CERTIFICATES-y := $(wildcard *.x509) $(wildcard $(srctree)/*.x509)
-X509_CERTIFICATES-$(CONFIG_MODULE_SIG) += signing_key.x509
-X509_CERTIFICATES := $(sort $(foreach CERT,$(X509_CERTIFICATES-y), \
+X509_CERTIFICATES-$(CONFIG_MODULE_SIG) += $(objtree)/signing_key.x509
+X509_CERTIFICATES-raw := $(sort $(foreach CERT,$(X509_CERTIFICATES-y), \
$(or $(realpath $(CERT)),$(CERT))))
+X509_CERTIFICATES := $(subst $(realpath $(objtree))/,,$(X509_CERTIFICATES-raw))
ifeq ($(X509_CERTIFICATES),)
$(warning *** No X.509 certificates found ***)
targets += $(obj)/.x509.list
$(obj)/.x509.list:
@echo $(X509_CERTIFICATES) >$@
+endif
clean-files := x509_certificate_list .x509.list
-endif
ifeq ($(CONFIG_MODULE_SIG),y)
###############################################################################
if (event->state != PERF_EVENT_STATE_ACTIVE)
return;
+ perf_pmu_disable(event->pmu);
+
event->state = PERF_EVENT_STATE_INACTIVE;
if (event->pending_disable) {
event->pending_disable = 0;
ctx->nr_freq--;
if (event->attr.exclusive || !cpuctx->active_oncpu)
cpuctx->exclusive = 0;
+
+ perf_pmu_enable(event->pmu);
}
static void
struct perf_event_context *ctx)
{
u64 tstamp = perf_event_time(event);
+ int ret = 0;
if (event->state <= PERF_EVENT_STATE_OFF)
return 0;
*/
smp_wmb();
+ perf_pmu_disable(event->pmu);
+
if (event->pmu->add(event, PERF_EF_START)) {
event->state = PERF_EVENT_STATE_INACTIVE;
event->oncpu = -1;
- return -EAGAIN;
+ ret = -EAGAIN;
+ goto out;
}
event->tstamp_running += tstamp - event->tstamp_stopped;
if (event->attr.exclusive)
cpuctx->exclusive = 1;
- return 0;
+out:
+ perf_pmu_enable(event->pmu);
+
+ return ret;
}
static int
if (!event_filter_match(event))
continue;
+ perf_pmu_disable(event->pmu);
+
hwc = &event->hw;
if (hwc->interrupts == MAX_INTERRUPTS) {
}
if (!event->attr.freq || !event->attr.sample_freq)
- continue;
+ goto next;
/*
* stop the event and update event->count
perf_adjust_period(event, period, delta, false);
event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
+ next:
+ perf_pmu_enable(event->pmu);
}
perf_pmu_enable(ctx->pmu);
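Editor's note on the perf hunks above: each per-event state transition (disable, sched-in, unthrottle/frequency adjustment) is now bracketed by perf_pmu_disable()/perf_pmu_enable() on the event's own PMU, so the hardware sees one coherent reprogramming per transition; the goto-based exits ("out", "next") exist to keep those brackets balanced on every path.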
spin_lock_init(&mm->page_table_lock);
mm_init_aio(mm);
mm_init_owner(mm, p);
+ clear_tlb_flush_pending(mm);
if (likely(!mm_alloc_pgd(mm))) {
mm->def_flags = 0;
return -EINVAL;
address -= key->both.offset;
+ if (unlikely(!access_ok(rw, uaddr, sizeof(u32))))
+ return -EFAULT;
+
/*
* PROCESS_PRIVATE futexes are fast.
* As the mm cannot disappear under us and the 'key' only needs
* but access_ok() should be faster than find_vma()
*/
if (!fshared) {
- if (unlikely(!access_ok(VERIFY_WRITE, uaddr, sizeof(u32))))
- return -EFAULT;
key->private.mm = mm;
key->private.address = address;
get_futex_key_refs(key);
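Editor's note: hoisting the access_ok() check above the !fshared fast path means shared futexes now get the same early user-address validation that private ones always had, rather than deferring the failure to the page-fault paths further down.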
put_page(page);
/* serialize against __split_huge_page_splitting() */
local_irq_disable();
- if (likely(__get_user_pages_fast(address, 1, 1, &page) == 1)) {
+ if (likely(__get_user_pages_fast(address, 1, !ro, &page) == 1)) {
page_head = compound_head(page);
/*
* page_head is valid pointer but we must pin
size_t vmcoreinfo_size;
size_t vmcoreinfo_max_size = sizeof(vmcoreinfo_data);
+/* Flag to indicate we are going to kexec a new kernel */
+bool kexec_in_progress = false;
+
/* Location of the reserved area for the crash kernel */
struct resource crashk_res = {
.name = "Crash kernel",
} else
#endif
{
+ kexec_in_progress = true;
kernel_restart_prepare(NULL);
+ migrate_to_reboot_cpu();
printk(KERN_EMERG "Starting new kernel\n");
machine_shutdown();
}
}
EXPORT_SYMBOL(unregister_reboot_notifier);
-static void migrate_to_reboot_cpu(void)
+void migrate_to_reboot_cpu(void)
{
/* The boot cpu is always logical cpu 0 */
int cpu = reboot_cpu;
(vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ)))
continue;
+ /*
+ * Skip inaccessible VMAs to avoid any confusion between
+ * PROT_NONE and NUMA hinting ptes
+ */
+ if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
+ continue;
+
do {
start = max(start, vma->vm_start);
end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
__INITRODATA
+ .align 8
.globl VMLINUX_SYMBOL(system_certificate_list)
VMLINUX_SYMBOL(system_certificate_list):
+__cert_list_start:
.incbin "kernel/x509_certificate_list"
- .globl VMLINUX_SYMBOL(system_certificate_list_end)
-VMLINUX_SYMBOL(system_certificate_list_end):
+__cert_list_end:
+
+ .align 8
+ .globl VMLINUX_SYMBOL(system_certificate_list_size)
+VMLINUX_SYMBOL(system_certificate_list_size):
+#ifdef CONFIG_64BIT
+ .quad __cert_list_end - __cert_list_start
+#else
+ .long __cert_list_end - __cert_list_start
+#endif
EXPORT_SYMBOL_GPL(system_trusted_keyring);
extern __initconst const u8 system_certificate_list[];
-extern __initconst const u8 system_certificate_list_end[];
+extern __initconst const unsigned long system_certificate_list_size;
/*
* Load the compiled-in keys
pr_notice("Loading compiled-in X.509 certificates\n");
- end = system_certificate_list_end;
p = system_certificate_list;
+ end = p + system_certificate_list_size;
while (p < end) {
/* Each cert begins with an ASN.1 SEQUENCE tag and must be more
* than 256 bytes in size.
.owner = GLOBAL_ROOT_UID,
.group = GLOBAL_ROOT_GID,
.proc_inum = PROC_USER_INIT_INO,
-#ifdef CONFIG_KEYS_KERBEROS_CACHE
- .krb_cache_register_sem =
- __RWSEM_INITIALIZER(init_user_ns.krb_cache_register_sem),
+#ifdef CONFIG_PERSISTENT_KEYRINGS
+ .persistent_keyring_register_sem =
+ __RWSEM_INITIALIZER(init_user_ns.persistent_keyring_register_sem),
#endif
};
EXPORT_SYMBOL_GPL(init_user_ns);
return false;
}
-static bool __flush_work(struct work_struct *work)
-{
- struct wq_barrier barr;
-
- if (start_flush_work(work, &barr)) {
- wait_for_completion(&barr.done);
- destroy_work_on_stack(&barr.work);
- return true;
- } else {
- return false;
- }
-}
-
/**
* flush_work - wait for a work to finish executing the last queueing instance
* @work: the work to flush
*/
bool flush_work(struct work_struct *work)
{
+ struct wq_barrier barr;
+
lock_map_acquire(&work->lockdep_map);
lock_map_release(&work->lockdep_map);
- return __flush_work(work);
+ if (start_flush_work(work, &barr)) {
+ wait_for_completion(&barr.done);
+ destroy_work_on_stack(&barr.work);
+ return true;
+ } else {
+ return false;
+ }
}
EXPORT_SYMBOL_GPL(flush_work);
INIT_WORK_ONSTACK(&wfc.work, work_for_cpu_fn);
schedule_work_on(cpu, &wfc.work);
-
- /*
- * The work item is on-stack and can't lead to deadlock through
- * flushing. Use __flush_work() to avoid spurious lockdep warnings
- * when work_on_cpu()s are nested.
- */
- __flush_work(&wfc.work);
-
+ flush_work(&wfc.work);
return wfc.ret;
}
EXPORT_SYMBOL_GPL(work_on_cpu);
pr_devel("all leaves cluster together\n");
diff = INT_MAX;
for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
- int x = ops->diff_objects(assoc_array_ptr_to_leaf(edit->leaf),
- assoc_array_ptr_to_leaf(node->slots[i]));
+ int x = ops->diff_objects(assoc_array_ptr_to_leaf(node->slots[i]),
+ index_key);
if (x < diff) {
BUG_ON(x < 0);
diff = x;
config MEM_SOFT_DIRTY
bool "Track memory changes"
- depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY
+ depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
select PROC_PAGE_MONITOR
help
This option enables memory changes tracking by introducing a
bool migrate_scanner)
{
struct zone *zone = cc->zone;
+
+ if (cc->ignore_skip_hint)
+ return;
+
if (!page)
return;
ret = 0;
goto out_unlock;
}
+
+ /* mmap_sem prevents this happening but warn if that changes */
+ WARN_ON(pmd_trans_migrating(pmd));
+
if (unlikely(pmd_trans_splitting(pmd))) {
/* split huge page running from under us */
spin_unlock(src_ptl);
if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
return ERR_PTR(-EFAULT);
+ /* Full NUMA hinting faults to serialise migration in fault paths */
+ if ((flags & FOLL_NUMA) && pmd_numa(*pmd))
+ goto out;
+
page = pmd_page(*pmd);
VM_BUG_ON(!PageHead(page));
if (flags & FOLL_TOUCH) {
if (unlikely(!pmd_same(pmd, *pmdp)))
goto out_unlock;
+ /*
+ * If there are potential migrations, wait for completion and retry
+ * without disrupting NUMA hinting information. Do not relock and
+ * check_same as the page may no longer be mapped.
+ */
+ if (unlikely(pmd_trans_migrating(*pmdp))) {
+ spin_unlock(ptl);
+ wait_migrate_huge_page(vma->anon_vma, pmdp);
+ goto out;
+ }
+
page = pmd_page(pmd);
BUG_ON(is_huge_zero_page(page));
page_nid = page_to_nid(page);
/* If the page was locked, there are no parallel migrations */
if (page_locked)
goto clear_pmdnuma;
+ }
- /*
- * Otherwise wait for potential migrations and retry. We do
- * relock and check_same as the page may no longer be mapped.
- * As the fault is being retried, do not account for it.
- */
+ /* Migration could have started since the pmd_trans_migrating check */
+ if (!page_locked) {
spin_unlock(ptl);
wait_on_page_locked(page);
page_nid = -1;
goto out;
}
- /* Page is misplaced, serialise migrations and parallel THP splits */
+ /*
+ * Page is misplaced. Page lock serialises migrations. Acquire anon_vma
+ * to serialise splits
+ */
get_page(page);
spin_unlock(ptl);
- if (!page_locked)
- lock_page(page);
anon_vma = page_lock_anon_vma_read(page);
/* Confirm the PMD did not change while page_table_lock was released */
goto out_unlock;
}
+ /* Bail if we fail to protect against THP splits for any reason */
+ if (unlikely(!anon_vma)) {
+ put_page(page);
+ page_nid = -1;
+ goto clear_pmdnuma;
+ }
+
/*
* Migrate the THP to the requested node, returns with page unlocked
* and pmd_numa cleared.
pmd = pmdp_get_and_clear(mm, old_addr, old_pmd);
VM_BUG_ON(!pmd_none(*new_pmd));
set_pmd_at(mm, new_addr, new_pmd, pmd_mksoft_dirty(pmd));
- if (new_ptl != old_ptl)
+ if (new_ptl != old_ptl) {
+ pgtable_t pgtable;
+
+ /*
+ * Move preallocated PTE page table if new_pmd is on
+ * different PMD page table.
+ */
+ pgtable = pgtable_trans_huge_withdraw(mm, old_pmd);
+ pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
+
spin_unlock(new_ptl);
+ }
spin_unlock(old_ptl);
}
out:
ret = 1;
if (!prot_numa) {
entry = pmdp_get_and_clear(mm, addr, pmd);
+ if (pmd_numa(entry))
+ entry = pmd_mknonnuma(entry);
entry = pmd_modify(entry, newprot);
ret = HPAGE_PMD_NR;
BUG_ON(pmd_write(entry));
*/
if (!is_huge_zero_page(page) &&
!pmd_numa(*pmd)) {
- entry = pmdp_get_and_clear(mm, addr, pmd);
+ entry = *pmd;
entry = pmd_mknuma(entry);
ret = HPAGE_PMD_NR;
}
goto bypass;
if (unlikely(task_in_memcg_oom(current)))
- goto bypass;
+ goto nomem;
+
+ if (gfp_mask & __GFP_NOFAIL)
+ oom = false;
/*
* We always charge the cgroup the mm_struct belongs to.
static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ /*
+ * XXX: css_offline() would be where we should reparent all
+ * memory to prepare the cgroup for destruction. However,
+ * memcg does not do css_tryget() and res_counter charging
+ * under the same RCU lock region, which means that charging
+ * could race with offlining. Offlining only happens to
+ * cgroups with no tasks in them but charges can show up
+ * without any tasks from the swapin path when the target
+ * memcg is looked up from the swapout record and not from the
+ * current task as it usually is. A race like this can leak
+ * charges and put pages with stale cgroup pointers into
+ * circulation:
+ *
+ * #0 #1
+ * lookup_swap_cgroup_id()
+ * rcu_read_lock()
+ * mem_cgroup_lookup()
+ * css_tryget()
+ * rcu_read_unlock()
+ * disable css_tryget()
+ * call_rcu()
+ * offline_css()
+ * reparent_charges()
+ * res_counter_charge()
+ * css_put()
+ * css_free()
+ * pc->mem_cgroup = dead memcg
+ * add page to lru
+ *
+ * The bulk of the charges are still moved in offline_css() to
+ * avoid pinning a lot of pages in case a long-term reference
+ * like a swapout record is deferring the css_free() to long
+ * after offlining. But this makes sure we catch any charges
+ * made after offlining:
+ */
+ mem_cgroup_reparent_charges(memcg);
memcg_destroy_kmem(memcg);
__mem_cgroup_free(memcg);
if (ret > 0)
ret = -EIO;
} else {
- set_page_hwpoison_huge_page(hpage);
- dequeue_hwpoisoned_huge_page(hpage);
- atomic_long_add(1 << compound_order(hpage),
- &num_poisoned_pages);
+ /* overcommit hugetlb page will be freed to buddy */
+ if (PageHuge(page)) {
+ set_page_hwpoison_huge_page(hpage);
+ dequeue_hwpoisoned_huge_page(hpage);
+ atomic_long_add(1 << compound_order(hpage),
+ &num_poisoned_pages);
+ } else {
+ SetPageHWPoison(page);
+ atomic_long_inc(&num_poisoned_pages);
+ }
}
return ret;
}
break;
vma = vma->vm_next;
}
+
+ if (PageHuge(page)) {
+ if (vma)
+ return alloc_huge_page_noerr(vma, address, 1);
+ else
+ return NULL;
+ }
/*
- * queue_pages_range() confirms that @page belongs to some vma,
- * so vma shouldn't be NULL.
+ * if !vma, alloc_page_vma() will use task or system default policy
*/
- BUG_ON(!vma);
-
- if (PageHuge(page))
- return alloc_huge_page_noerr(vma, address, 1);
return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
}
#else
if (nr_failed && (flags & MPOL_MF_STRICT))
err = -EIO;
} else
- putback_lru_pages(&pagelist);
+ putback_movable_pages(&pagelist);
up_write(&mm->mmap_sem);
mpol_out:
#include <linux/hugetlb_cgroup.h>
#include <linux/gfp.h>
#include <linux/balloon_compaction.h>
+#include <linux/mmu_notifier.h>
#include <asm/tlbflush.h>
return 1;
}
+bool pmd_trans_migrating(pmd_t pmd)
+{
+ struct page *page = pmd_page(pmd);
+ return PageLocked(page);
+}
+
+void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd)
+{
+ struct page *page = pmd_page(*pmd);
+ wait_on_page_locked(page);
+}
+
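Editor's note: both helpers above piggyback on the page lock that THP migration holds on the old head page for the duration of the copy, so "migrating" is detected as nothing more than pmd_page() being locked; no new state had to be added to the pmd itself.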
/*
* Attempt to migrate a misplaced page to the specified destination
* node. Caller is expected to have an elevated reference count on
struct page *page, int node)
{
spinlock_t *ptl;
- unsigned long haddr = address & HPAGE_PMD_MASK;
pg_data_t *pgdat = NODE_DATA(node);
int isolated = 0;
struct page *new_page = NULL;
struct mem_cgroup *memcg = NULL;
int page_lru = page_is_file_cache(page);
+ unsigned long mmun_start = address & HPAGE_PMD_MASK;
+ unsigned long mmun_end = mmun_start + HPAGE_PMD_SIZE;
+ pmd_t orig_entry;
/*
* Rate-limit the amount of data that is being migrated to a node.
goto out_fail;
}
+ if (mm_tlb_flush_pending(mm))
+ flush_tlb_range(vma, mmun_start, mmun_end);
+
/* Prepare a page as a migration target */
__set_page_locked(new_page);
SetPageSwapBacked(new_page);
WARN_ON(PageLRU(new_page));
/* Recheck the target PMD */
+ mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
ptl = pmd_lock(mm, pmd);
- if (unlikely(!pmd_same(*pmd, entry))) {
+ if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
+fail_putback:
spin_unlock(ptl);
+ mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
/* Reverse changes made by migrate_page_copy() */
if (TestClearPageActive(new_page))
putback_lru_page(page);
mod_zone_page_state(page_zone(page),
NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR);
- goto out_fail;
+
+ goto out_unlock;
}
/*
*/
mem_cgroup_prepare_migration(page, new_page, &memcg);
+ orig_entry = *pmd;
entry = mk_pmd(new_page, vma->vm_page_prot);
- entry = pmd_mknonnuma(entry);
- entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
entry = pmd_mkhuge(entry);
+ entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
- pmdp_clear_flush(vma, haddr, pmd);
- set_pmd_at(mm, haddr, pmd, entry);
- page_add_new_anon_rmap(new_page, vma, haddr);
+ /*
+ * Clear the old entry under pagetable lock and establish the new PTE.
+ * Any parallel GUP will either observe the old page blocking on the
+ * page lock, block on the page table lock or observe the new page.
+ * The SetPageUptodate on the new page and page_add_new_anon_rmap
+ * guarantee the copy is visible before the pagetable update.
+ */
+ flush_cache_range(vma, mmun_start, mmun_end);
+ page_add_new_anon_rmap(new_page, vma, mmun_start);
+ pmdp_clear_flush(vma, mmun_start, pmd);
+ set_pmd_at(mm, mmun_start, pmd, entry);
+ flush_tlb_range(vma, mmun_start, mmun_end);
update_mmu_cache_pmd(vma, address, &entry);
+
+ if (page_count(page) != 2) {
+ set_pmd_at(mm, mmun_start, pmd, orig_entry);
+ flush_tlb_range(vma, mmun_start, mmun_end);
+ update_mmu_cache_pmd(vma, address, &entry);
+ page_remove_rmap(new_page);
+ goto fail_putback;
+ }
+
page_remove_rmap(page);
+
/*
* Finish the charge transaction under the page table lock to
* prevent split_huge_page() from dividing up the charge
*/
mem_cgroup_end_migration(memcg, page, new_page, true);
spin_unlock(ptl);
+ mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
unlock_page(new_page);
unlock_page(page);
out_fail:
count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR);
out_dropref:
- entry = pmd_mknonnuma(entry);
- set_pmd_at(mm, haddr, pmd, entry);
- update_mmu_cache_pmd(vma, address, &entry);
+ ptl = pmd_lock(mm, pmd);
+ if (pmd_same(*pmd, entry)) {
+ entry = pmd_mknonnuma(entry);
+ set_pmd_at(mm, mmun_start, pmd, entry);
+ update_mmu_cache_pmd(vma, address, &entry);
+ }
+ spin_unlock(ptl);
+out_unlock:
unlock_page(page);
put_page(page);
return 0;
pte_t ptent;
bool updated = false;
- ptent = ptep_modify_prot_start(mm, addr, pte);
if (!prot_numa) {
+ ptent = ptep_modify_prot_start(mm, addr, pte);
+ if (pte_numa(ptent))
+ ptent = pte_mknonnuma(ptent);
ptent = pte_modify(ptent, newprot);
updated = true;
} else {
struct page *page;
+ ptent = *pte;
page = vm_normal_page(vma, addr, oldpte);
if (page) {
if (!pte_numa(oldpte)) {
ptent = pte_mknuma(ptent);
+ set_pte_at(mm, addr, pte, ptent);
updated = true;
}
}
if (updated)
pages++;
- ptep_modify_prot_commit(mm, addr, pte, ptent);
+
+	/* Only the !prot_numa case always clears the pte */
+ if (!prot_numa)
+ ptep_modify_prot_commit(mm, addr, pte, ptent);
} else if (IS_ENABLED(CONFIG_MIGRATION) && !pte_file(oldpte)) {
swp_entry_t entry = pte_to_swp_entry(oldpte);
BUG_ON(addr >= end);
pgd = pgd_offset(mm, addr);
flush_cache_range(vma, addr, end);
+ set_tlb_flush_pending(mm);
do {
next = pgd_addr_end(addr, end);
if (pgd_none_or_clear_bad(pgd))
/* Only flush the TLB if we actually modified any entries: */
if (pages)
flush_tlb_range(vma, start, end);
+ clear_tlb_flush_pending(mm);
return pages;
}
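set_tlb_flush_pending() and clear_tlb_flush_pending() bracket the whole protection walk so that a racing path, such as the mm_tlb_flush_pending() check added to the migration code earlier in this series, can tell that stale TLB entries may still exist and flush before trusting the page tables. A hedged user-space sketch of that flag protocol with C11 atomics (fake_mm and the helpers are stand-ins; in the kernel the ordering guarantees come from the page-table lock):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for mm_struct; the real flag sits next to the page tables. */
    struct fake_mm {
            atomic_bool tlb_flush_pending;
    };

    static void set_tlb_flush_pending(struct fake_mm *mm)
    {
            atomic_store(&mm->tlb_flush_pending, true);
    }

    static void clear_tlb_flush_pending(struct fake_mm *mm)
    {
            atomic_store(&mm->tlb_flush_pending, false);
    }

    static bool mm_tlb_flush_pending(struct fake_mm *mm)
    {
            return atomic_load(&mm->tlb_flush_pending);
    }

    int main(void)
    {
            struct fake_mm mm = { ATOMIC_VAR_INIT(false) };

            set_tlb_flush_pending(&mm);
            /* ... PTEs updated here; flush_tlb_range() not yet issued ... */
            if (mm_tlb_flush_pending(&mm))
                    puts("racing path: flush before relying on the PTEs");
            /* ... flush_tlb_range() ... */
            clear_tlb_flush_pending(&mm);
            return 0;
    }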
* back to remote zones that do not partake in the
* fairness round-robin cycle of this zonelist.
*/
- if (alloc_flags & ALLOC_WMARK_LOW) {
+ if ((alloc_flags & ALLOC_WMARK_LOW) &&
+ (gfp_mask & GFP_MOVABLE_MASK)) {
if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)
continue;
if (zone_reclaim_mode &&
pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
pte_t *ptep)
{
+ struct mm_struct *mm = (vma)->vm_mm;
pte_t pte;
- pte = ptep_get_and_clear((vma)->vm_mm, address, ptep);
- if (pte_accessible(pte))
+ pte = ptep_get_and_clear(mm, address, ptep);
+ if (pte_accessible(mm, pte))
flush_tlb_page(vma, address);
return pte;
}
void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
+ pmd_t entry = *pmdp;
+ if (pmd_numa(entry))
+ entry = pmd_mknonnuma(entry);
-	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(*pmdp));
+	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
}
spinlock_t *ptl;
if (unlikely(PageHuge(page))) {
+ /* when pud is not present, pte will be NULL */
pte = huge_pte_offset(mm, address);
+ if (!pte)
+ return NULL;
+
ptl = huge_pte_lockptr(page_hstate(page), mm, pte);
goto check;
}
.d_dname = simple_dname
};
-/**
- * shmem_file_setup - get an unlinked file living in tmpfs
- * @name: name for dentry (to be seen in /proc/<pid>/maps
- * @size: size to be set for the file
- * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
- */
-struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags)
+static struct file *__shmem_file_setup(const char *name, loff_t size,
+ unsigned long flags, unsigned int i_flags)
{
struct file *res;
struct inode *inode;
if (!inode)
goto put_dentry;
+ inode->i_flags |= i_flags;
d_instantiate(path.dentry, inode);
inode->i_size = size;
clear_nlink(inode); /* It is unlinked */
shmem_unacct_size(flags, size);
return res;
}
+
+/**
+ * shmem_kernel_file_setup - get an unlinked file living in tmpfs which must be
+ * kernel internal. There will be NO LSM permission checks against the
+ * underlying inode. So users of this interface must do LSM checks at a
+ * higher layer. The one user is the big_key implementation. LSM checks
+ * are provided at the key level rather than the inode level.
+ * @name: name for dentry (to be seen in /proc/<pid>/maps)
+ * @size: size to be set for the file
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
+ */
+struct file *shmem_kernel_file_setup(const char *name, loff_t size, unsigned long flags)
+{
+ return __shmem_file_setup(name, size, flags, S_PRIVATE);
+}
+
+/**
+ * shmem_file_setup - get an unlinked file living in tmpfs
+ * @name: name for dentry (to be seen in /proc/<pid>/maps)
+ * @size: size to be set for the file
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
+ */
+struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags)
+{
+ return __shmem_file_setup(name, size, flags, 0);
+}
EXPORT_SYMBOL_GPL(shmem_file_setup);
/**
int br_handle_frame_finish(struct sk_buff *skb);
rx_handler_result_t br_handle_frame(struct sk_buff **pskb);
+static inline bool br_rx_handler_check_rcu(const struct net_device *dev)
+{
+ return rcu_dereference(dev->rx_handler) == br_handle_frame;
+}
+
+static inline struct net_bridge_port *br_port_get_check_rcu(const struct net_device *dev)
+{
+ return br_rx_handler_check_rcu(dev) ? br_port_get_rcu(dev) : NULL;
+}
+
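br_port_get_check_rcu() spells out a defensive pattern: rx_handler_data is only meaningful to whichever subsystem registered the rx_handler, so the handler's identity is verified before the data is interpreted as a bridge port. A plain-C sketch of the tag-check-before-cast idiom with RCU elided (every name here is illustrative):

    #include <stdio.h>

    struct sk_buff;
    typedef int (*rx_handler_t)(struct sk_buff **pskb);

    struct device {
            rx_handler_t rx_handler;      /* whoever claimed the hook */
            void        *rx_handler_data; /* meaningful only to the owner */
    };

    static int bridge_handle_frame(struct sk_buff **pskb)
    {
            (void)pskb;
            return 0;
    }

    struct bridge_port { int port_no; };

    /* Trust rx_handler_data only if the handler is provably ours. */
    static struct bridge_port *bridge_port_get_check(const struct device *dev)
    {
            return dev->rx_handler == bridge_handle_frame ?
                   dev->rx_handler_data : NULL;
    }

    int main(void)
    {
            struct bridge_port port = { .port_no = 7 };
            struct device dev = { bridge_handle_frame, &port };

            struct bridge_port *p = bridge_port_get_check(&dev);
            printf("port %d\n", p ? p->port_no : -1);

            dev.rx_handler = NULL;       /* another subsystem owns the hook */
            printf("%s\n", bridge_port_get_check(&dev) ?
                   "still a bridge port" : "not a bridge port");
            return 0;
    }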
/* br_ioctl.c */
int br_dev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
int br_ioctl_deviceless_stub(struct net *net, unsigned int cmd,
if (buf[0] != 0 || buf[1] != 0 || buf[2] != 0)
goto err;
- p = br_port_get_rcu(dev);
+ p = br_port_get_check_rcu(dev);
if (!p)
goto err;
.hdrsize = 0,
.name = "NET_DM",
.version = 2,
- .maxattr = NET_DM_CMD_MAX,
};
static DEFINE_PER_CPU(struct per_cpu_dm_data, dm_cpu_data);
neigh->parms->reachable_time :
0)));
neigh->nud_state = new;
+ notify = 1;
}
if (lladdr != neigh->ha) {
skb->tstamp.tv64 = 0;
skb->pkt_type = PACKET_HOST;
skb->skb_iif = 0;
+ skb->local_df = 0;
skb_dst_drop(skb);
skb->mark = 0;
secpath_reset(skb);
case SO_PEEK_OFF:
if (sock->ops->set_peek_off)
- sock->ops->set_peek_off(sk, val);
+ ret = sock->ops->set_peek_off(sk, val);
else
ret = -EOPNOTSUPP;
break;
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- usin->sin6_addr = flowlabel->dst;
fl6_sock_release(flowlabel);
}
}
static bool fib4_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg)
{
struct fib_result *result = (struct fib_result *) arg->result;
- struct net_device *dev = result->fi->fib_dev;
+ struct net_device *dev = NULL;
+
+ if (result->fi)
+ dev = result->fi->fib_dev;
/* do not accept result if the route does
* not meet the required prefix length
static struct xt_target synproxy_tg4_reg __read_mostly = {
.name = "SYNPROXY",
.family = NFPROTO_IPV4,
+ .hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD),
.target = synproxy_tg4,
.targetsize = sizeof(struct xt_synproxy_info),
.checkentry = synproxy_tg4_check,
{
const struct nft_reject *priv = nft_expr_priv(expr);
- if (nla_put_be32(skb, NFTA_REJECT_TYPE, priv->type))
+ if (nla_put_be32(skb, NFTA_REJECT_TYPE, htonl(priv->type)))
goto nla_put_failure;
switch (priv->type) {
#include <linux/memcontrol.h>
#include <linux/module.h>
-static void memcg_tcp_enter_memory_pressure(struct sock *sk)
-{
- if (sk->sk_cgrp->memory_pressure)
- sk->sk_cgrp->memory_pressure = 1;
-}
-EXPORT_SYMBOL(memcg_tcp_enter_memory_pressure);
-
int tcp_init_cgroup(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
{
/*
__be16 sport, __be16 dport,
struct udp_table *udptable)
{
- struct sock *sk;
const struct iphdr *iph = ip_hdr(skb);
- if (unlikely(sk = skb_steal_sock(skb)))
- return sk;
- else
- return __udp4_lib_lookup(dev_net(skb_dst(skb)->dev), iph->saddr, sport,
- iph->daddr, dport, inet_iif(skb),
- udptable);
+ return __udp4_lib_lookup(dev_net(skb_dst(skb)->dev), iph->saddr, sport,
+ iph->daddr, dport, inet_iif(skb),
+ udptable);
}
struct sock *udp4_lib_lookup(struct net *net, __be32 saddr, __be16 sport,
kfree_skb(skb1);
}
-static void udp_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
+/* For TCP sockets, sk_rx_dst is protected by socket lock
+ * For UDP, we use xchg() to guard against concurrent changes.
+ */
+static void udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
{
- struct dst_entry *dst = skb_dst(skb);
+ struct dst_entry *old;
dst_hold(dst);
- sk->sk_rx_dst = dst;
+ old = xchg(&sk->sk_rx_dst, dst);
+ dst_release(old);
}
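The new comment is the crux of this hunk: without the socket lock, two CPUs can race to install sk_rx_dst, so the update must be a single atomic exchange with exactly one release of whatever pointer was displaced. A small user-space sketch of the xchg-and-release idiom using C11 atomic_exchange and a reference count (dst_hold/dst_release are modeled here, not the kernel functions):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct dst { atomic_int refcnt; };

    static struct dst *dst_alloc(void)
    {
            struct dst *d = malloc(sizeof(*d));
            atomic_init(&d->refcnt, 1);
            return d;
    }

    static void dst_hold(struct dst *d)
    {
            atomic_fetch_add(&d->refcnt, 1);
    }

    static void dst_release(struct dst *d)
    {
            if (d && atomic_fetch_sub(&d->refcnt, 1) == 1)
                    free(d);
    }

    /* The udp_sk_rx_dst_set() pattern: take a reference, publish with an
     * atomic exchange, then drop whatever was cached before. */
    static void cache_set(struct dst *_Atomic *slot, struct dst *d)
    {
            dst_hold(d);
            dst_release(atomic_exchange(slot, d));
    }

    int main(void)
    {
            struct dst *_Atomic slot = NULL;
            struct dst *a = dst_alloc(), *b = dst_alloc();

            cache_set(&slot, a);
            cache_set(&slot, b);  /* a's cached reference dropped here */
            dst_release(atomic_exchange(&slot, NULL));
            dst_release(a);
            dst_release(b);
            puts("one release per reference, no double free");
            return 0;
    }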
/*
if (udp4_csum_init(skb, uh, proto))
goto csum_error;
- if (skb->sk) {
+ sk = skb_steal_sock(skb);
+ if (sk) {
+ struct dst_entry *dst = skb_dst(skb);
int ret;
- sk = skb->sk;
- if (unlikely(sk->sk_rx_dst == NULL))
- udp_sk_rx_dst_set(sk, skb);
+ if (unlikely(sk->sk_rx_dst != dst))
+ udp_sk_rx_dst_set(sk, dst);
ret = udp_queue_rcv_skb(sk, skb);
-
+ sock_put(sk);
/* a return value > 0 means to resubmit the input, but
* it wants the return to be -protocol, or 0
*/
void udp_v4_early_demux(struct sk_buff *skb)
{
- const struct iphdr *iph = ip_hdr(skb);
- const struct udphdr *uh = udp_hdr(skb);
+ struct net *net = dev_net(skb->dev);
+ const struct iphdr *iph;
+ const struct udphdr *uh;
struct sock *sk;
struct dst_entry *dst;
- struct net *net = dev_net(skb->dev);
int dif = skb->dev->ifindex;
/* validate the packet */
if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct udphdr)))
return;
+ iph = ip_hdr(skb);
+ uh = udp_hdr(skb);
+
if (skb->pkt_type == PACKET_BROADCAST ||
skb->pkt_type == PACKET_MULTICAST)
sk = __udp4_lib_mcast_demux_lookup(net, uh->dest, iph->daddr,
if (sp_ifa->rt)
continue;
- sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, 0);
+ sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, false);
/* Failure cases are ignored */
if (!IS_ERR(sp_rt)) {
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- usin->sin6_addr = flowlabel->dst;
}
}
static bool fib6_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg)
{
struct rt6_info *rt = (struct rt6_info *) arg->result;
- struct net_device *dev = rt->rt6i_idev->dev;
+ struct net_device *dev = NULL;
+
+ if (rt->rt6i_idev)
+ dev = rt->rt6i_idev->dev;
+
/* do not accept result if the route does
* not meet the required prefix length
*/
ri->prefix_len == 0)
continue;
#endif
+ if (ri->prefix_len == 0 &&
+ !in6_dev->cnf.accept_ra_defrtr)
+ continue;
if (ri->prefix_len > in6_dev->cnf.accept_ra_rt_info_max_plen)
continue;
rt6_route_rcv(skb->dev, (u8*)p, (p->nd_opt_len) << 3,
static struct xt_target synproxy_tg6_reg __read_mostly = {
.name = "SYNPROXY",
.family = NFPROTO_IPV6,
+ .hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD),
.target = synproxy_tg6,
.targetsize = sizeof(struct xt_synproxy_info),
.checkentry = synproxy_tg6_check,
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- daddr = &flowlabel->dst;
}
}
static int ip6_pkt_discard(struct sk_buff *skb);
static int ip6_pkt_discard_out(struct sk_buff *skb);
+static int ip6_pkt_prohibit(struct sk_buff *skb);
+static int ip6_pkt_prohibit_out(struct sk_buff *skb);
static void ip6_link_failure(struct sk_buff *skb);
static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
struct sk_buff *skb, u32 mtu);
#ifdef CONFIG_IPV6_MULTIPLE_TABLES
-static int ip6_pkt_prohibit(struct sk_buff *skb);
-static int ip6_pkt_prohibit_out(struct sk_buff *skb);
-
static const struct rt6_info ip6_prohibit_entry_template = {
.dst = {
.__refcnt = ATOMIC_INIT(1),
goto out;
}
}
- rt->dst.output = ip6_pkt_discard_out;
- rt->dst.input = ip6_pkt_discard;
rt->rt6i_flags = RTF_REJECT|RTF_NONEXTHOP;
switch (cfg->fc_type) {
case RTN_BLACKHOLE:
rt->dst.error = -EINVAL;
+ rt->dst.output = dst_discard;
+ rt->dst.input = dst_discard;
break;
case RTN_PROHIBIT:
rt->dst.error = -EACCES;
+ rt->dst.output = ip6_pkt_prohibit_out;
+ rt->dst.input = ip6_pkt_prohibit;
break;
case RTN_THROW:
- rt->dst.error = -EAGAIN;
- break;
default:
- rt->dst.error = -ENETUNREACH;
+ rt->dst.error = (cfg->fc_type == RTN_THROW) ? -EAGAIN
+ : -ENETUNREACH;
+ rt->dst.output = ip6_pkt_discard_out;
+ rt->dst.input = ip6_pkt_discard;
break;
}
goto install_route;
return ip6_pkt_drop(skb, ICMPV6_NOROUTE, IPSTATS_MIB_OUTNOROUTES);
}
-#ifdef CONFIG_IPV6_MULTIPLE_TABLES
-
static int ip6_pkt_prohibit(struct sk_buff *skb)
{
return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_INNOROUTES);
return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_OUTNOROUTES);
}
-#endif
-
/*
* Allocate a dst for local (unicast / anycast) address.
*/
bool anycast)
{
struct net *net = dev_net(idev->dev);
- struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev, 0, NULL);
-
- if (!rt) {
- net_warn_ratelimited("Maximum number of routes reached, consider increasing route/max_size\n");
+ struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev,
+ DST_NOCOUNT, NULL);
+ if (!rt)
return ERR_PTR(-ENOMEM);
- }
in6_dev_hold(idev);
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- usin->sin6_addr = flowlabel->dst;
fl6_sock_release(flowlabel);
}
}
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- daddr = &flowlabel->dst;
}
}
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
if (flowlabel == NULL)
return -EINVAL;
- daddr = &flowlabel->dst;
}
}
changed |=
ieee80211_mps_set_sta_local_pm(sta,
params->local_pm);
- ieee80211_bss_info_change_notify(sdata, changed);
+ ieee80211_mbss_info_change_notify(sdata, changed);
#endif
}
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr);
- if (sdata->vif.type != NL80211_IFTYPE_STATION &&
- sdata->vif.type != NL80211_IFTYPE_MESH_POINT)
+ if (sdata->vif.type != NL80211_IFTYPE_STATION)
return -EOPNOTSUPP;
if (!(local->hw.flags & IEEE80211_HW_SUPPORTS_PS))
params->chandef.chan->band)
return -EINVAL;
+ ifmsh->chsw_init = true;
+ if (!ifmsh->pre_value)
+ ifmsh->pre_value = 1;
+ else
+ ifmsh->pre_value++;
+
err = ieee80211_mesh_csa_beacon(sdata, params, true);
- if (err < 0)
+ if (err < 0) {
+ ifmsh->chsw_init = false;
return err;
+ }
break;
#endif
default:
if (err)
return false;
+ /* channel switch is not supported, disconnect */
+ if (!(sdata->local->hw.wiphy->flags & WIPHY_FLAG_HAS_CHANNEL_SWITCH))
+ goto disconnect;
+
params.count = csa_ie.count;
params.chandef = csa_ie.chandef;
u8 mode;
u8 count;
u8 ttl;
+ u16 pre_value;
};
/* Parsed Information Elements */
sdata->vif.bss_conf.bssid = NULL;
break;
case NL80211_IFTYPE_AP_VLAN:
- break;
case NL80211_IFTYPE_P2P_DEVICE:
sdata->vif.bss_conf.bssid = sdata->vif.addr;
break;
wiphy_debug(local->hw.wiphy, "Failed to initialize wep: %d\n",
result);
+ local->hw.conf.flags = IEEE80211_CONF_IDLE;
+
ieee80211_led_init(local);
rtnl_lock();
cancel_work_sync(&local->restart_work);
cancel_work_sync(&local->reconfig_filter);
+ flush_work(&local->sched_scan_stopped_work);
ieee80211_clear_tx_pending(local);
rate_control_deinitialize(local);
params.chandef.chan->center_freq);
params.block_tx = csa_ie.mode & WLAN_EID_CHAN_SWITCH_PARAM_TX_RESTRICT;
- if (beacon)
+ if (beacon) {
ifmsh->chsw_ttl = csa_ie.ttl - 1;
- else
- ifmsh->chsw_ttl = 0;
+ if (ifmsh->pre_value >= csa_ie.pre_value)
+ return false;
+ ifmsh->pre_value = csa_ie.pre_value;
+ }
- if (ifmsh->chsw_ttl > 0)
+ if (ifmsh->chsw_ttl < ifmsh->mshcfg.dot11MeshTTL) {
if (ieee80211_mesh_csa_beacon(sdata, ¶ms, false) < 0)
return false;
+ } else {
+ return false;
+ }
sdata->csa_radar_required = params.radar_required;
offset_ttl = (len < 42) ? 7 : 10;
*(pos + offset_ttl) -= 1;
*(pos + offset_ttl + 1) &= ~WLAN_EID_CHAN_SWITCH_PARAM_INITIATOR;
- sdata->u.mesh.chsw_ttl = *(pos + offset_ttl);
memcpy(mgmt_fwd, mgmt, len);
eth_broadcast_addr(mgmt_fwd->da);
u16 pre_value;
bool fwd_csa = true;
size_t baselen;
- u8 *pos, ttl;
+ u8 *pos;
if (mgmt->u.action.u.measurement.action_code !=
WLAN_ACTION_SPCT_CHL_SWITCH)
u.action.u.chan_switch.variable);
ieee802_11_parse_elems(pos, len - baselen, false, &elems);
- ttl = elems.mesh_chansw_params_ie->mesh_ttl;
- if (!--ttl)
+ ifmsh->chsw_ttl = elems.mesh_chansw_params_ie->mesh_ttl;
+ if (!--ifmsh->chsw_ttl)
fwd_csa = false;
pre_value = le16_to_cpu(elems.mesh_chansw_params_ie->mesh_pre_value);
if (ifmgd->flags & IEEE80211_STA_CONNECTION_POLL)
already = true;
+ ifmgd->flags |= IEEE80211_STA_CONNECTION_POLL;
+
mutex_unlock(&sdata->local->mtx);
if (already)
nsecs = 1000 * mi->overhead / MINSTREL_TRUNC(mi->avg_ampdu_len);
nsecs += minstrel_mcs_groups[group].duration[rate];
- tp = 1000000 * ((mr->probability * 1000) / nsecs);
+ tp = 1000000 * ((prob * 1000) / nsecs);
mr->cur_tp = MINSTREL_TRUNC(tp);
}
if (!(mg->supported & BIT(i)))
continue;
+ index = MCS_GROUP_RATES * group + i;
+
/* initialize rates selections starting indexes */
if (!mg_rates_valid) {
mg->max_tp_rate = mg->max_tp_rate2 =
mg->max_prob_rate = i;
if (!mi_rates_valid) {
mi->max_tp_rate = mi->max_tp_rate2 =
- mi->max_prob_rate = i;
+ mi->max_prob_rate = index;
mi_rates_valid = true;
}
mg_rates_valid = true;
mr = &mg->rates[i];
mr->retry_updated = false;
- index = MCS_GROUP_RATES * group + i;
minstrel_calc_rate_ewma(mr);
minstrel_ht_calc_tp(mi, group, i);
u16 sc;
u8 tid, ack_policy;
- if (!ieee80211_is_data_qos(hdr->frame_control))
+ if (!ieee80211_is_data_qos(hdr->frame_control) ||
+ is_multicast_ether_addr(hdr->addr1))
goto dont_reorder;
/*
trace_api_sched_scan_stopped(local);
- ieee80211_queue_work(&local->hw, &local->sched_scan_stopped_work);
+ schedule_work(&local->sched_scan_stopped_work);
}
EXPORT_SYMBOL(ieee80211_sched_scan_stopped);
if (elems->mesh_chansw_params_ie) {
csa_ie->ttl = elems->mesh_chansw_params_ie->mesh_ttl;
csa_ie->mode = elems->mesh_chansw_params_ie->mesh_flags;
+ csa_ie->pre_value = le16_to_cpu(
+ elems->mesh_chansw_params_ie->mesh_pre_value);
}
new_freq = ieee80211_channel_to_frequency(new_chan_no, new_band);
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local, radar_detected_work);
- struct cfg80211_chan_def chandef;
+ struct cfg80211_chan_def chandef = local->hw.conf.chandef;
ieee80211_dfs_cac_cancel(local);
if (local->use_chanctx)
/* currently not handled */
WARN_ON(1);
- else {
- chandef = local->hw.conf.chandef;
+ else
cfg80211_radar_event(local->hw.wiphy, &chandef, GFP_KERNEL);
- }
}
void ieee80211_radar_detected(struct ieee80211_hw *hw)
WLAN_EID_CHAN_SWITCH_PARAM_TX_RESTRICT : 0x00;
put_unaligned_le16(WLAN_REASON_MESH_CHAN, pos); /* Reason Code */
pos += 2;
- if (!ifmsh->pre_value)
- ifmsh->pre_value = 1;
- else
- ifmsh->pre_value++;
pre_value = cpu_to_le16(ifmsh->pre_value);
memcpy(pos, &pre_value, 2); /* Precedence Value */
pos += 2;
- ifmsh->chsw_init = true;
}
ieee80211_tx_skb(sdata, skb);
u32 *multi)
{
return ip1->ipcmp == ip2->ipcmp &&
- ip2->ccmp == ip2->ccmp;
+ ip1->ccmp == ip2->ccmp;
}
static inline int
return -ENOENT;
}
+static int nf_table_delrule_by_chain(struct nft_ctx *ctx)
+{
+ struct nft_rule *rule;
+ int err;
+
+ list_for_each_entry(rule, &ctx->chain->rules, list) {
+ err = nf_tables_delrule_one(ctx, rule);
+ if (err < 0)
+ return err;
+ }
+ return 0;
+}
+
static int nf_tables_delrule(struct sock *nlsk, struct sk_buff *skb,
const struct nlmsghdr *nlh,
const struct nlattr * const nla[])
const struct nft_af_info *afi;
struct net *net = sock_net(skb->sk);
const struct nft_table *table;
- struct nft_chain *chain;
- struct nft_rule *rule, *tmp;
+ struct nft_chain *chain = NULL;
+ struct nft_rule *rule;
int family = nfmsg->nfgen_family, err = 0;
struct nft_ctx ctx;
if (IS_ERR(table))
return PTR_ERR(table);
- chain = nf_tables_chain_lookup(table, nla[NFTA_RULE_CHAIN]);
- if (IS_ERR(chain))
- return PTR_ERR(chain);
+ if (nla[NFTA_RULE_CHAIN]) {
+ chain = nf_tables_chain_lookup(table, nla[NFTA_RULE_CHAIN]);
+ if (IS_ERR(chain))
+ return PTR_ERR(chain);
+ }
nft_ctx_init(&ctx, skb, nlh, afi, table, chain, nla);
- if (nla[NFTA_RULE_HANDLE]) {
- rule = nf_tables_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
- if (IS_ERR(rule))
- return PTR_ERR(rule);
+ if (chain) {
+ if (nla[NFTA_RULE_HANDLE]) {
+ rule = nf_tables_rule_lookup(chain,
+ nla[NFTA_RULE_HANDLE]);
+ if (IS_ERR(rule))
+ return PTR_ERR(rule);
- err = nf_tables_delrule_one(&ctx, rule);
- } else {
- /* Remove all rules in this chain */
- list_for_each_entry_safe(rule, tmp, &chain->rules, list) {
err = nf_tables_delrule_one(&ctx, rule);
+ } else {
+ err = nf_table_delrule_by_chain(&ctx);
+ }
+ } else {
+ list_for_each_entry(chain, &table->chains, list) {
+ ctx.chain = chain;
+ err = nf_table_delrule_by_chain(&ctx);
if (err < 0)
break;
}
add_timer(&ht->timer);
}
-static void htable_destroy(struct xt_hashlimit_htable *hinfo)
+static void htable_remove_proc_entry(struct xt_hashlimit_htable *hinfo)
{
struct hashlimit_net *hashlimit_net = hashlimit_pernet(hinfo->net);
struct proc_dir_entry *parent;
- del_timer_sync(&hinfo->timer);
-
if (hinfo->family == NFPROTO_IPV4)
parent = hashlimit_net->ipt_hashlimit;
else
parent = hashlimit_net->ip6t_hashlimit;
- if(parent != NULL)
+ if (parent != NULL)
remove_proc_entry(hinfo->name, parent);
+}
+static void htable_destroy(struct xt_hashlimit_htable *hinfo)
+{
+ del_timer_sync(&hinfo->timer);
+ htable_remove_proc_entry(hinfo);
htable_selective_cleanup(hinfo, select_all);
kfree(hinfo->name);
vfree(hinfo);
static void __net_exit hashlimit_proc_net_exit(struct net *net)
{
struct xt_hashlimit_htable *hinfo;
- struct proc_dir_entry *pde;
struct hashlimit_net *hashlimit_net = hashlimit_pernet(net);
- /* recent_net_exit() is called before recent_mt_destroy(). Make sure
- * that the parent xt_recent proc entry is is empty before trying to
- * remove it.
+ /* hashlimit_net_exit() is called before hashlimit_mt_destroy().
+ * Make sure that the parent ipt_hashlimit and ip6t_hashlimit proc
+	 * entries are empty before trying to remove them.
*/
mutex_lock(&hashlimit_mutex);
- pde = hashlimit_net->ipt_hashlimit;
- if (pde == NULL)
- pde = hashlimit_net->ip6t_hashlimit;
-
hlist_for_each_entry(hinfo, &hashlimit_net->htables, node)
- remove_proc_entry(hinfo->name, pde);
-
+ htable_remove_proc_entry(hinfo);
hashlimit_net->ipt_hashlimit = NULL;
hashlimit_net->ip6t_hashlimit = NULL;
mutex_unlock(&hashlimit_mutex);
static void __fanout_unlink(struct sock *sk, struct packet_sock *po);
static void __fanout_link(struct sock *sk, struct packet_sock *po);
+static struct net_device *packet_cached_dev_get(struct packet_sock *po)
+{
+ struct net_device *dev;
+
+ rcu_read_lock();
+ dev = rcu_dereference(po->cached_dev);
+ if (likely(dev))
+ dev_hold(dev);
+ rcu_read_unlock();
+
+ return dev;
+}
+
+static void packet_cached_dev_assign(struct packet_sock *po,
+ struct net_device *dev)
+{
+ rcu_assign_pointer(po->cached_dev, dev);
+}
+
+static void packet_cached_dev_reset(struct packet_sock *po)
+{
+ RCU_INIT_POINTER(po->cached_dev, NULL);
+}
+
/* register_prot_hook must be invoked with the po->bind_lock held,
* or from a context in which asynchronous accesses to the packet
* socket is not possible (packet_create()).
struct packet_sock *po = pkt_sk(sk);
if (!po->running) {
- if (po->fanout) {
+ if (po->fanout)
__fanout_link(sk, po);
- } else {
+ else
dev_add_pack(&po->prot_hook);
- rcu_assign_pointer(po->cached_dev, po->prot_hook.dev);
- }
sock_hold(sk);
po->running = 1;
struct packet_sock *po = pkt_sk(sk);
po->running = 0;
- if (po->fanout) {
+
+ if (po->fanout)
__fanout_unlink(sk, po);
- } else {
+ else
__dev_remove_pack(&po->prot_hook);
- RCU_INIT_POINTER(po->cached_dev, NULL);
- }
__sock_put(sk);
return tp_len;
}
-static struct net_device *packet_cached_dev_get(struct packet_sock *po)
-{
- struct net_device *dev;
-
- rcu_read_lock();
- dev = rcu_dereference(po->cached_dev);
- if (dev)
- dev_hold(dev);
- rcu_read_unlock();
-
- return dev;
-}
-
static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
{
struct sk_buff *skb;
mutex_lock(&po->pg_vec_lock);
- if (saddr == NULL) {
+ if (likely(saddr == NULL)) {
dev = packet_cached_dev_get(po);
proto = po->num;
addr = NULL;
* Get and verify the address.
*/
- if (saddr == NULL) {
+ if (likely(saddr == NULL)) {
dev = packet_cached_dev_get(po);
proto = po->num;
addr = NULL;
spin_lock(&po->bind_lock);
unregister_prot_hook(sk, false);
+ packet_cached_dev_reset(po);
+
if (po->prot_hook.dev) {
dev_put(po->prot_hook.dev);
po->prot_hook.dev = NULL;
spin_lock(&po->bind_lock);
unregister_prot_hook(sk, true);
+
po->num = protocol;
po->prot_hook.type = protocol;
if (po->prot_hook.dev)
dev_put(po->prot_hook.dev);
- po->prot_hook.dev = dev;
+ po->prot_hook.dev = dev;
po->ifindex = dev ? dev->ifindex : 0;
+ packet_cached_dev_assign(po, dev);
+
if (protocol == 0)
goto out_unlock;
po = pkt_sk(sk);
sk->sk_family = PF_PACKET;
po->num = proto;
- RCU_INIT_POINTER(po->cached_dev, NULL);
+
+ packet_cached_dev_reset(po);
sk->sk_destruct = packet_sock_destruct;
sk_refcnt_debug_inc(sk);
sk->sk_error_report(sk);
}
if (msg == NETDEV_UNREGISTER) {
+ packet_cached_dev_reset(po);
po->ifindex = -1;
if (po->prot_hook.dev)
dev_put(po->prot_hook.dev);
&& rm->m_inc.i_hdr.h_flags & RDS_FLAG_CONG_BITMAP) {
rds_cong_map_updated(conn->c_fcong, ~(u64) 0);
scat = &rm->data.op_sg[sg];
- ret = sizeof(struct rds_header) + RDS_CONG_MAP_BYTES;
- ret = min_t(int, ret, scat->length - conn->c_xmit_data_off);
- return ret;
+ ret = max_t(int, RDS_CONG_MAP_BYTES, scat->length);
+ return sizeof(struct rds_header) + ret;
}
/* FIXME we may overallocate here */
{
struct tc_action_ops *a, **ap;
+ /* Must supply act, dump, cleanup and init */
+ if (!act->act || !act->dump || !act->cleanup || !act->init)
+ return -EINVAL;
+
+ /* Supply defaults */
+ if (!act->lookup)
+ act->lookup = tcf_hash_search;
+ if (!act->walk)
+ act->walk = tcf_generic_walker;
+
write_lock(&act_mod_lock);
for (ap = &act_base; (a = *ap) != NULL; ap = &a->next) {
if (act->type == a->type || (strcmp(act->kind, a->kind) == 0)) {
}
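These registration-time checks let every later call site drop its NULL tests, which is exactly what the follow-on hunks to tcf_action_exec(), tcf_action_destroy(), the dump paths and the per-action ops tables do. A standalone sketch of the validate-and-default pattern (the struct and names are invented for illustration):

    #include <stdio.h>

    struct action_ops {
            const char *kind;
            int (*act)(void);    /* mandatory, like ->act/->dump/->init */
            int (*dump)(void);   /* mandatory */
            int (*lookup)(void); /* optional: defaulted at registration */
            int (*walk)(void);   /* optional: defaulted at registration */
    };

    static int generic_lookup(void) { return 0; }
    static int generic_walk(void)   { return 0; }
    static int gact_act(void)       { return 1; }
    static int gact_dump(void)      { return 2; }

    /* Reject half-filled ops once, up front, and fill in defaults, so
     * every later call site may assume all slots are populated. */
    static int register_action(struct action_ops *ops)
    {
            if (!ops->act || !ops->dump)
                    return -1;   /* -EINVAL in the kernel */
            if (!ops->lookup)
                    ops->lookup = generic_lookup;
            if (!ops->walk)
                    ops->walk = generic_walk;
            return 0;
    }

    int main(void)
    {
            struct action_ops good = { "gact", gact_act, gact_dump };
            struct action_ops bad  = { "broken" };

            printf("good: %d\n", register_action(&good)); /* 0 */
            printf("bad:  %d\n", register_action(&bad));  /* -1 */
            printf("defaulted lookup: %d\n", good.lookup());
            return 0;
    }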
while ((a = act) != NULL) {
repeat:
- if (a->ops && a->ops->act) {
+ if (a->ops) {
ret = a->ops->act(skb, a, res);
if (TC_MUNGED & skb->tc_verd) {
/* copied already, allow trampling */
struct tc_action *a;
for (a = act; a; a = act) {
- if (a->ops && a->ops->cleanup) {
+ if (a->ops) {
if (a->ops->cleanup(a, bind) == ACT_P_DELETED)
module_put(a->ops->owner);
act = act->next;
{
int err = -EINVAL;
- if (a->ops == NULL || a->ops->dump == NULL)
+ if (a->ops == NULL)
return err;
return a->ops->dump(skb, a, bind, ref);
}
unsigned char *b = skb_tail_pointer(skb);
struct nlattr *nest;
- if (a->ops == NULL || a->ops->dump == NULL)
+ if (a->ops == NULL)
return err;
if (nla_put_string(skb, TCA_KIND, a->ops->kind))
a->ops = tc_lookup_action(tb[TCA_ACT_KIND]);
if (a->ops == NULL)
goto err_free;
- if (a->ops->lookup == NULL)
- goto err_mod;
err = -ENOENT;
if (a->ops->lookup(a, index) == 0)
goto err_mod;
memset(&a, 0, sizeof(struct tc_action));
a.ops = a_o;
- if (a_o->walk == NULL) {
- WARN(1, "tc_dump_action: %s !capable of dumping table\n",
- a_o->kind);
- goto out_module_put;
- }
-
nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
cb->nlh->nlmsg_type, sizeof(*t), 0);
if (!nlh)
.act = tcf_csum,
.dump = tcf_csum_dump,
.cleanup = tcf_csum_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_csum_init,
- .walk = tcf_generic_walker
};
MODULE_DESCRIPTION("Checksum updating actions");
.act = tcf_gact,
.dump = tcf_gact_dump,
.cleanup = tcf_gact_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_gact_init,
- .walk = tcf_generic_walker
};
MODULE_AUTHOR("Jamal Hadi Salim(2002-4)");
.act = tcf_ipt,
.dump = tcf_ipt_dump,
.cleanup = tcf_ipt_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_ipt_init,
- .walk = tcf_generic_walker
};
static struct tc_action_ops act_xt_ops = {
.act = tcf_ipt,
.dump = tcf_ipt_dump,
.cleanup = tcf_ipt_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_ipt_init,
- .walk = tcf_generic_walker
};
MODULE_AUTHOR("Jamal Hadi Salim(2002-13)");
.act = tcf_mirred,
.dump = tcf_mirred_dump,
.cleanup = tcf_mirred_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_mirred_init,
- .walk = tcf_generic_walker
};
MODULE_AUTHOR("Jamal Hadi Salim(2002)");
.act = tcf_nat,
.dump = tcf_nat_dump,
.cleanup = tcf_nat_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_nat_init,
- .walk = tcf_generic_walker
};
MODULE_DESCRIPTION("Stateless NAT actions");
.act = tcf_pedit,
.dump = tcf_pedit_dump,
.cleanup = tcf_pedit_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_pedit_init,
- .walk = tcf_generic_walker
};
MODULE_AUTHOR("Jamal Hadi Salim(2002-4)");
.act = tcf_act_police,
.dump = tcf_act_police_dump,
.cleanup = tcf_act_police_cleanup,
- .lookup = tcf_hash_search,
.init = tcf_act_police_locate,
.walk = tcf_act_police_walker
};
.dump = tcf_simp_dump,
.cleanup = tcf_simp_cleanup,
.init = tcf_simp_init,
- .walk = tcf_generic_walker,
};
MODULE_AUTHOR("Jamal Hadi Salim(2005)");
.dump = tcf_skbedit_dump,
.cleanup = tcf_skbedit_cleanup,
.init = tcf_skbedit_init,
- .walk = tcf_generic_walker,
};
MODULE_AUTHOR("Alexander Duyck, <alexander.h.duyck@intel.com>");
sch_tree_lock(sch);
}
+ rate64 = tb[TCA_HTB_RATE64] ? nla_get_u64(tb[TCA_HTB_RATE64]) : 0;
+
+ ceil64 = tb[TCA_HTB_CEIL64] ? nla_get_u64(tb[TCA_HTB_CEIL64]) : 0;
+
+ psched_ratecfg_precompute(&cl->rate, &hopt->rate, rate64);
+ psched_ratecfg_precompute(&cl->ceil, &hopt->ceil, ceil64);
+
/* it used to be a nasty bug here, we have to check that node
* is really leaf before changing cl->un.leaf !
*/
if (!cl->level) {
- cl->quantum = hopt->rate.rate / q->rate2quantum;
+ u64 quantum = cl->rate.rate_bytes_ps;
+
+ do_div(quantum, q->rate2quantum);
+ cl->quantum = min_t(u64, quantum, INT_MAX);
+
if (!hopt->quantum && cl->quantum < 1000) {
pr_warning(
"HTB: quantum of class %X is small. Consider r2q change.\n",
cl->prio = TC_HTB_NUMPRIO - 1;
}
- rate64 = tb[TCA_HTB_RATE64] ? nla_get_u64(tb[TCA_HTB_RATE64]) : 0;
-
- ceil64 = tb[TCA_HTB_CEIL64] ? nla_get_u64(tb[TCA_HTB_CEIL64]) : 0;
-
- psched_ratecfg_precompute(&cl->rate, &hopt->rate, rate64);
- psched_ratecfg_precompute(&cl->ceil, &hopt->ceil, ceil64);
-
cl->buffer = PSCHED_TICKS2NS(hopt->buffer);
cl->cbuffer = PSCHED_TICKS2NS(hopt->cbuffer);
};
+/* Time to Length, convert time in ns to length in bytes
+ * to determine how many bytes can be sent in a given time.
+ */
+static u64 psched_ns_t2l(const struct psched_ratecfg *r,
+ u64 time_in_ns)
+{
+	/* The formula is:
+ * len = (time_in_ns * r->rate_bytes_ps) / NSEC_PER_SEC
+ */
+ u64 len = time_in_ns * r->rate_bytes_ps;
+
+ do_div(len, NSEC_PER_SEC);
+
+ if (unlikely(r->linklayer == TC_LINKLAYER_ATM)) {
+ do_div(len, 53);
+ len = len * 48;
+ }
+
+ if (len > r->overhead)
+ len -= r->overhead;
+ else
+ len = 0;
+
+ return len;
+}
+
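psched_ns_t2l() is plain integer arithmetic: bytes = ns * rate / NSEC_PER_SEC, shrunk by the ATM framing ratio (each 53-byte cell carries 48 payload bytes) and then by the configured overhead; the caller clamps buffer and mtu to 32 bits beforehand so the 64-bit multiplication cannot overflow. A standalone recomputation with a worked example, assuming a 1 Mbit/s rate (125000 bytes/s) and a 10 ms budget:

    #include <stdint.h>
    #include <stdio.h>

    #define NSEC_PER_SEC 1000000000ULL

    /* Same arithmetic as psched_ns_t2l(), minus the kernel types. */
    static uint64_t ns_to_len(uint64_t ns, uint64_t rate_bytes_ps,
                              int atm, uint64_t overhead)
    {
            uint64_t len = ns * rate_bytes_ps / NSEC_PER_SEC;

            if (atm)
                    len = len / 53 * 48; /* 48 payload bytes per 53-byte cell */

            return len > overhead ? len - overhead : 0;
    }

    int main(void)
    {
            /* 1 Mbit/s == 125000 bytes/s; a 10 ms buffer holds 1250 bytes. */
            printf("%llu\n", (unsigned long long)
                   ns_to_len(10000000ULL, 125000, 0, 0)); /* 1250 */
            /* Over ATM the same budget shrinks: 1250 / 53 * 48 = 1104. */
            printf("%llu\n", (unsigned long long)
                   ns_to_len(10000000ULL, 125000, 1, 0)); /* 1104 */
            return 0;
    }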
/*
* Return length of individual segments of a gso packet,
* including all headers (MAC, IP, TCP/UDP)
struct tbf_sched_data *q = qdisc_priv(sch);
struct nlattr *tb[TCA_TBF_MAX + 1];
struct tc_tbf_qopt *qopt;
- struct qdisc_rate_table *rtab = NULL;
- struct qdisc_rate_table *ptab = NULL;
struct Qdisc *child = NULL;
- int max_size, n;
+ struct psched_ratecfg rate;
+ struct psched_ratecfg peak;
+ u64 max_size;
+ s64 buffer, mtu;
u64 rate64 = 0, prate64 = 0;
err = nla_parse_nested(tb, TCA_TBF_MAX, opt, tbf_policy);
goto done;
qopt = nla_data(tb[TCA_TBF_PARMS]);
- rtab = qdisc_get_rtab(&qopt->rate, tb[TCA_TBF_RTAB]);
- if (rtab == NULL)
- goto done;
-
- if (qopt->peakrate.rate) {
- if (qopt->peakrate.rate > qopt->rate.rate)
- ptab = qdisc_get_rtab(&qopt->peakrate, tb[TCA_TBF_PTAB]);
- if (ptab == NULL)
- goto done;
- }
-
- for (n = 0; n < 256; n++)
- if (rtab->data[n] > qopt->buffer)
- break;
- max_size = (n << qopt->rate.cell_log) - 1;
- if (ptab) {
- int size;
-
- for (n = 0; n < 256; n++)
- if (ptab->data[n] > qopt->mtu)
- break;
- size = (n << qopt->peakrate.cell_log) - 1;
- if (size < max_size)
- max_size = size;
- }
- if (max_size < 0)
- goto done;
+ if (qopt->rate.linklayer == TC_LINKLAYER_UNAWARE)
+ qdisc_put_rtab(qdisc_get_rtab(&qopt->rate,
+ tb[TCA_TBF_RTAB]));
- if (max_size < psched_mtu(qdisc_dev(sch)))
- pr_warn_ratelimited("sch_tbf: burst %u is lower than device %s mtu (%u) !\n",
- max_size, qdisc_dev(sch)->name,
- psched_mtu(qdisc_dev(sch)));
+ if (qopt->peakrate.linklayer == TC_LINKLAYER_UNAWARE)
+ qdisc_put_rtab(qdisc_get_rtab(&qopt->peakrate,
+ tb[TCA_TBF_PTAB]));
if (q->qdisc != &noop_qdisc) {
err = fifo_set_limit(q->qdisc, qopt->limit);
}
}
+ buffer = min_t(u64, PSCHED_TICKS2NS(qopt->buffer), ~0U);
+ mtu = min_t(u64, PSCHED_TICKS2NS(qopt->mtu), ~0U);
+
+ if (tb[TCA_TBF_RATE64])
+ rate64 = nla_get_u64(tb[TCA_TBF_RATE64]);
+ psched_ratecfg_precompute(&rate, &qopt->rate, rate64);
+
+ max_size = min_t(u64, psched_ns_t2l(&rate, buffer), ~0U);
+
+ if (qopt->peakrate.rate) {
+ if (tb[TCA_TBF_PRATE64])
+ prate64 = nla_get_u64(tb[TCA_TBF_PRATE64]);
+ psched_ratecfg_precompute(&peak, &qopt->peakrate, prate64);
+ if (peak.rate_bytes_ps <= rate.rate_bytes_ps) {
+			pr_warn_ratelimited("sch_tbf: peakrate %llu is lower than or equal to rate %llu!\n",
+ peak.rate_bytes_ps, rate.rate_bytes_ps);
+ err = -EINVAL;
+ goto done;
+ }
+
+ max_size = min_t(u64, max_size, psched_ns_t2l(&peak, mtu));
+ }
+
+ if (max_size < psched_mtu(qdisc_dev(sch)))
+ pr_warn_ratelimited("sch_tbf: burst %llu is lower than device %s mtu (%u) !\n",
+ max_size, qdisc_dev(sch)->name,
+ psched_mtu(qdisc_dev(sch)));
+
+ if (!max_size) {
+ err = -EINVAL;
+ goto done;
+ }
+
sch_tree_lock(sch);
if (child) {
qdisc_tree_decrease_qlen(q->qdisc, q->qdisc->q.qlen);
q->tokens = q->buffer;
q->ptokens = q->mtu;
- if (tb[TCA_TBF_RATE64])
- rate64 = nla_get_u64(tb[TCA_TBF_RATE64]);
- psched_ratecfg_precompute(&q->rate, &rtab->rate, rate64);
- if (ptab) {
- if (tb[TCA_TBF_PRATE64])
- prate64 = nla_get_u64(tb[TCA_TBF_PRATE64]);
- psched_ratecfg_precompute(&q->peak, &ptab->rate, prate64);
+ memcpy(&q->rate, &rate, sizeof(struct psched_ratecfg));
+ if (qopt->peakrate.rate) {
+ memcpy(&q->peak, &peak, sizeof(struct psched_ratecfg));
q->peak_present = true;
} else {
q->peak_present = false;
sch_tree_unlock(sch);
err = 0;
done:
- if (rtab)
- qdisc_put_rtab(rtab);
- if (ptab)
- qdisc_put_rtab(ptab);
return err;
}
asoc->timeouts[SCTP_EVENT_TIMEOUT_HEARTBEAT] = 0;
asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] = asoc->sackdelay;
- asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] =
- min_t(unsigned long, sp->autoclose, net->sctp.max_autoclose) * HZ;
+ asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = sp->autoclose * HZ;
/* Initializes the timers */
for (i = SCTP_EVENT_TIMEOUT_NONE; i < SCTP_NUM_TIMEOUT_TYPES; ++i)
asoc->peer.ipv6_address = 1;
INIT_LIST_HEAD(&asoc->asocs);
- asoc->autoclose = sp->autoclose;
-
asoc->default_stream = sp->default_stream;
asoc->default_ppid = sp->default_ppid;
asoc->default_flags = sp->default_flags;
unsigned long timeout;
/* Restart the AUTOCLOSE timer when sending data. */
- if (sctp_state(asoc, ESTABLISHED) && asoc->autoclose) {
+ if (sctp_state(asoc, ESTABLISHED) &&
+ asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) {
timer = &asoc->timers[SCTP_EVENT_TIMEOUT_AUTOCLOSE];
timeout = asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE];
#include <net/sctp/sctp.h>
#include <net/sctp/sm.h>
+MODULE_SOFTDEP("pre: sctp");
MODULE_AUTHOR("Wei Yongjun <yjwei@cn.fujitsu.com>");
MODULE_DESCRIPTION("SCTP snooper");
MODULE_LICENSE("GPL");
.entry = jsctp_sf_eat_sack,
};
+static __init int sctp_setup_jprobe(void)
+{
+ int ret = register_jprobe(&sctp_recv_probe);
+
+ if (ret) {
+ if (request_module("sctp"))
+ goto out;
+ ret = register_jprobe(&sctp_recv_probe);
+ }
+
+out:
+ return ret;
+}
+
static __init int sctpprobe_init(void)
{
int ret = -ENOMEM;
&sctpprobe_fops))
goto free_kfifo;
- ret = register_jprobe(&sctp_recv_probe);
+ ret = sctp_setup_jprobe();
if (ret)
goto remove_proc;
SCTP_INC_STATS(net, SCTP_MIB_PASSIVEESTABS);
sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
- if (new_asoc->autoclose)
+ if (new_asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE])
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
SCTP_INC_STATS(net, SCTP_MIB_ACTIVEESTABS);
sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
- if (asoc->autoclose)
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE])
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
if (chunk->chunk_hdr->flags & SCTP_DATA_SACK_IMM)
force = SCTP_FORCE();
- if (asoc->autoclose) {
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) {
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
}
SCTP_CHUNK(chunk));
/* Count this as receiving DATA. */
- if (asoc->autoclose) {
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) {
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
}
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
SCTP_TO(SCTP_EVENT_TIMEOUT_T5_SHUTDOWN_GUARD));
- if (asoc->autoclose)
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE])
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART,
SCTP_TO(SCTP_EVENT_TIMEOUT_T2_SHUTDOWN));
- if (asoc->autoclose)
+ if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE])
sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE));
unsigned int optlen)
{
struct sctp_sock *sp = sctp_sk(sk);
+ struct net *net = sock_net(sk);
/* Applicable to UDP-style socket only */
if (sctp_style(sk, TCP))
if (copy_from_user(&sp->autoclose, optval, optlen))
return -EFAULT;
+ if (sp->autoclose > net->sctp.max_autoclose)
+ sp->autoclose = net->sctp.max_autoclose;
+
return 0;
}
{
struct sctp_rtoinfo rtoinfo;
struct sctp_association *asoc;
+ unsigned long rto_min, rto_max;
+ struct sctp_sock *sp = sctp_sk(sk);
if (optlen != sizeof (struct sctp_rtoinfo))
return -EINVAL;
if (!asoc && rtoinfo.srto_assoc_id && sctp_style(sk, UDP))
return -EINVAL;
+ rto_max = rtoinfo.srto_max;
+ rto_min = rtoinfo.srto_min;
+
+ if (rto_max)
+ rto_max = asoc ? msecs_to_jiffies(rto_max) : rto_max;
+ else
+ rto_max = asoc ? asoc->rto_max : sp->rtoinfo.srto_max;
+
+ if (rto_min)
+ rto_min = asoc ? msecs_to_jiffies(rto_min) : rto_min;
+ else
+ rto_min = asoc ? asoc->rto_min : sp->rtoinfo.srto_min;
+
+ if (rto_min > rto_max)
+ return -EINVAL;
+
if (asoc) {
if (rtoinfo.srto_initial != 0)
asoc->rto_initial =
msecs_to_jiffies(rtoinfo.srto_initial);
- if (rtoinfo.srto_max != 0)
- asoc->rto_max = msecs_to_jiffies(rtoinfo.srto_max);
- if (rtoinfo.srto_min != 0)
- asoc->rto_min = msecs_to_jiffies(rtoinfo.srto_min);
+ asoc->rto_max = rto_max;
+ asoc->rto_min = rto_min;
} else {
/* If there is no association or the association-id = 0
* set the values to the endpoint.
*/
- struct sctp_sock *sp = sctp_sk(sk);
-
if (rtoinfo.srto_initial != 0)
sp->rtoinfo.srto_initial = rtoinfo.srto_initial;
- if (rtoinfo.srto_max != 0)
- sp->rtoinfo.srto_max = rtoinfo.srto_max;
- if (rtoinfo.srto_min != 0)
- sp->rtoinfo.srto_min = rtoinfo.srto_min;
+ sp->rtoinfo.srto_max = rto_max;
+ sp->rtoinfo.srto_min = rto_min;
}
return 0;
extern int sysctl_sctp_rmem[3];
extern int sysctl_sctp_wmem[3];
-static int proc_sctp_do_hmac_alg(struct ctl_table *ctl,
- int write,
+static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos);
+static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos);
+static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write,
void __user *buffer, size_t *lenp,
-
loff_t *ppos);
+
static struct ctl_table sctp_table[] = {
{
.procname = "sctp_mem",
.data = &init_net.sctp.rto_min,
.maxlen = sizeof(unsigned int),
.mode = 0644,
- .proc_handler = proc_dointvec_minmax,
+ .proc_handler = proc_sctp_do_rto_min,
.extra1 = &one,
- .extra2 = &timer_max
+ .extra2 = &init_net.sctp.rto_max
},
{
.procname = "rto_max",
.data = &init_net.sctp.rto_max,
.maxlen = sizeof(unsigned int),
.mode = 0644,
- .proc_handler = proc_dointvec_minmax,
- .extra1 = &one,
+ .proc_handler = proc_sctp_do_rto_max,
+ .extra1 = &init_net.sctp.rto_min,
.extra2 = &timer_max
},
{
{ /* sentinel */ }
};
-static int proc_sctp_do_hmac_alg(struct ctl_table *ctl,
- int write,
+static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write,
void __user *buffer, size_t *lenp,
loff_t *ppos)
{
return ret;
}
+static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos)
+{
+ struct net *net = current->nsproxy->net_ns;
+ int new_value;
+ struct ctl_table tbl;
+ unsigned int min = *(unsigned int *) ctl->extra1;
+ unsigned int max = *(unsigned int *) ctl->extra2;
+ int ret;
+
+ memset(&tbl, 0, sizeof(struct ctl_table));
+ tbl.maxlen = sizeof(unsigned int);
+
+ if (write)
+ tbl.data = &new_value;
+ else
+ tbl.data = &net->sctp.rto_min;
+ ret = proc_dointvec(&tbl, write, buffer, lenp, ppos);
+ if (write) {
+ if (ret || new_value > max || new_value < min)
+ return -EINVAL;
+ net->sctp.rto_min = new_value;
+ }
+ return ret;
+}
+
+static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos)
+{
+ struct net *net = current->nsproxy->net_ns;
+ int new_value;
+ struct ctl_table tbl;
+ unsigned int min = *(unsigned int *) ctl->extra1;
+ unsigned int max = *(unsigned int *) ctl->extra2;
+ int ret;
+
+ memset(&tbl, 0, sizeof(struct ctl_table));
+ tbl.maxlen = sizeof(unsigned int);
+
+ if (write)
+ tbl.data = &new_value;
+ else
+ tbl.data = &net->sctp.rto_max;
+ ret = proc_dointvec(&tbl, write, buffer, lenp, ppos);
+ if (write) {
+ if (ret || new_value > max || new_value < min)
+ return -EINVAL;
+ net->sctp.rto_max = new_value;
+ }
+ return ret;
+}
+
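proc_sctp_do_rto_min() and proc_sctp_do_rto_max() are mirror images: each reads its bound from the other's current value and rejects writes outside the range, so rto_min can never cross above rto_max. A hedged sketch of the shared shape as one range-checked setter (illustrative only; not the factoring the patch chose):

    #include <stdio.h>

    /* One range-checked setter instead of two near-identical handlers. */
    static int set_bounded(unsigned int *field, unsigned int new_value,
                           unsigned int min, unsigned int max)
    {
            if (new_value < min || new_value > max)
                    return -1;   /* -EINVAL in the kernel */
            *field = new_value;
            return 0;
    }

    int main(void)
    {
            unsigned int rto_min = 1000, rto_max = 60000;

            /* rto_min is bounded above by the current rto_max ... */
            printf("%d\n", set_bounded(&rto_min, 200, 1, rto_max));
            /* ... and rto_max is bounded below by the current rto_min. */
            printf("%d\n", set_bounded(&rto_max, 100, rto_min, 86400000));
            printf("rto_min=%u rto_max=%u\n", rto_min, rto_max);
            return 0;
    }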
int sctp_sysctl_net_register(struct net *net)
{
struct ctl_table *table;
u32 old_cwnd = t->cwnd;
u32 max_burst_bytes;
- if (t->burst_limited)
+ if (t->burst_limited || asoc->max_burst == 0)
return;
max_burst_bytes = t->flight_size + (asoc->max_burst * asoc->pathmtu);
static void tipc_core_stop(void)
{
tipc_netlink_stop();
- tipc_handler_stop();
tipc_cfg_stop();
tipc_subscr_stop();
tipc_nametbl_stop();
res = tipc_subscr_start();
if (!res)
res = tipc_cfg_init();
- if (res)
+ if (res) {
+ tipc_handler_stop();
tipc_core_stop();
-
+ }
return res;
}
static void __exit tipc_exit(void)
{
+ tipc_handler_stop();
tipc_core_stop_net();
tipc_core_stop();
pr_info("Deactivated\n");
{
struct queue_item *item;
+ spin_lock_bh(&qitem_lock);
if (!handler_enabled) {
pr_err("Signal request ignored by handler\n");
+ spin_unlock_bh(&qitem_lock);
return -ENOPROTOOPT;
}
- spin_lock_bh(&qitem_lock);
item = kmem_cache_alloc(tipc_queue_item_cache, GFP_ATOMIC);
if (!item) {
pr_err("Signal queue out of memory\n");
struct list_head *l, *n;
struct queue_item *item;
- if (!handler_enabled)
+ spin_lock_bh(&qitem_lock);
+ if (!handler_enabled) {
+ spin_unlock_bh(&qitem_lock);
return;
-
+ }
handler_enabled = 0;
+ spin_unlock_bh(&qitem_lock);
+
tasklet_kill(&tipc_tasklet);
spin_lock_bh(&qitem_lock);
static int unix_seqpacket_recvmsg(struct kiocb *, struct socket *,
struct msghdr *, size_t, int);
-static void unix_set_peek_off(struct sock *sk, int val)
+static int unix_set_peek_off(struct sock *sk, int val)
{
struct unix_sock *u = unix_sk(sk);
- mutex_lock(&u->readlock);
+ if (mutex_lock_interruptible(&u->readlock))
+ return -EINTR;
+
sk->sk_peek_off = val;
mutex_unlock(&u->readlock);
+
+ return 0;
}
int err;
unsigned int retries = 0;
- mutex_lock(&u->readlock);
+ err = mutex_lock_interruptible(&u->readlock);
+ if (err)
+ return err;
err = 0;
if (u->addr)
goto out;
addr_len = err;
- mutex_lock(&u->readlock);
+ err = mutex_lock_interruptible(&u->readlock);
+ if (err)
+ goto out;
err = -EINVAL;
if (u->addr)
int i;
u16 ifmodes = wiphy->interface_modes;
+ /* support for 5/10 MHz is broken due to nl80211 API mess - disable */
+ wiphy->flags &= ~WIPHY_FLAG_SUPPORTS_5_10_MHZ;
+
+ /*
+ * There are major locking problems in nl80211/mac80211 for CSA,
+ * disable for all drivers until this has been reworked.
+ */
+ wiphy->flags &= ~WIPHY_FLAG_HAS_CHANNEL_SWITCH;
+
#ifdef CONFIG_PM
if (WARN_ON(wiphy->wowlan &&
(wiphy->wowlan->flags & WIPHY_WOWLAN_GTK_REKEY_FAILURE) &&
/* try to find an IBSS channel if none requested ... */
if (!wdev->wext.ibss.chandef.chan) {
- wdev->wext.ibss.chandef.width = NL80211_CHAN_WIDTH_20_NOHT;
+ struct ieee80211_channel *new_chan = NULL;
for (band = 0; band < IEEE80211_NUM_BANDS; band++) {
struct ieee80211_supported_band *sband;
continue;
if (chan->flags & IEEE80211_CHAN_DISABLED)
continue;
- wdev->wext.ibss.chandef.chan = chan;
- wdev->wext.ibss.chandef.center_freq1 =
- chan->center_freq;
+ new_chan = chan;
break;
}
- if (wdev->wext.ibss.chandef.chan)
+ if (new_chan)
break;
}
- if (!wdev->wext.ibss.chandef.chan)
+ if (!new_chan)
return -EINVAL;
+
+ cfg80211_chandef_create(&wdev->wext.ibss.chandef, new_chan,
+ NL80211_CHAN_NO_HT);
}
/* don't join -- SSID is not there */
return err;
if (chan) {
- wdev->wext.ibss.chandef.chan = chan;
- wdev->wext.ibss.chandef.width = NL80211_CHAN_WIDTH_20_NOHT;
- wdev->wext.ibss.chandef.center_freq1 = freq;
+ cfg80211_chandef_create(&wdev->wext.ibss.chandef, chan,
+ NL80211_CHAN_NO_HT);
wdev->wext.ibss.channel_fixed = true;
} else {
/* cfg80211_ibss_wext_join will pick one if needed */
hdr = nl80211hdr_put(msg, info->snd_portid, info->snd_seq, 0,
NL80211_CMD_NEW_KEY);
if (!hdr)
- return -ENOBUFS;
+ goto nla_put_failure;
cookie.msg = msg;
cookie.idx = key_idx;
err = -EINVAL;
goto out_free;
}
+
+ if (!wiphy->bands[band])
+ continue;
+
err = ieee80211_get_ratemask(wiphy->bands[band],
nla_data(attr),
nla_len(attr),
nla_put(msg, NL80211_ATTR_IE, req->ie_len, req->ie))
goto nla_put_failure;
- if (req->flags)
- nla_put_u32(msg, NL80211_ATTR_SCAN_FLAGS, req->flags);
+ if (req->flags &&
+ nla_put_u32(msg, NL80211_ATTR_SCAN_FLAGS, req->flags))
+ goto nla_put_failure;
return 0;
nla_put_failure:
struct nlattr *reasons;
reasons = nla_nest_start(msg, NL80211_ATTR_WOWLAN_TRIGGERS);
+ if (!reasons)
+ goto free_msg;
if (wakeup->disconnect &&
nla_put_flag(msg, NL80211_WOWLAN_TRIG_DISCONNECT))
wakeup->pattern_idx))
goto free_msg;
- if (wakeup->tcp_match)
- nla_put_flag(msg, NL80211_WOWLAN_TRIG_WAKEUP_TCP_MATCH);
+ if (wakeup->tcp_match &&
+ nla_put_flag(msg, NL80211_WOWLAN_TRIG_WAKEUP_TCP_MATCH))
+ goto free_msg;
- if (wakeup->tcp_connlost)
- nla_put_flag(msg,
- NL80211_WOWLAN_TRIG_WAKEUP_TCP_CONNLOST);
+ if (wakeup->tcp_connlost &&
+ nla_put_flag(msg, NL80211_WOWLAN_TRIG_WAKEUP_TCP_CONNLOST))
+ goto free_msg;
- if (wakeup->tcp_nomoretokens)
- nla_put_flag(msg,
- NL80211_WOWLAN_TRIG_WAKEUP_TCP_NOMORETOKENS);
+ if (wakeup->tcp_nomoretokens &&
+ nla_put_flag(msg,
+ NL80211_WOWLAN_TRIG_WAKEUP_TCP_NOMORETOKENS))
+ goto free_msg;
if (wakeup->packet) {
u32 pkt_attr = NL80211_WOWLAN_TRIG_WAKEUP_PKT_80211;
return;
hdr = nl80211hdr_put(msg, 0, 0, 0, NL80211_CMD_FT_EVENT);
- if (!hdr) {
- nlmsg_free(msg);
- return;
- }
+ if (!hdr)
+ goto out;
- nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx);
- nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex);
- nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, ft_event->target_ap);
- if (ft_event->ies)
- nla_put(msg, NL80211_ATTR_IE, ft_event->ies_len, ft_event->ies);
- if (ft_event->ric_ies)
- nla_put(msg, NL80211_ATTR_IE_RIC, ft_event->ric_ies_len,
- ft_event->ric_ies);
+ if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
+ nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex) ||
+ nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, ft_event->target_ap))
+ goto out;
+
+ if (ft_event->ies &&
+ nla_put(msg, NL80211_ATTR_IE, ft_event->ies_len, ft_event->ies))
+ goto out;
+ if (ft_event->ric_ies &&
+ nla_put(msg, NL80211_ATTR_IE_RIC, ft_event->ric_ies_len,
+ ft_event->ric_ies))
+ goto out;
genlmsg_end(msg, hdr);
genlmsg_multicast_netns(&nl80211_fam, wiphy_net(&rdev->wiphy), msg, 0,
NL80211_MCGRP_MLME, GFP_KERNEL);
+ return;
+ out:
+ nlmsg_free(msg);
}
EXPORT_SYMBOL(cfg80211_ft_event);
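Every hunk above turns a fire-and-forget nla_put() into check-the-return, free the message, bail: once an attribute fails to fit, the half-built message must never be emitted. A generic sketch of that build-or-discard pattern (put() and struct msg are stand-ins for the nla_* API):

    #include <stdio.h>
    #include <string.h>

    struct msg { char buf[24]; size_t len; };

    /* Appends if it fits; nonzero on failure, like nla_put(). */
    static int put(struct msg *m, const char *s)
    {
            size_t n = strlen(s);

            if (m->len + n > sizeof(m->buf))
                    return -1;
            memcpy(m->buf + m->len, s, n);
            m->len += n;
            return 0;
    }

    static int build(struct msg *m)
    {
            if (put(m, "wiphy=0;") ||
                put(m, "ifindex=3;") ||
                put(m, "target_ap=00:11;")) /* does not fit: abort */
                    goto fail;
            return 0;
    fail:
            m->len = 0; /* discard, as nlmsg_free() would */
            return -1;
    }

    int main(void)
    {
            struct msg m = { .len = 0 };
            int rc = build(&m);

            printf("build: %d, len: %zu\n", rc, m.len);
            return 0;
    }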
#include <tools/be_byteshift.h>
#include <tools/le_byteshift.h>
+#ifndef EM_ARCOMPACT
+#define EM_ARCOMPACT 93
+#endif
+
#ifndef EM_AARCH64
#define EM_AARCH64 183
#endif
case EM_S390:
custom_sort = sort_relative_table;
break;
+ case EM_ARCOMPACT:
case EM_ARM:
case EM_AARCH64:
case EM_MIPS:
*
* TODO: Encrypt the stored data with a temporary key.
*/
- file = shmem_file_setup("", datalen, 0);
+ file = shmem_kernel_file_setup("", datalen, 0);
if (IS_ERR(file)) {
ret = PTR_ERR(file);
goto err_quota;
}
/* allocate and initialise the key and its description */
- key = kmem_cache_alloc(key_jar, GFP_KERNEL);
+ key = kmem_cache_zalloc(key_jar, GFP_KERNEL);
if (!key)
goto no_memory_2;
key->uid = uid;
key->gid = gid;
key->perm = perm;
- key->flags = 0;
- key->expiry = 0;
- key->payload.data = NULL;
- key->security = NULL;
if (!(flags & KEY_ALLOC_NOT_IN_QUOTA))
key->flags |= 1 << KEY_FLAG_IN_QUOTA;
if (flags & KEY_ALLOC_TRUSTED)
key->flags |= 1 << KEY_FLAG_TRUSTED;
- memset(&key->type_data, 0, sizeof(key->type_data));
-
#ifdef KEY_DEBUGGING
key->magic = KEY_DEBUG_MAGIC;
#endif
static unsigned long hash_key_type_and_desc(const struct keyring_index_key *index_key)
{
const unsigned level_shift = ASSOC_ARRAY_LEVEL_STEP;
- const unsigned long level_mask = ASSOC_ARRAY_LEVEL_STEP_MASK;
+ const unsigned long fan_mask = ASSOC_ARRAY_FAN_MASK;
const char *description = index_key->description;
unsigned long hash, type;
u32 piece;
* ordinary keys by making sure the lowest level segment in the hash is
* zero for keyrings and non-zero otherwise.
*/
- if (index_key->type != &key_type_keyring && (hash & level_mask) == 0)
+ if (index_key->type != &key_type_keyring && (hash & fan_mask) == 0)
return hash | (hash >> (ASSOC_ARRAY_KEY_CHUNK_SIZE - level_shift)) | 1;
- if (index_key->type == &key_type_keyring && (hash & level_mask) != 0)
- return (hash + (hash << level_shift)) & ~level_mask;
+ if (index_key->type == &key_type_keyring && (hash & fan_mask) != 0)
+ return (hash + (hash << level_shift)) & ~fan_mask;
return hash;
}
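The keyring/non-keyring steering above hinges on the bottom fan segment of the hash: keyrings must land in slot 0 of the root node and everything else must stay out of it. A standalone recomputation of the two fix-ups, assuming the assoc_array constants of the era (fan-out 16, hence a 4-bit level step and a 0xf fan mask):

    #include <stdio.h>

    #define FAN_MASK   0xfUL /* ASSOC_ARRAY_FAN_MASK for a fan-out of 16 */
    #define LEVEL_STEP 4     /* ASSOC_ARRAY_LEVEL_STEP: log2(16) */
    #define CHUNK_SIZE (sizeof(unsigned long) * 8)

    static unsigned long fixup(unsigned long hash, int is_keyring)
    {
            /* Non-keyrings must not land in fan slot 0 ... */
            if (!is_keyring && (hash & FAN_MASK) == 0)
                    return hash | (hash >> (CHUNK_SIZE - LEVEL_STEP)) | 1;
            /* ... and keyrings must. */
            if (is_keyring && (hash & FAN_MASK) != 0)
                    return (hash + (hash << LEVEL_STEP)) & ~FAN_MASK;
            return hash;
    }

    int main(void)
    {
            printf("%#lx\n", fixup(0xabc0UL, 0)); /* low bits forced nonzero */
            printf("%#lx\n", fixup(0xabc3UL, 1)); /* fan segment cleared */
            printf("%#lx\n", fixup(0xabc3UL, 0)); /* already fine: unchanged */
            return 0;
    }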
* Compare the index keys of a pair of objects and determine the bit position
* at which they differ - if they differ.
*/
-static int keyring_diff_objects(const void *_a, const void *_b)
+static int keyring_diff_objects(const void *object, const void *data)
{
- const struct key *key_a = keyring_ptr_to_key(_a);
- const struct key *key_b = keyring_ptr_to_key(_b);
+ const struct key *key_a = keyring_ptr_to_key(object);
const struct keyring_index_key *a = &key_a->index_key;
- const struct keyring_index_key *b = &key_b->index_key;
+ const struct keyring_index_key *b = data;
unsigned long seg_a, seg_b;
int level, i;
smp_read_barrier_depends();
ptr = ACCESS_ONCE(shortcut->next_node);
BUG_ON(!assoc_array_ptr_is_node(ptr));
- node = assoc_array_ptr_to_node(ptr);
}
+ node = assoc_array_ptr_to_node(ptr);
begin_node:
kdebug("begin_node");
#include <net/ip.h> /* for local_port_range[] */
#include <net/sock.h>
#include <net/tcp.h> /* struct or_callable used in sock_rcv_skb */
+#include <net/inet_connection_sock.h>
#include <net/net_namespace.h>
#include <net/netlabel.h>
#include <linux/uaccess.h>
#include "audit.h"
#include "avc_ss.h"
-#define SB_TYPE_FMT "%s%s%s"
-#define SB_SUBTYPE(sb) (sb->s_subtype && sb->s_subtype[0])
-#define SB_TYPE_ARGS(sb) sb->s_type->name, SB_SUBTYPE(sb) ? "." : "", SB_SUBTYPE(sb) ? sb->s_subtype : ""
-
extern struct security_operations *security_ops;
/* SECMARK reference count */
the first boot of the SELinux kernel before we have
assigned xattr values to the filesystem. */
if (!root_inode->i_op->getxattr) {
- printk(KERN_WARNING "SELinux: (dev %s, type "SB_TYPE_FMT") has no "
- "xattr support\n", sb->s_id, SB_TYPE_ARGS(sb));
+ printk(KERN_WARNING "SELinux: (dev %s, type %s) has no "
+ "xattr support\n", sb->s_id, sb->s_type->name);
rc = -EOPNOTSUPP;
goto out;
}
if (rc < 0 && rc != -ENODATA) {
if (rc == -EOPNOTSUPP)
printk(KERN_WARNING "SELinux: (dev %s, type "
- SB_TYPE_FMT") has no security xattr handler\n",
- sb->s_id, SB_TYPE_ARGS(sb));
+ "%s) has no security xattr handler\n",
+ sb->s_id, sb->s_type->name);
else
printk(KERN_WARNING "SELinux: (dev %s, type "
- SB_TYPE_FMT") getxattr errno %d\n", sb->s_id,
- SB_TYPE_ARGS(sb), -rc);
+ "%s) getxattr errno %d\n", sb->s_id,
+ sb->s_type->name, -rc);
goto out;
}
}
if (sbsec->behavior > ARRAY_SIZE(labeling_behaviors))
- printk(KERN_ERR "SELinux: initialized (dev %s, type "SB_TYPE_FMT"), unknown behavior\n",
- sb->s_id, SB_TYPE_ARGS(sb));
+ printk(KERN_ERR "SELinux: initialized (dev %s, type %s), unknown behavior\n",
+ sb->s_id, sb->s_type->name);
else
- printk(KERN_DEBUG "SELinux: initialized (dev %s, type "SB_TYPE_FMT"), %s\n",
- sb->s_id, SB_TYPE_ARGS(sb),
+ printk(KERN_DEBUG "SELinux: initialized (dev %s, type %s), %s\n",
+ sb->s_id, sb->s_type->name,
labeling_behaviors[sbsec->behavior-1]);
sbsec->flags |= SE_SBINITIALIZED;
const struct cred *cred = current_cred();
int rc = 0, i;
struct superblock_security_struct *sbsec = sb->s_security;
+ const char *name = sb->s_type->name;
struct inode *inode = sbsec->sb->s_root->d_inode;
struct inode_security_struct *root_isec = inode->i_security;
u32 fscontext_sid = 0, context_sid = 0, rootcontext_sid = 0;
strlen(mount_options[i]), &sid);
if (rc) {
printk(KERN_WARNING "SELinux: security_context_to_sid"
- "(%s) failed for (dev %s, type "SB_TYPE_FMT") errno=%d\n",
- mount_options[i], sb->s_id, SB_TYPE_ARGS(sb), rc);
+ "(%s) failed for (dev %s, type %s) errno=%d\n",
+ mount_options[i], sb->s_id, name, rc);
goto out;
}
switch (flags[i]) {
out_double_mount:
rc = -EINVAL;
printk(KERN_WARNING "SELinux: mount invalid. Same superblock, different "
- "security settings for (dev %s, type "SB_TYPE_FMT")\n", sb->s_id,
- SB_TYPE_ARGS(sb));
+ "security settings for (dev %s, type %s)\n", sb->s_id, name);
goto out;
}
rc = security_context_to_sid(mount_options[i], len, &sid);
if (rc) {
printk(KERN_WARNING "SELinux: security_context_to_sid"
- "(%s) failed for (dev %s, type "SB_TYPE_FMT") errno=%d\n",
- mount_options[i], sb->s_id, SB_TYPE_ARGS(sb), rc);
+ "(%s) failed for (dev %s, type %s) errno=%d\n",
+ mount_options[i], sb->s_id, sb->s_type->name, rc);
goto out_free_opts;
}
rc = -EINVAL;
return rc;
out_bad_option:
printk(KERN_WARNING "SELinux: unable to change security options "
- "during remount (dev %s, type "SB_TYPE_FMT")\n", sb->s_id,
- SB_TYPE_ARGS(sb));
+ "during remount (dev %s, type=%s)\n", sb->s_id,
+ sb->s_type->name);
goto out_free_opts;
}
u32 nlbl_sid;
u32 nlbl_type;
- err = selinux_skb_xfrm_sid(skb, &xfrm_sid);
+ err = selinux_xfrm_skb_sid(skb, &xfrm_sid);
if (unlikely(err))
return -EACCES;
err = selinux_netlbl_skbuff_getsid(skb, family, &nlbl_type, &nlbl_sid);
return 0;
}
+/**
+ * selinux_conn_sid - Determine the child socket label for a connection
+ * @sk_sid: the parent socket's SID
+ * @skb_sid: the packet's SID
+ * @conn_sid: the resulting connection SID
+ *
+ * If @skb_sid is valid then the user:role:type information from @sk_sid is
+ * combined with the MLS information from @skb_sid in order to create
+ * @conn_sid. If @skb_sid is not valid then @conn_sid is simply a copy
+ * of @sk_sid. Returns zero on success, negative values on failure.
+ *
+ */
+static int selinux_conn_sid(u32 sk_sid, u32 skb_sid, u32 *conn_sid)
+{
+ int err = 0;
+
+ if (skb_sid != SECSID_NULL)
+ err = security_sid_mls_copy(sk_sid, skb_sid, conn_sid);
+ else
+ *conn_sid = sk_sid;
+
+ return err;
+}
+
/* socket security operations */
static int socket_sockcreate_sid(const struct task_security_struct *tsec,
struct sk_security_struct *sksec = sk->sk_security;
int err;
u16 family = sk->sk_family;
- u32 newsid;
+ u32 connsid;
u32 peersid;
/* handle mapped IPv4 packets arriving via IPv6 sockets */
err = selinux_skb_peerlbl_sid(skb, family, &peersid);
if (err)
return err;
- if (peersid == SECSID_NULL) {
- req->secid = sksec->sid;
- req->peer_secid = SECSID_NULL;
- } else {
- err = security_sid_mls_copy(sksec->sid, peersid, &newsid);
- if (err)
- return err;
- req->secid = newsid;
- req->peer_secid = peersid;
- }
+ err = selinux_conn_sid(sksec->sid, peersid, &connsid);
+ if (err)
+ return err;
+ req->secid = connsid;
+ req->peer_secid = peersid;
return selinux_netlbl_inet_conn_request(req, family);
}
static unsigned int selinux_ip_output(struct sk_buff *skb,
u16 family)
{
+ struct sock *sk;
u32 sid;
if (!netlbl_enabled())
/* we do this in the LOCAL_OUT path and not the POST_ROUTING path
* because we want to make sure we apply the necessary labeling
* before IPsec is applied so we can leverage AH protection */
- if (skb->sk) {
- struct sk_security_struct *sksec = skb->sk->sk_security;
+ sk = skb->sk;
+ if (sk) {
+ struct sk_security_struct *sksec;
+
+ if (sk->sk_state == TCP_LISTEN)
+ /* if the socket is the listening state then this
+ * packet is a SYN-ACK packet which means it needs to
+ * be labeled based on the connection/request_sock and
+ * not the parent socket. unfortunately, we can't
+	 * look up the request_sock yet as it isn't queued on
+ * the parent socket until after the SYN-ACK is sent.
+ * the "solution" is to simply pass the packet as-is
+ * as any IP option based labeling should be copied
+ * from the initial connection request (in the IP
+ * layer). it is far from ideal, but until we get a
+ * security label in the packet itself this is the
+ * best we can do. */
+ return NF_ACCEPT;
+
+ /* standard practice, label using the parent socket */
+ sksec = sk->sk_security;
sid = sksec->sid;
} else
sid = SECINITSID_KERNEL;
* as fast and as clean as possible. */
if (!selinux_policycap_netpeer)
return selinux_ip_postroute_compat(skb, ifindex, family);
+
+ secmark_active = selinux_secmark_enabled();
+ peerlbl_active = selinux_peerlbl_enabled();
+ if (!secmark_active && !peerlbl_active)
+ return NF_ACCEPT;
+
+ sk = skb->sk;
+
#ifdef CONFIG_XFRM
/* If skb->dst->xfrm is non-NULL then the packet is undergoing an IPsec
* packet transformation so allow the packet to pass without any checks
* since we'll have another chance to perform access control checks
 * when the packet is on its final way out.
* NOTE: there appear to be some IPv6 multicast cases where skb->dst
- * is NULL, in this case go ahead and apply access control. */
- if (skb_dst(skb) != NULL && skb_dst(skb)->xfrm != NULL)
+ * is NULL, in this case go ahead and apply access control.
+ * NOTE: if this is a local socket (skb->sk != NULL) that is in the
+ * TCP listening state we cannot wait until the XFRM processing
+ * is done as we will miss out on the SA label if we do;
+ * unfortunately, this means more work, but it is only once per
+ * connection. */
+ if (skb_dst(skb) != NULL && skb_dst(skb)->xfrm != NULL &&
+ !(sk != NULL && sk->sk_state == TCP_LISTEN))
return NF_ACCEPT;
#endif
- secmark_active = selinux_secmark_enabled();
- peerlbl_active = selinux_peerlbl_enabled();
- if (!secmark_active && !peerlbl_active)
- return NF_ACCEPT;
- /* if the packet is being forwarded then get the peer label from the
- * packet itself; otherwise check to see if it is from a local
- * application or the kernel, if from an application get the peer label
- * from the sending socket, otherwise use the kernel's sid */
- sk = skb->sk;
if (sk == NULL) {
+ /* Without an associated socket the packet is either coming
+ * from the kernel or it is being forwarded; check the packet
+ * to determine which and if the packet is being forwarded
+ * query the packet directly to determine the security label. */
if (skb->skb_iif) {
secmark_perm = PACKET__FORWARD_OUT;
if (selinux_skb_peerlbl_sid(skb, family, &peer_sid))
secmark_perm = PACKET__SEND;
peer_sid = SECINITSID_KERNEL;
}
+ } else if (sk->sk_state == TCP_LISTEN) {
+ /* Locally generated packet but the associated socket is in the
+ * listening state which means this is a SYN-ACK packet. In
+ * this particular case the correct security label is assigned
+ * to the connection/request_sock but unfortunately we can't
+ * query the request_sock as it isn't queued on the parent
+ * socket until after the SYN-ACK packet is sent; the only
+ * viable choice is to regenerate the label like we do in
+ * selinux_inet_conn_request(). See also selinux_ip_output()
+ * for similar problems. */
+ u32 skb_sid;
+ struct sk_security_struct *sksec = sk->sk_security;
+ if (selinux_skb_peerlbl_sid(skb, family, &skb_sid))
+ return NF_DROP;
+ /* At this point, if the returned skb peerlbl is SECSID_NULL
+ * and the packet has been through at least one XFRM
+ * transformation then we must be dealing with the "final"
+ * form of labeled IPsec packet; since we've already applied
+ * all of our access controls on this packet we can safely
+ * pass the packet. */
+ if (skb_sid == SECSID_NULL) {
+ switch (family) {
+ case PF_INET:
+ if (IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
+ return NF_ACCEPT;
+ break;
+ case PF_INET6:
+ if (IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED)
+ return NF_ACCEPT;
+ break;
+ default:
+ return NF_DROP_ERR(-ECONNREFUSED);
+ }
+ }
+ if (selinux_conn_sid(sksec->sid, skb_sid, &peer_sid))
+ return NF_DROP;
+ secmark_perm = PACKET__SEND;
} else {
+ /* Locally generated packet, fetch the security label from the
+ * associated socket. */
struct sk_security_struct *sksec = sk->sk_security;
peer_sid = sksec->sid;
secmark_perm = PACKET__SEND;
int selinux_xfrm_postroute_last(u32 sk_sid, struct sk_buff *skb,
struct common_audit_data *ad, u8 proto);
int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid, int ckall);
+int selinux_xfrm_skb_sid(struct sk_buff *skb, u32 *sid);
static inline void selinux_xfrm_notify_policyload(void)
{
static inline void selinux_xfrm_notify_policyload(void)
{
}
-#endif
-static inline int selinux_skb_xfrm_sid(struct sk_buff *skb, u32 *sid)
+static inline int selinux_xfrm_skb_sid(struct sk_buff *skb, u32 *sid)
{
- return selinux_xfrm_decode_session(skb, sid, 0);
+ *sid = SECSID_NULL;
+ return 0;
}
+#endif
#endif /* _SELINUX_XFRM_H_ */
struct ocontext *c;
struct superblock_security_struct *sbsec = sb->s_security;
const char *fstype = sb->s_type->name;
- const char *subtype = (sb->s_subtype && sb->s_subtype[0]) ? sb->s_subtype : NULL;
- struct ocontext *base = NULL;
read_lock(&policy_rwlock);
- for (c = policydb.ocontexts[OCON_FSUSE]; c; c = c->next) {
- char *sub;
- int baselen;
-
- baselen = strlen(fstype);
-
- /* if base does not match, this is not the one */
- if (strncmp(fstype, c->u.name, baselen))
- continue;
-
- /* if there is no subtype, this is the one! */
- if (!subtype)
- break;
-
- /* skip past the base in this entry */
- sub = c->u.name + baselen;
-
- /* entry is only a base. save it. keep looking for subtype */
- if (sub[0] == '\0') {
- base = c;
- continue;
- }
-
- /* entry is not followed by a subtype, so it is not a match */
- if (sub[0] != '.')
- continue;
-
- /* whew, we found a subtype of this fstype */
- sub++; /* move past '.' */
-
- /* exact match of fstype AND subtype */
- if (!strcmp(subtype, sub))
+ c = policydb.ocontexts[OCON_FSUSE];
+ while (c) {
+ if (strcmp(fstype, c->u.name) == 0)
break;
+ c = c->next;
}
- /* in case we had found an fstype match but no subtype match */
- if (!c)
- c = base;
-
if (c) {
sbsec->behavior = c->v.behavior;
if (!c->sid[0]) {
NULL) ? 0 : 1);
}
-/*
- * LSM hook implementation that checks and/or returns the xfrm sid for the
- * incoming packet.
- */
-int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid, int ckall)
+static u32 selinux_xfrm_skb_sid_egress(struct sk_buff *skb)
{
- u32 sid_session = SECSID_NULL;
- struct sec_path *sp;
+ struct dst_entry *dst = skb_dst(skb);
+ struct xfrm_state *x;
- if (skb == NULL)
- goto out;
+ if (dst == NULL)
+ return SECSID_NULL;
+ x = dst->xfrm;
+ if (x == NULL || !selinux_authorizable_xfrm(x))
+ return SECSID_NULL;
+
+ return x->security->ctx_sid;
+}
+
+static int selinux_xfrm_skb_sid_ingress(struct sk_buff *skb,
+ u32 *sid, int ckall)
+{
+ u32 sid_session = SECSID_NULL;
+ struct sec_path *sp = skb->sp;
- sp = skb->sp;
if (sp) {
int i;
return 0;
}
+/*
+ * LSM hook implementation that checks and/or returns the xfrm sid for the
+ * incoming packet.
+ */
+int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid, int ckall)
+{
+ if (skb == NULL) {
+ *sid = SECSID_NULL;
+ return 0;
+ }
+ return selinux_xfrm_skb_sid_ingress(skb, sid, ckall);
+}
+
+int selinux_xfrm_skb_sid(struct sk_buff *skb, u32 *sid)
+{
+ int rc;
+
+ rc = selinux_xfrm_skb_sid_ingress(skb, sid, 0);
+ if (rc == 0 && *sid == SECSID_NULL)
+ *sid = selinux_xfrm_skb_sid_egress(skb);
+
+ return rc;
+}
+
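A minimal usage sketch of the new helper's convention (hypothetical caller, not part of the patch): a zero return with *sid still SECSID_NULL means no usable xfrm label was found on either the ingress sec_path or the egress dst->xfrm, and the call site supplies its own fallback; SECINITSID_KERNEL below stands in for whatever default the caller wants:

	u32 sid;

	if (selinux_xfrm_skb_sid(skb, &sid))
		return NF_DROP;			/* the lookup itself failed */
	if (sid == SECSID_NULL)			/* no xfrm label on the skb */
		sid = SECINITSID_KERNEL;	/* hypothetical fallback */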
/*
* LSM hook implementation that allocs and transfers uctx spec to xfrm_policy.
*/
return rc;
ctx = kmalloc(sizeof(*ctx) + str_len, GFP_ATOMIC);
- if (!ctx)
- return -ENOMEM;
+ if (!ctx) {
+ rc = -ENOMEM;
+ goto out;
+ }
ctx->ctx_doi = XFRM_SC_DOI_LSM;
ctx->ctx_alg = XFRM_SC_ALG_SELINUX;
ctx->ctx_sid = secid;
ctx->ctx_len = str_len;
memcpy(ctx->ctx_str, ctx_str, str_len);
- kfree(ctx_str);
x->security = ctx;
atomic_inc(&selinux_xfrm_refcount);
- return 0;
+out:
+ kfree(ctx_str);
+ return rc;
}
/*
case SNDRV_PCM_STATE_DISCONNECTED:
err = -EBADFD;
goto _endloop;
+ case SNDRV_PCM_STATE_PAUSED:
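+ /* a paused stream keeps waiting instead of hitting the timeout below */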
+ continue;
}
if (!tout) {
snd_printd("%s write error (DMA or IRQ trouble?)\n",
memset(path, 0, sizeof(*path));
}
+/* return a DAC if paired to the given pin by codec driver */
+static hda_nid_t get_preferred_dac(struct hda_codec *codec, hda_nid_t pin)
+{
+ struct hda_gen_spec *spec = codec->spec;
+ const hda_nid_t *list = spec->preferred_dacs;
+
+ if (!list)
+ return 0;
+ for (; *list; list += 2)
+ if (*list == pin)
+ return list[1];
+ return 0;
+}
+
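The preferred_dacs list walked above is a flat, zero-terminated array of (pin NID, DAC NID) pairs. A hypothetical table for illustration (a real one is installed via spec->gen.preferred_dacs in the AD1986A hunk further down):

	static const hda_nid_t example_pairs[] = {
		0x14, 0x02,	/* pin 0x14 -> DAC 0x02 (hypothetical NIDs) */
		0x15, 0x03,	/* pin 0x15 -> DAC 0x03 */
		0		/* terminator */
	};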
/* look for an empty DAC slot */
static hda_nid_t look_for_dac(struct hda_codec *codec, hda_nid_t pin,
bool is_digital)
continue;
}
- dacs[i] = look_for_dac(codec, pin, false);
+ dacs[i] = get_preferred_dac(codec, pin);
+ if (dacs[i]) {
+ if (is_dac_already_used(codec, dacs[i]))
+ badness += bad->shared_primary;
+ }
+
+ if (!dacs[i])
+ dacs[i] = look_for_dac(codec, pin, false);
if (!dacs[i] && !i) {
/* try to steal the DAC of surrounds for the front */
for (j = 1; j < num_outs; j++) {
return AC_PWRST_D3;
}
+/* mute all aamix inputs initially; parse up to the first leaves */
+static void mute_all_mixer_nid(struct hda_codec *codec, hda_nid_t mix)
+{
+ int i, nums;
+ const hda_nid_t *conn;
+ bool has_amp;
+
+ nums = snd_hda_get_conn_list(codec, mix, &conn);
+ has_amp = nid_has_mute(codec, mix, HDA_INPUT);
+ for (i = 0; i < nums; i++) {
+ if (has_amp)
+ snd_hda_codec_amp_stereo(codec, mix,
+ HDA_INPUT, i,
+ 0xff, HDA_AMP_MUTE);
+ else if (nid_has_volume(codec, conn[i], HDA_OUTPUT))
+ snd_hda_codec_amp_stereo(codec, conn[i],
+ HDA_OUTPUT, 0,
+ 0xff, HDA_AMP_MUTE);
+ }
+}
/*
* Parse the given BIOS configuration and set up the hda_gen_spec
}
}
+ /* mute all aamix inputs initially */
+ if (spec->mixer_nid)
+ mute_all_mixer_nid(codec, spec->mixer_nid);
+
dig_only:
parse_digital(codec);
const struct badness_table *main_out_badness;
const struct badness_table *extra_out_badness;
+ /* preferred pin/DAC pairs; a zero-terminated array of paired (pin, DAC) NIDs */
+ const hda_nid_t *preferred_dacs;
+
/* loopback mixing mode */
bool aamix_mode;
* white/black-list for enable_msi
*/
static struct snd_pci_quirk msi_black_list[] = {
+ SND_PCI_QUIRK(0x103c, 0x2191, "HP", 0), /* AMD Hudson */
+ SND_PCI_QUIRK(0x103c, 0x2192, "HP", 0), /* AMD Hudson */
+ SND_PCI_QUIRK(0x103c, 0x21f7, "HP", 0), /* AMD Hudson */
+ SND_PCI_QUIRK(0x103c, 0x21fa, "HP", 0), /* AMD Hudson */
SND_PCI_QUIRK(0x1043, 0x81f2, "ASUS", 0), /* Athlon64 X2 + nvidia */
SND_PCI_QUIRK(0x1043, 0x81f6, "ASUS", 0), /* nvidia */
SND_PCI_QUIRK(0x1043, 0x822d, "ASUS", 0), /* Athlon64 X2 + nvidia MCP55 */
{
int err;
struct ad198x_spec *spec;
+ static hda_nid_t preferred_pairs[] = {
+ 0x1a, 0x03,
+ 0x1b, 0x03,
+ 0x1c, 0x04,
+ 0x1d, 0x05,
+ 0x1e, 0x03,
+ 0
+ };
err = alloc_ad_spec(codec);
if (err < 0)
* So, let's disable the shared stream.
*/
spec->gen.multiout.no_share_stream = 1;
+ /* give fixed DAC/pin pairs */
+ spec->gen.preferred_dacs = preferred_pairs;
/* AD1986A can't manage the dynamic pin on/off smoothly */
spec->gen.auto_mute_via_amp = 1;
SND_PCI_QUIRK(0x1028, 0x0401, "Dell Vostro 1014", CXT5066_DELL_VOSTRO),
SND_PCI_QUIRK(0x1028, 0x0408, "Dell Inspiron One 19T", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x1028, 0x050f, "Dell Inspiron", CXT5066_IDEAPAD),
- SND_PCI_QUIRK(0x1028, 0x0510, "Dell Vostro", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x103c, 0x360b, "HP G60", CXT5066_HP_LAPTOP),
SND_PCI_QUIRK(0x1043, 0x13f3, "Asus A52J", CXT5066_ASUS),
SND_PCI_QUIRK(0x1043, 0x1643, "Asus K52JU", CXT5066_ASUS),
int err;
per_cvt = get_cvt(spec, 0);
- err = snd_hda_create_spdif_out_ctls(codec, per_cvt->cvt_nid,
- per_cvt->cvt_nid);
+ err = snd_hda_create_dig_out_ctls(codec, per_cvt->cvt_nid,
+ per_cvt->cvt_nid,
+ HDA_PCM_TYPE_HDMI);
if (err < 0)
return err;
return simple_hdmi_build_jack(codec, 0);
ALC269_FIXUP_ASUS_X101,
ALC271_FIXUP_AMIC_MIC2,
ALC271_FIXUP_HP_GATE_MIC_JACK,
+ ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572,
ALC269_FIXUP_ACER_AC700,
ALC269_FIXUP_LIMIT_INT_MIC_BOOST,
ALC269VB_FIXUP_ASUS_ZENBOOK,
.chained = true,
.chain_id = ALC271_FIXUP_AMIC_MIC2,
},
+ [ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_limit_int_mic_boost,
+ .chained = true,
+ .chain_id = ALC271_FIXUP_HP_GATE_MIC_JACK,
+ },
[ALC269_FIXUP_ACER_AC700] = {
.type = HDA_FIXUP_PINS,
.v.pins = (const struct hda_pintbl[]) {
SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK_VENDOR(0x1025, "Acer Aspire", ALC271_FIXUP_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
SND_PCI_QUIRK(0x1028, 0x05bd, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x05be, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0606, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0610, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0614, "Dell Inspiron 3135", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_MONO_SPEAKERS),
SND_PCI_QUIRK(0x1028, 0x061f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0629, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS),
+ SND_PCI_QUIRK(0x1028, 0x063e, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x063f, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0640, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE),
SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x05db, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0623, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0624, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0625, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0626, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0628, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_BASS_1A_CHMAP),
SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_CHMAP),
dma_params = ssc_p->dma_params[dir];
- ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_enable);
+ ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_disable);
ssc_writel(ssc_p->ssc->regs, IDR, dma_params->mask->ssc_error);
pr_debug("%s enabled SSC_SR=0x%08x\n",
return 0;
}
+static int atmel_ssc_trigger(struct snd_pcm_substream *substream,
+ int cmd, struct snd_soc_dai *dai)
+{
+ struct atmel_ssc_info *ssc_p = &ssc_info[dai->id];
+ struct atmel_pcm_dma_params *dma_params;
+ int dir;
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ dir = 0;
+ else
+ dir = 1;
+
+ dma_params = ssc_p->dma_params[dir];
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_enable);
+ break;
+ default:
+ ssc_writel(ssc_p->ssc->regs, CR, dma_params->mask->ssc_disable);
+ break;
+ }
+
+ return 0;
+}
#ifdef CONFIG_PM
static int atmel_ssc_suspend(struct snd_soc_dai *cpu_dai)
.startup = atmel_ssc_startup,
.shutdown = atmel_ssc_shutdown,
.prepare = atmel_ssc_prepare,
+ .trigger = atmel_ssc_trigger,
.hw_params = atmel_ssc_hw_params,
.set_fmt = atmel_ssc_set_dai_fmt,
.set_clkdiv = atmel_ssc_set_dai_clkdiv,
dai->stream_name = "WM8731 PCM";
dai->codec_dai_name = "wm8731-hifi";
dai->init = sam9x5_wm8731_init;
- dai->dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF
+ dai->dai_fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_NB_NF
| SND_SOC_DAIFMT_CBM_CFM;
ret = snd_soc_of_parse_card_name(card, "atmel,model");
{ "AEC Loopback", "HPOUT3L", "OUT3L" },
{ "AEC Loopback", "HPOUT3R", "OUT3R" },
{ "HPOUT3L", NULL, "OUT3L" },
- { "HPOUT3R", NULL, "OUT3L" },
+ { "HPOUT3R", NULL, "OUT3R" },
{ "AEC Loopback", "SPKOUTL", "OUT4L" },
{ "SPKOUTLN", NULL, "OUT4L" },
switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
case SND_SOC_DAIFMT_DSP_B:
- aif1 |= WM8904_AIF_LRCLK_INV;
+ aif1 |= 0x3 | WM8904_AIF_LRCLK_INV;
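+ /* fall through */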
case SND_SOC_DAIFMT_DSP_A:
aif1 |= 0x3;
break;
snd_soc_update_bits(codec, WM8962_CLOCKING_4,
WM8962_SYSCLK_RATE_MASK, clocking4);
+ /* DSPCLK_DIV is only generated correctly after SYSCLK has been
+ * enabled, so provisionally enable SYSCLK here and disable it
+ * again afterwards if the current bias_level hasn't reached
+ * SND_SOC_BIAS_ON.
+ */
+ if (codec->dapm.bias_level != SND_SOC_BIAS_ON)
+ snd_soc_update_bits(codec, WM8962_CLOCKING2,
+ WM8962_SYSCLK_ENA_MASK, WM8962_SYSCLK_ENA);
+
dspclk = snd_soc_read(codec, WM8962_CLOCKING1);
+
+ if (codec->dapm.bias_level != SND_SOC_BIAS_ON)
+ snd_soc_update_bits(codec, WM8962_CLOCKING2,
+ WM8962_SYSCLK_ENA_MASK, 0);
+
if (dspclk < 0) {
dev_err(codec->dev, "Failed to read DSPCLK: %d\n", dspclk);
return;
return ret;
/* Wait for the RAM to start, should be near instantaneous */
- count = 0;
- do {
+ for (count = 0; count < 10; ++count) {
ret = regmap_read(dsp->regmap, dsp->base + ADSP2_STATUS1,
&val);
if (ret != 0)
return ret;
- } while (!(val & ADSP2_RAM_RDY) && ++count < 10);
+
+ if (val & ADSP2_RAM_RDY)
+ break;
+
+ msleep(1);
+ }
if (!(val & ADSP2_RAM_RDY)) {
adsp_err(dsp, "Failed to start DSP RAM\n");
break;
}
- dapm->bias_level = level;
-
return 0;
}
.playback = {
.channels_min = 1,
.channels_max = 2,
- .rates = SNDRV_PCM_RATE_8000_192000 |
- SNDRV_PCM_RATE_CONTINUOUS |
- SNDRV_PCM_RATE_KNOT,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS,
+ .rate_min = 5512,
+ .rate_max = 192000,
.formats = KIRKWOOD_I2S_FORMATS,
},
.capture = {
.channels_min = 1,
.channels_max = 2,
- .rates = SNDRV_PCM_RATE_8000_192000 |
- SNDRV_PCM_RATE_CONTINUOUS |
- SNDRV_PCM_RATE_KNOT,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS,
+ .rate_min = 5512,
+ .rate_max = 192000,
.formats = KIRKWOOD_I2S_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
.playback = {
.channels_min = 1,
.channels_max = 2,
- .rates = SNDRV_PCM_RATE_8000_192000 |
- SNDRV_PCM_RATE_CONTINUOUS |
- SNDRV_PCM_RATE_KNOT,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS,
+ .rate_min = 5512,
+ .rate_max = 192000,
.formats = KIRKWOOD_SPDIF_FORMATS,
},
.capture = {
.channels_min = 1,
.channels_max = 2,
- .rates = SNDRV_PCM_RATE_8000_192000 |
- SNDRV_PCM_RATE_CONTINUOUS |
- SNDRV_PCM_RATE_KNOT,
+ .rates = SNDRV_PCM_RATE_CONTINUOUS,
+ .rate_min = 5512,
+ .rate_max = 192000,
.formats = KIRKWOOD_SPDIF_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
}
}
+static void dmaengine_pcm_release_chan(struct dmaengine_pcm *pcm)
+{
+ unsigned int i;
+
+ for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE;
+ i++) {
+ if (!pcm->chan[i])
+ continue;
+ dma_release_channel(pcm->chan[i]);
+ if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX)
+ break;
+ }
+}
+
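The early break above matters because, with SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX set, playback and capture share a single DMA channel (both chan[] slots point at the same channel), so releasing it once covers both directions and a second dma_release_channel() call would release the same channel twice.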
/**
* snd_dmaengine_pcm_register - Register a dmaengine based PCM device
* @dev: The parent device for the PCM device
const struct snd_dmaengine_pcm_config *config, unsigned int flags)
{
struct dmaengine_pcm *pcm;
+ int ret;
pcm = kzalloc(sizeof(*pcm), GFP_KERNEL);
if (!pcm)
dmaengine_pcm_request_chan_of(pcm, dev);
if (flags & SND_DMAENGINE_PCM_FLAG_NO_RESIDUE)
- return snd_soc_add_platform(dev, &pcm->platform,
+ ret = snd_soc_add_platform(dev, &pcm->platform,
&dmaengine_no_residue_pcm_platform);
else
- return snd_soc_add_platform(dev, &pcm->platform,
+ ret = snd_soc_add_platform(dev, &pcm->platform,
&dmaengine_pcm_platform);
+ if (ret)
+ goto err_free_dma;
+
+ return 0;
+
+err_free_dma:
+ dmaengine_pcm_release_chan(pcm);
+ kfree(pcm);
+ return ret;
}
EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_register);
{
struct snd_soc_platform *platform;
struct dmaengine_pcm *pcm;
- unsigned int i;
platform = snd_soc_lookup_platform(dev);
if (!platform)
pcm = soc_platform_to_pcm(platform);
- for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE; i++) {
- if (pcm->chan[i]) {
- dma_release_channel(pcm->chan[i]);
- if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX)
- break;
- }
- }
-
snd_soc_remove_platform(platform);
+ dmaengine_pcm_release_chan(pcm);
kfree(pcm);
}
EXPORT_SYMBOL_GPL(snd_dmaengine_pcm_unregister);
struct snd_soc_platform *platform = rtd->platform;
struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
struct snd_soc_dai *codec_dai = rtd->codec_dai;
- struct snd_soc_codec *codec = rtd->codec;
+ bool playback = substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass);
/* apply codec digital mute */
- if (!codec->active)
+ if ((playback && codec_dai->playback_active == 1) ||
+ (!playback && codec_dai->capture_active == 1))
snd_soc_dai_digital_mute(codec_dai, 1, substream->stream);
/* free any machine hw params */
unsigned int fmt)
{
struct tegra20_i2s *i2s = snd_soc_dai_get_drvdata(dai);
- unsigned int mask, val;
+ unsigned int mask = 0, val = 0;
switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
case SND_SOC_DAIFMT_NB_NF:
return -EINVAL;
}
- mask = TEGRA20_I2S_CTRL_MASTER_ENABLE;
+ mask |= TEGRA20_I2S_CTRL_MASTER_ENABLE;
switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
case SND_SOC_DAIFMT_CBS_CFS:
- val = TEGRA20_I2S_CTRL_MASTER_ENABLE;
+ val |= TEGRA20_I2S_CTRL_MASTER_ENABLE;
break;
case SND_SOC_DAIFMT_CBM_CFM:
break;
{
struct device *dev = dai->dev;
struct tegra20_spdif *spdif = snd_soc_dai_get_drvdata(dai);
- unsigned int mask, val;
+ unsigned int mask = 0, val = 0;
int ret, spdifclock;
- mask = TEGRA20_SPDIF_CTRL_PACK |
- TEGRA20_SPDIF_CTRL_BIT_MODE_MASK;
+ mask |= TEGRA20_SPDIF_CTRL_PACK |
+ TEGRA20_SPDIF_CTRL_BIT_MODE_MASK;
switch (params_format(params)) {
case SNDRV_PCM_FORMAT_S16_LE:
- val = TEGRA20_SPDIF_CTRL_PACK |
- TEGRA20_SPDIF_CTRL_BIT_MODE_16BIT;
+ val |= TEGRA20_SPDIF_CTRL_PACK |
+ TEGRA20_SPDIF_CTRL_BIT_MODE_16BIT;
break;
default:
return -EINVAL;
unsigned int fmt)
{
struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(dai);
- unsigned int mask, val;
+ unsigned int mask = 0, val = 0;
switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
case SND_SOC_DAIFMT_NB_NF:
return -EINVAL;
}
- mask = TEGRA30_I2S_CTRL_MASTER_ENABLE;
+ mask |= TEGRA30_I2S_CTRL_MASTER_ENABLE;
switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
case SND_SOC_DAIFMT_CBS_CFS:
- val = TEGRA30_I2S_CTRL_MASTER_ENABLE;
+ val |= TEGRA30_I2S_CTRL_MASTER_ENABLE;
break;
case SND_SOC_DAIFMT_CBM_CFM:
break;
return err;
}
- return err;
+ return 0;
}
int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
#include "helpers/bitmask.h"
static struct option set_opts[] = {
- { .name = "perf-bias", .has_arg = optional_argument, .flag = NULL, .val = 'b'},
- { .name = "sched-mc", .has_arg = optional_argument, .flag = NULL, .val = 'm'},
- { .name = "sched-smt", .has_arg = optional_argument, .flag = NULL, .val = 's'},
+ { .name = "perf-bias", .has_arg = required_argument, .flag = NULL, .val = 'b'},
+ { .name = "sched-mc", .has_arg = required_argument, .flag = NULL, .val = 'm'},
+ { .name = "sched-smt", .has_arg = required_argument, .flag = NULL, .val = 's'},
{ },
};
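As far as the getopt_long() semantics go: with optional_argument, optarg is only filled in when the value is attached to the option itself (--perf-bias=5); the space-separated form (--perf-bias 5) leaves optarg NULL and the value behind as a stray positional argument. Declaring these options required_argument, as above, makes both spellings work, which is presumably the motivation for the change.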
int r;
struct kvm_vcpu *vcpu, *v;
+ if (id >= KVM_MAX_VCPUS)
+ return -EINVAL;
+
vcpu = kvm_arch_vcpu_create(kvm, id);
if (IS_ERR(vcpu))
return PTR_ERR(vcpu);