| field | value | timestamp |
|---|---|---|
| author | Dhiren Parmar <dparmar@nvidia.com> | 2015-01-19 10:07:19 -0800 |
| committer | Dhiren Parmar <dparmar@nvidia.com> | 2015-01-20 00:32:41 -0800 |
| commit | 14f7ad1b1befe178caa36cb0ed920bdad7030663 | |
| tree | d3f5574e67e01829dab71848e3540a551ea2b5b1 | |
| parent | 1330d02ca9b244e6f5525c529c2ed5f859bdad6c | |
Revert "Merge branch 'linux-3.10.y' into HEAD"
This merge seems to have caused an encryption failure.
This reverts commit 077753ed93a33dc11e7273938870df9e1740ff5d.
Change-Id: I3d3ac40054fb5624bc45134d344a641689d6b5ed
Signed-off-by: Dhiren Parmar <dparmar@nvidia.com>
Reviewed-on: http://git-master/r/673728
1238 files changed, 5886 insertions(+), 13687 deletions(-)
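As a minimal sketch, the commands below show one common way a merge revert like the one above is produced and sanity-checked locally. The `-m 1` option (treating the first parent as mainline) is an assumption for illustration; this page does not record how the revert was actually generated.

```sh
# Hedged sketch, not the actual workflow used for this commit.
# Revert the merge commit named in the message above, keeping the history
# of the first parent (-m 1 is an assumed choice of mainline).
git revert -m 1 077753ed93a33dc11e7273938870df9e1740ff5d

# Show the resulting diffstat so it can be compared with the one listed above.
git show --stat HEAD
```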
diff --git a/Documentation/DocBook/media/Makefile b/Documentation/DocBook/media/Makefile index 1d27f0a1abd..f9fd615427f 100644 --- a/Documentation/DocBook/media/Makefile +++ b/Documentation/DocBook/media/Makefile @@ -195,7 +195,7 @@ DVB_DOCUMENTED = \ # install_media_images = \ - $(Q)-cp $(OBJIMGFILES) $(MEDIA_SRC_DIR)/v4l/*.svg $(MEDIA_OBJ_DIR)/media_api + $(Q)cp $(OBJIMGFILES) $(MEDIA_SRC_DIR)/v4l/*.svg $(MEDIA_OBJ_DIR)/media_api $(MEDIA_OBJ_DIR)/%: $(MEDIA_SRC_DIR)/%.b64 $(Q)base64 -d $< >$@ diff --git a/Documentation/SubmittingPatches b/Documentation/SubmittingPatches index 4dbba7e100a..6e97e73d87b 100644 --- a/Documentation/SubmittingPatches +++ b/Documentation/SubmittingPatches @@ -131,20 +131,6 @@ If you cannot condense your patch set into a smaller set of patches, then only post say 15 or so at a time and wait for review and integration. -If your patch fixes a bug in a specific commit, e.g. you found an issue using -git-bisect, please use the 'Fixes:' tag with the first 12 characters of the -SHA-1 ID, and the one line summary. -Example: - - Fixes: e21d2170f366 ("video: remove unnecessary platform_set_drvdata()") - -The following git-config settings can be used to add a pretty format for -outputting the above style in the git log or git show commands - - [core] - abbrev = 12 - [pretty] - fixes = Fixes: %h (\"%s\") 4) Style check your changes. @@ -434,7 +420,7 @@ person it names. This tag documents that potentially interested parties have been included in the discussion -14) Using Reported-by:, Tested-by:, Reviewed-by:, Suggested-by: and Fixes: +14) Using Reported-by:, Tested-by:, Reviewed-by: and Suggested-by: If this patch fixes a problem reported by somebody else, consider adding a Reported-by: tag to credit the reporter for their contribution. Please @@ -489,12 +475,6 @@ idea was not posted in a public forum. That said, if we diligently credit our idea reporters, they will, hopefully, be inspired to help us again in the future. -A Fixes: tag indicates that the patch fixes an issue in a previous commit. It -is used to make it easy to determine where a bug originated, which can help -review a bug fix. This tag also assists the stable kernel team in determining -which stable kernel versions should receive your fix. This is the preferred -method for indicating a bug fixed by the patch. See #2 above for more details. 
- 15) The canonical patch format diff --git a/Documentation/i2c/busses/i2c-i801 b/Documentation/i2c/busses/i2c-i801 index babe2ef1613..d29dea0f323 100644 --- a/Documentation/i2c/busses/i2c-i801 +++ b/Documentation/i2c/busses/i2c-i801 @@ -25,8 +25,6 @@ Supported adapters: * Intel Avoton (SOC) * Intel Wellsburg (PCH) * Intel Coleto Creek (PCH) - * Intel Wildcat Point-LP (PCH) - * Intel BayTrail (SOC) Datasheets: Publicly available at the Intel website On Intel Patsburg and later chipsets, both the normal host SMBus controller diff --git a/Documentation/input/elantech.txt b/Documentation/input/elantech.txt index e1ae127ed09..5602eb71ad5 100644 --- a/Documentation/input/elantech.txt +++ b/Documentation/input/elantech.txt @@ -504,12 +504,9 @@ byte 5: * reg_10 bit 7 6 5 4 3 2 1 0 - 0 0 0 0 R F T A + 0 0 0 0 0 0 0 A A: 1 = enable absolute tracking - T: 1 = enable two finger mode auto correct - F: 1 = disable ABS Position Filter - R: 1 = enable real hardware resolution 6.2 Native absolute mode 6 byte packet format ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/Documentation/ja_JP/HOWTO b/Documentation/ja_JP/HOWTO index 46ed7359346..050d37fe6d4 100644 --- a/Documentation/ja_JP/HOWTO +++ b/Documentation/ja_JP/HOWTO @@ -315,7 +315,7 @@ Andrew Morton ㌠Linux-kernel メーリングリストã«ã‚«ãƒ¼ãƒãƒ«ãƒªãƒªãƒ¼ã ã‚‚ã—ã€2.6.x.y カーãƒãƒ«ãŒå˜åœ¨ã—ãªã„å ´åˆã«ã¯ã€ç•ªå·ãŒä¸€ç•ªå¤§ãã„ 2.6.x ㌠最新ã®å®‰å®šç‰ˆã‚«ãƒ¼ãƒãƒ«ã§ã™ã€‚ -2.6.x.y 㯠"stable" ãƒãƒ¼ãƒ <stable@vger.kernel.org> ã§ãƒ¡ãƒ³ãƒ†ã•れã¦ãŠã‚Šã€å¿… +2.6.x.y 㯠"stable" ãƒãƒ¼ãƒ <stable@kernel.org> ã§ãƒ¡ãƒ³ãƒ†ã•れã¦ãŠã‚Šã€å¿… è¦ã«å¿œã˜ã¦ãƒªãƒªãƒ¼ã‚¹ã•れã¾ã™ã€‚通常ã®ãƒªãƒªãƒ¼ã‚¹æœŸé–“㯠2週間毎ã§ã™ãŒã€å·®ã—迫㣠ãŸå•題ãŒãªã‘れã°ã‚‚ã†å°‘ã—é•·ããªã‚‹ã“ã¨ã‚‚ã‚りã¾ã™ã€‚ã‚»ã‚ュリティ関連ã®å•題 ã®å ´åˆã¯ã“れã«å¯¾ã—ã¦ã ã„ãŸã„ã®å ´åˆã€ã™ãã«ãƒªãƒªãƒ¼ã‚¹ãŒã•れã¾ã™ã€‚ diff --git a/Documentation/ja_JP/stable_kernel_rules.txt b/Documentation/ja_JP/stable_kernel_rules.txt index 9dbda9b5d21..14265837c4c 100644 --- a/Documentation/ja_JP/stable_kernel_rules.txt +++ b/Documentation/ja_JP/stable_kernel_rules.txt @@ -50,16 +50,16 @@ linux-2.6.29/Documentation/stable_kernel_rules.txt -stable ツリーã«ãƒ‘ッãƒã‚’é€ä»˜ã™ã‚‹æ‰‹ç¶šã- - - 上記ã®è¦å‰‡ã«å¾“ã£ã¦ã„ã‚‹ã‹ã‚’確èªã—ãŸå¾Œã«ã€stable@vger.kernel.org ã«ãƒ‘ッム+ - 上記ã®è¦å‰‡ã«å¾“ã£ã¦ã„ã‚‹ã‹ã‚’確èªã—ãŸå¾Œã«ã€stable@kernel.org ã«ãƒ‘ッムをé€ã‚‹ã€‚ - é€ä¿¡è€…ã¯ãƒ‘ッãƒãŒã‚ューã«å—ã‘付ã‘られãŸéš›ã«ã¯ ACK ã‚’ã€å´ä¸‹ã•れãŸå ´åˆ ã«ã¯ NAK ã‚’å—ã‘å–る。ã“ã®å応ã¯é–‹ç™ºè€…ãŸã¡ã®ã‚¹ã‚±ã‚¸ãƒ¥ãƒ¼ãƒ«ã«ã‚ˆã£ã¦ã€æ•° æ—¥ã‹ã‹ã‚‹å ´åˆãŒã‚る。 - ã‚‚ã—å—ã‘å–られãŸã‚‰ã€ãƒ‘ッãƒã¯ä»–ã®é–‹ç™ºè€…ãŸã¡ã¨é–¢é€£ã™ã‚‹ã‚µãƒ–システム㮠メンテナーã«ã‚ˆã‚‹ãƒ¬ãƒ“ューã®ãŸã‚ã« -stable ã‚ューã«è¿½åŠ ã•れる。 - - パッãƒã« stable@vger.kernel.org ã®ã‚¢ãƒ‰ãƒ¬ã‚¹ãŒä»˜åŠ ã•れã¦ã„ã‚‹ã¨ãã«ã¯ã€ãれ + - パッãƒã« stable@kernel.org ã®ã‚¢ãƒ‰ãƒ¬ã‚¹ãŒä»˜åŠ ã•れã¦ã„ã‚‹ã¨ãã«ã¯ã€ãれ ㌠Linus ã®ãƒ„リーã«å…¥ã‚‹æ™‚ã«è‡ªå‹•的㫠stable ãƒãƒ¼ãƒ ã« email ã•れる。 - - ã‚»ã‚ュリティパッãƒã¯ã“ã®ã‚¨ã‚¤ãƒªã‚¢ã‚¹ (stable@vger.kernel.org) ã«é€ã‚‰ã‚Œã‚‹ã¹ + - ã‚»ã‚ュリティパッãƒã¯ã“ã®ã‚¨ã‚¤ãƒªã‚¢ã‚¹ (stable@kernel.org) ã«é€ã‚‰ã‚Œã‚‹ã¹ ãã§ã¯ãªãã€ä»£ã‚り㫠security@kernel.org ã®ã‚¢ãƒ‰ãƒ¬ã‚¹ã«é€ã‚‰ã‚Œã‚‹ã€‚ レビューサイクル- diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt deleted file mode 100644 index ea45dd3901e..00000000000 --- a/Documentation/lzo.txt +++ /dev/null @@ -1,164 +0,0 @@ - -LZO stream format as understood by Linux's LZO decompressor -=========================================================== - -Introduction - - This is not a specification. No specification seems to be publicly available - for the LZO stream format. 
This document describes what input format the LZO - decompressor as implemented in the Linux kernel understands. The file subject - of this analysis is lib/lzo/lzo1x_decompress_safe.c. No analysis was made on - the compressor nor on any other implementations though it seems likely that - the format matches the standard one. The purpose of this document is to - better understand what the code does in order to propose more efficient fixes - for future bug reports. - -Description - - The stream is composed of a series of instructions, operands, and data. The - instructions consist in a few bits representing an opcode, and bits forming - the operands for the instruction, whose size and position depend on the - opcode and on the number of literals copied by previous instruction. The - operands are used to indicate : - - - a distance when copying data from the dictionary (past output buffer) - - a length (number of bytes to copy from dictionary) - - the number of literals to copy, which is retained in variable "state" - as a piece of information for next instructions. - - Optionally depending on the opcode and operands, extra data may follow. These - extra data can be a complement for the operand (eg: a length or a distance - encoded on larger values), or a literal to be copied to the output buffer. - - The first byte of the block follows a different encoding from other bytes, it - seems to be optimized for literal use only, since there is no dictionary yet - prior to that byte. - - Lengths are always encoded on a variable size starting with a small number - of bits in the operand. If the number of bits isn't enough to represent the - length, up to 255 may be added in increments by consuming more bytes with a - rate of at most 255 per extra byte (thus the compression ratio cannot exceed - around 255:1). The variable length encoding using #bits is always the same : - - length = byte & ((1 << #bits) - 1) - if (!length) { - length = ((1 << #bits) - 1) - length += 255*(number of zero bytes) - length += first-non-zero-byte - } - length += constant (generally 2 or 3) - - For references to the dictionary, distances are relative to the output - pointer. Distances are encoded using very few bits belonging to certain - ranges, resulting in multiple copy instructions using different encodings. - Certain encodings involve one extra byte, others involve two extra bytes - forming a little-endian 16-bit quantity (marked LE16 below). - - After any instruction except the large literal copy, 0, 1, 2 or 3 literals - are copied before starting the next instruction. The number of literals that - were copied may change the meaning and behaviour of the next instruction. In - practice, only one instruction needs to know whether 0, less than 4, or more - literals were copied. This is the information stored in the <state> variable - in this implementation. This number of immediate literals to be copied is - generally encoded in the last two bits of the instruction but may also be - taken from the last two bits of an extra operand (eg: distance). - - End of stream is declared when a block copy of distance 0 is seen. Only one - instruction may encode this distance (0001HLLL), it takes one LE16 operand - for the distance, thus requiring 3 bytes. - - IMPORTANT NOTE : in the code some length checks are missing because certain - instructions are called under the assumption that a certain number of bytes - follow because it has already been garanteed before parsing the instructions. 
- They just have to "refill" this credit if they consume extra bytes. This is - an implementation design choice independant on the algorithm or encoding. - -Byte sequences - - First byte encoding : - - 0..17 : follow regular instruction encoding, see below. It is worth - noting that codes 16 and 17 will represent a block copy from - the dictionary which is empty, and that they will always be - invalid at this place. - - 18..21 : copy 0..3 literals - state = (byte - 17) = 0..3 [ copy <state> literals ] - skip byte - - 22..255 : copy literal string - length = (byte - 17) = 4..238 - state = 4 [ don't copy extra literals ] - skip byte - - Instruction encoding : - - 0 0 0 0 X X X X (0..15) - Depends on the number of literals copied by the last instruction. - If last instruction did not copy any literal (state == 0), this - encoding will be a copy of 4 or more literal, and must be interpreted - like this : - - 0 0 0 0 L L L L (0..15) : copy long literal string - length = 3 + (L ?: 15 + (zero_bytes * 255) + non_zero_byte) - state = 4 (no extra literals are copied) - - If last instruction used to copy between 1 to 3 literals (encoded in - the instruction's opcode or distance), the instruction is a copy of a - 2-byte block from the dictionary within a 1kB distance. It is worth - noting that this instruction provides little savings since it uses 2 - bytes to encode a copy of 2 other bytes but it encodes the number of - following literals for free. It must be interpreted like this : - - 0 0 0 0 D D S S (0..15) : copy 2 bytes from <= 1kB distance - length = 2 - state = S (copy S literals after this block) - Always followed by exactly one byte : H H H H H H H H - distance = (H << 2) + D + 1 - - If last instruction used to copy 4 or more literals (as detected by - state == 4), the instruction becomes a copy of a 3-byte block from the - dictionary from a 2..3kB distance, and must be interpreted like this : - - 0 0 0 0 D D S S (0..15) : copy 3 bytes from 2..3 kB distance - length = 3 - state = S (copy S literals after this block) - Always followed by exactly one byte : H H H H H H H H - distance = (H << 2) + D + 2049 - - 0 0 0 1 H L L L (16..31) - Copy of a block within 16..48kB distance (preferably less than 10B) - length = 2 + (L ?: 7 + (zero_bytes * 255) + non_zero_byte) - Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S - distance = 16384 + (H << 14) + D - state = S (copy S literals after this block) - End of stream is reached if distance == 16384 - - 0 0 1 L L L L L (32..63) - Copy of small block within 16kB distance (preferably less than 34B) - length = 2 + (L ?: 31 + (zero_bytes * 255) + non_zero_byte) - Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S - distance = D + 1 - state = S (copy S literals after this block) - - 0 1 L D D D S S (64..127) - Copy 3-4 bytes from block within 2kB distance - state = S (copy S literals after this block) - length = 3 + L - Always followed by exactly one byte : H H H H H H H H - distance = (H << 3) + D + 1 - - 1 L L D D D S S (128..255) - Copy 5-8 bytes from block within 2kB distance - state = S (copy S literals after this block) - length = 5 + L - Always followed by exactly one byte : H H H H H H H H - distance = (H << 3) + D + 1 - -Authors - - This document was written by Willy Tarreau <w@1wt.eu> on 2014/07/19 during an - analysis of the decompression code available in Linux 3.16-rc5. The code is - tricky, it is possible that this document contains mistakes or that a few - corner cases were overlooked. 
In any case, please report any doubt, fix, or - proposed updates to the author(s) so that the document can be updated. diff --git a/Documentation/sound/alsa/ALSA-Configuration.txt b/Documentation/sound/alsa/ALSA-Configuration.txt index 8f08b2a7179..95731a08f25 100644 --- a/Documentation/sound/alsa/ALSA-Configuration.txt +++ b/Documentation/sound/alsa/ALSA-Configuration.txt @@ -2026,8 +2026,8 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed. ------------------- Module for sound cards based on the Asus AV66/AV100/AV200 chips, - i.e., Xonar D1, DX, D2, D2X, DS, DSX, Essence ST (Deluxe), - Essence STX (II), HDAV1.3 (Deluxe), and HDAV1.3 Slim. + i.e., Xonar D1, DX, D2, D2X, DS, Essence ST (Deluxe), Essence STX, + HDAV1.3 (Deluxe), and HDAV1.3 Slim. This module supports autoprobe and multiple cards. diff --git a/Documentation/stable_kernel_rules.txt b/Documentation/stable_kernel_rules.txt index 8dfb6a5f427..b0714d8f678 100644 --- a/Documentation/stable_kernel_rules.txt +++ b/Documentation/stable_kernel_rules.txt @@ -29,9 +29,6 @@ Rules on what kind of patches are accepted, and which ones are not, into the Procedure for submitting patches to the -stable tree: - - If the patch covers files in net/ or drivers/net please follow netdev stable - submission guidelines as described in - Documentation/networking/netdev-FAQ.txt - Send the patch, after verifying that it follows the above rules, to stable@vger.kernel.org. You must note the upstream commit ID in the changelog of your submission, as well as the kernel version you wish diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt index 8d90c42e5db..9b34b168507 100644 --- a/Documentation/sysctl/kernel.txt +++ b/Documentation/sysctl/kernel.txt @@ -438,32 +438,6 @@ This file shows up if CONFIG_DEBUG_STACKOVERFLOW is enabled. ============================================================== -perf_cpu_time_max_percent: - -Hints to the kernel how much CPU time it should be allowed to -use to handle perf sampling events. If the perf subsystem -is informed that its samples are exceeding this limit, it -will drop its sampling frequency to attempt to reduce its CPU -usage. - -Some perf sampling happens in NMIs. If these samples -unexpectedly take too long to execute, the NMIs can become -stacked up next to each other so much that nothing else is -allowed to execute. - -0: disable the mechanism. Do not monitor or correct perf's - sampling rate no matter how CPU time it takes. - -1-100: attempt to throttle perf's sample rate to this - percentage of CPU. Note: the kernel calculates an - "expected" length of each sample event. 100 here means - 100% of that expected length. Even if this is set to - 100, you may still see sample throttling if this - length is exceeded. Set to 0 if you truly do not care - how much CPU is consumed. 
- -============================================================== - pid_max: diff --git a/Documentation/video4linux/gspca.txt b/Documentation/video4linux/gspca.txt index d2ba80bb7af..1e6b6531bbc 100644 --- a/Documentation/video4linux/gspca.txt +++ b/Documentation/video4linux/gspca.txt @@ -55,7 +55,6 @@ zc3xx 0458:700f Genius VideoCam Web V2 sonixj 0458:7025 Genius Eye 311Q sn9c20x 0458:7029 Genius Look 320s sonixj 0458:702e Genius Slim 310 NB -sn9c20x 0458:7045 Genius Look 1320 V2 sn9c20x 0458:704a Genius Slim 1320 sn9c20x 0458:704c Genius i-Look 1321 sn9c20x 045e:00f4 LifeCam VX-6000 (SN9C20x + OV9650) diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt index bd4370487b0..881582f75c9 100644 --- a/Documentation/x86/x86_64/mm.txt +++ b/Documentation/x86/x86_64/mm.txt @@ -12,8 +12,6 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB) ... unused hole ... -ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks -... unused hole ... ffffffff80000000 - ffffffffa0000000 (=512 MB) kernel text mapping, from phys 0 ffffffffa0000000 - ffffffffff5fffff (=1525 MB) module mapping space ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls diff --git a/Documentation/zh_CN/HOWTO b/Documentation/zh_CN/HOWTO index 7599eb38b76..7fba5aab9ef 100644 --- a/Documentation/zh_CN/HOWTO +++ b/Documentation/zh_CN/HOWTO @@ -237,7 +237,7 @@ kernel.org网站的pub/linux/kernel/v2.6/目录下找到它。它的开å‘éµå¾ª 如果没有2.6.x.yç‰ˆæœ¬å†…æ ¸å˜åœ¨ï¼Œé‚£ä¹ˆæœ€æ–°çš„2.6.xç‰ˆæœ¬å†…æ ¸å°±ç›¸å½“äºŽæ˜¯å½“å‰çš„稳定 ç‰ˆå†…æ ¸ã€‚ -2.6.x.y版本由“稳定版â€å°ç»„(邮件地å€<stable@vger.kernel.org>ï¼‰ç»´æŠ¤ï¼Œä¸€èˆ¬éš”å‘¨å‘ +2.6.x.y版本由“稳定版â€å°ç»„(邮件地å€<stable@kernel.org>ï¼‰ç»´æŠ¤ï¼Œä¸€èˆ¬éš”å‘¨å‘ å¸ƒæ–°ç‰ˆæœ¬ã€‚ å†…æ ¸æºç ä¸çš„Documentation/stable_kernel_rules.txt文件具体æè¿°äº†å¯è¢«ç¨³å®š diff --git a/Documentation/zh_CN/stable_kernel_rules.txt b/Documentation/zh_CN/stable_kernel_rules.txt index 26ea5ed7cd9..b5b9b0ab02f 100644 --- a/Documentation/zh_CN/stable_kernel_rules.txt +++ b/Documentation/zh_CN/stable_kernel_rules.txt @@ -42,7 +42,7 @@ Documentation/stable_kernel_rules.txt çš„ä¸æ–‡ç¿»è¯‘ å‘ç¨³å®šç‰ˆä»£ç æ ‘æäº¤è¡¥ä¸çš„过程: - - 在确认了补ä¸ç¬¦åˆä»¥ä¸Šçš„规则åŽï¼Œå°†è¡¥ä¸å‘é€åˆ°stable@vger.kernel.org。 + - 在确认了补ä¸ç¬¦åˆä»¥ä¸Šçš„规则åŽï¼Œå°†è¡¥ä¸å‘é€åˆ°stable@kernel.org。 - 如果补ä¸è¢«æŽ¥å—到队列里,å‘é€è€…会收到一个ACK回å¤ï¼Œå¦‚果没有被接å—,收 到的是NAK回å¤ã€‚回å¤éœ€è¦å‡ 天的时间,这å–决于开å‘者的时间安排。 - 被接å—的补ä¸ä¼šè¢«åŠ åˆ°ç¨³å®šç‰ˆæœ¬é˜Ÿåˆ—é‡Œï¼Œç‰å¾…å…¶ä»–å¼€å‘者的审查。 @@ -1,6 +1,6 @@ VERSION = 3 PATCHLEVEL = 10 -SUBLEVEL = 60 +SUBLEVEL = 33 EXTRAVERSION = NAME = TOSSUG Baby Fish @@ -622,8 +622,6 @@ KBUILD_CFLAGS += -fomit-frame-pointer endif endif -KBUILD_CFLAGS += $(call cc-option, -fno-var-tracking-assignments) - ifdef CONFIG_DEBUG_INFO KBUILD_CFLAGS += -g KBUILD_AFLAGS += -gdwarf-2 diff --git a/arch/arc/boot/dts/nsimosci.dts b/arch/arc/boot/dts/nsimosci.dts index 398064cef74..ea16d782af5 100644 --- a/arch/arc/boot/dts/nsimosci.dts +++ b/arch/arc/boot/dts/nsimosci.dts @@ -11,16 +11,13 @@ / { compatible = "snps,nsimosci"; - clock-frequency = <20000000>; /* 20 MHZ */ + clock-frequency = <80000000>; /* 80 MHZ */ #address-cells = <1>; #size-cells = <1>; interrupt-parent = <&intc>; chosen { - /* this is for console on PGU */ - /* bootargs = "console=tty0 consoleblank=0"; */ - /* this is for console on serial */ - bootargs = "earlycon=uart8250,mmio32,0xc0000000,115200n8 console=tty0 console=ttyS0,115200n8 consoleblank=0 debug"; + bootargs = 
"console=tty0 consoleblank=0"; }; aliases { @@ -47,14 +44,15 @@ }; uart0: serial@c0000000 { - compatible = "ns8250"; + compatible = "snps,dw-apb-uart"; reg = <0xc0000000 0x2000>; interrupts = <11>; + #clock-frequency = <80000000>; clock-frequency = <3686400>; baud = <115200>; reg-shift = <2>; reg-io-width = <4>; - no-loopback-test = <1>; + status = "okay"; }; pgu0: pgu@c9000000 { diff --git a/arch/arc/configs/nsimosci_defconfig b/arch/arc/configs/nsimosci_defconfig index 00788e741ce..446c96c24ef 100644 --- a/arch/arc/configs/nsimosci_defconfig +++ b/arch/arc/configs/nsimosci_defconfig @@ -54,7 +54,6 @@ CONFIG_SERIO_ARC_PS2=y CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_SERIAL_8250_DW=y -CONFIG_SERIAL_OF_PLATFORM=y CONFIG_SERIAL_ARC=y CONFIG_SERIAL_ARC_CONSOLE=y # CONFIG_HW_RANDOM is not set diff --git a/arch/arc/include/asm/irqflags.h b/arch/arc/include/asm/irqflags.h index c29d56587bf..eac07166820 100644 --- a/arch/arc/include/asm/irqflags.h +++ b/arch/arc/include/asm/irqflags.h @@ -137,6 +137,13 @@ static inline void arch_unmask_irq(unsigned int irq) flag \scratch .endm +.macro IRQ_DISABLE_SAVE scratch, save + lr \scratch, [status32] + mov \save, \scratch /* Make a copy */ + bic \scratch, \scratch, (STATUS_E1_MASK | STATUS_E2_MASK) + flag \scratch +.endm + .macro IRQ_ENABLE scratch lr \scratch, [status32] or \scratch, \scratch, (STATUS_E1_MASK | STATUS_E2_MASK) diff --git a/arch/arc/include/asm/kgdb.h b/arch/arc/include/asm/kgdb.h index e897610c657..4930957ca3d 100644 --- a/arch/arc/include/asm/kgdb.h +++ b/arch/arc/include/asm/kgdb.h @@ -19,7 +19,7 @@ * register API yet */ #undef DBG_MAX_REG_NUM -#define GDB_MAX_REGS 87 +#define GDB_MAX_REGS 39 #define BREAK_INSTR_SIZE 2 #define CACHE_FLUSH_IS_SAFE 1 @@ -33,27 +33,23 @@ static inline void arch_kgdb_breakpoint(void) extern void kgdb_trap(struct pt_regs *regs, int param); -/* This is the numbering of registers according to the GDB. See GDB's - * arc-tdep.h for details. - * - * Registers are ordered for GDB 7.5. It is incompatible with GDB 6.8. */ -enum arc_linux_regnums { +enum arc700_linux_regnums { _R0 = 0, _R1, _R2, _R3, _R4, _R5, _R6, _R7, _R8, _R9, _R10, _R11, _R12, _R13, _R14, _R15, _R16, _R17, _R18, _R19, _R20, _R21, _R22, _R23, _R24, _R25, _R26, - _FP = 27, - __SP = 28, - _R30 = 30, - _BLINK = 31, - _LP_COUNT = 60, - _STOP_PC = 64, - _RET = 64, - _LP_START = 65, - _LP_END = 66, - _STATUS32 = 67, - _ECR = 76, - _BTA = 82, + _BTA = 27, + _LP_START = 28, + _LP_END = 29, + _LP_COUNT = 30, + _STATUS32 = 31, + _BLINK = 32, + _FP = 33, + __SP = 34, + _EFA = 35, + _RET = 36, + _ORIG_R8 = 37, + _STOP_PC = 38 }; #else diff --git a/arch/arc/include/uapi/asm/ptrace.h b/arch/arc/include/uapi/asm/ptrace.h index ef9d79a3db2..30333cec0fe 100644 --- a/arch/arc/include/uapi/asm/ptrace.h +++ b/arch/arc/include/uapi/asm/ptrace.h @@ -11,7 +11,6 @@ #ifndef _UAPI__ASM_ARC_PTRACE_H #define _UAPI__ASM_ARC_PTRACE_H -#define PTRACE_GET_THREAD_AREA 25 #ifndef __ASSEMBLY__ /* diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S index 6f3cd0fb4b5..6dbe359c760 100644 --- a/arch/arc/kernel/entry.S +++ b/arch/arc/kernel/entry.S @@ -589,7 +589,11 @@ ARC_ENTRY ret_from_exception ; Pre-{IRQ,Trap,Exception} K/U mode from pt_regs->status32 ld r8, [sp, PT_status32] ; returning to User/Kernel Mode +#ifdef CONFIG_PREEMPT bbit0 r8, STATUS_U_BIT, resume_kernel_mode +#else + bbit0 r8, STATUS_U_BIT, restore_regs +#endif ; Before returning to User mode check-for-and-complete any pending work ; such as rescheduling/signal-delivery etc. 
@@ -649,15 +653,10 @@ resume_user_mode_begin: b resume_user_mode_begin ; unconditionally back to U mode ret chks ; for single exit point from this block -resume_kernel_mode: - - ; Disable Interrupts from this point on - ; CONFIG_PREEMPT: This is a must for preempt_schedule_irq() - ; !CONFIG_PREEMPT: To ensure restore_regs is intr safe - IRQ_DISABLE r9 - #ifdef CONFIG_PREEMPT +resume_kernel_mode: + ; Can't preempt if preemption disabled GET_CURR_THR_INFO_FROM_SP r10 ld r8, [r10, THREAD_INFO_PREEMPT_COUNT] @@ -667,6 +666,8 @@ resume_kernel_mode: ld r9, [r10, THREAD_INFO_FLAGS] bbit0 r9, TIF_NEED_RESCHED, restore_regs + IRQ_DISABLE r9 + ; Invoke PREEMPTION bl preempt_schedule_irq @@ -679,11 +680,12 @@ resume_kernel_mode: ; ; Restore the saved sys context (common exit-path for EXCPN/IRQ/Trap) ; IRQ shd definitely not happen between now and rtie -; All 2 entry points to here already disable interrupts restore_regs : - lr r10, [status32] + ; Disable Interrupts while restoring reg-file back + ; XXX can this be optimised out + IRQ_DISABLE_SAVE r9, r10 ;@r10 has prisitine (pre-disable) copy #ifdef CONFIG_ARC_CURR_IN_REG ; Restore User R25 diff --git a/arch/arc/kernel/ptrace.c b/arch/arc/kernel/ptrace.c index f8a36ed9e0d..0851604bb9c 100644 --- a/arch/arc/kernel/ptrace.c +++ b/arch/arc/kernel/ptrace.c @@ -136,10 +136,6 @@ long arch_ptrace(struct task_struct *child, long request, pr_debug("REQ=%ld: ADDR =0x%lx, DATA=0x%lx)\n", request, addr, data); switch (request) { - case PTRACE_GET_THREAD_AREA: - ret = put_user(task_thread_info(child)->thr_ptr, - (unsigned long __user *)data); - break; default: ret = ptrace_request(child, request, addr, data); break; diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index e69890f1c70..fdccc759df5 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -4,7 +4,6 @@ config ARM select ARCH_BINFMT_ELF_RANDOMIZE_PIE select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE select ARCH_HAVE_CUSTOM_GPIO_H - select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_WANT_IPC_PARSE_VERSION select BUILDTIME_EXTABLE_SORT if MMU diff --git a/arch/arm/boot/dts/armada-370-xp.dtsi b/arch/arm/boot/dts/armada-370-xp.dtsi index ddd068bb145..4d12d2347c1 100644 --- a/arch/arm/boot/dts/armada-370-xp.dtsi +++ b/arch/arm/boot/dts/armada-370-xp.dtsi @@ -92,7 +92,6 @@ #size-cells = <0>; compatible = "marvell,orion-mdio"; reg = <0x72004 0x4>; - clocks = <&gateclk 4>; }; ethernet@70000 { diff --git a/arch/arm/boot/dts/armada-xp-gp.dts b/arch/arm/boot/dts/armada-xp-gp.dts index f97550420fc..76db557adbe 100644 --- a/arch/arm/boot/dts/armada-xp-gp.dts +++ b/arch/arm/boot/dts/armada-xp-gp.dts @@ -124,7 +124,7 @@ /* Device Bus parameters are required */ /* Read parameters */ - devbus,bus-width = <16>; + devbus,bus-width = <8>; devbus,turn-off-ps = <60000>; devbus,badr-skew-ps = <0>; devbus,acc-first-ps = <124000>; diff --git a/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts b/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts index 9746d0e7fcb..fdea75c7341 100644 --- a/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts +++ b/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts @@ -152,7 +152,7 @@ /* Device Bus parameters are required */ /* Read parameters */ - devbus,bus-width = <16>; + devbus,bus-width = <8>; devbus,turn-off-ps = <60000>; devbus,badr-skew-ps = <0>; devbus,acc-first-ps = <124000>; diff --git a/arch/arm/boot/dts/exynos5250-arndale.dts b/arch/arm/boot/dts/exynos5250-arndale.dts index b64cb43a729..02cfc76d002 100644 --- 
a/arch/arm/boot/dts/exynos5250-arndale.dts +++ b/arch/arm/boot/dts/exynos5250-arndale.dts @@ -263,7 +263,6 @@ regulator-name = "vdd_g3d"; regulator-min-microvolt = <1000000>; regulator-max-microvolt = <1000000>; - regulator-always-on; regulator-boot-on; op_mode = <1>; }; diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi index e524316998f..eb83aa039b8 100644 --- a/arch/arm/boot/dts/imx53.dtsi +++ b/arch/arm/boot/dts/imx53.dtsi @@ -71,7 +71,7 @@ ipu: ipu@18000000 { #crtc-cells = <1>; compatible = "fsl,imx53-ipu"; - reg = <0x18000000 0x08000000>; + reg = <0x18000000 0x080000000>; interrupts = <11 10>; clocks = <&clks 59>, <&clks 110>, <&clks 61>; clock-names = "bus", "di0", "di1"; diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig index adb9aa5c88c..2e67a272df7 100644 --- a/arch/arm/configs/multi_v7_defconfig +++ b/arch/arm/configs/multi_v7_defconfig @@ -1,7 +1,6 @@ CONFIG_EXPERIMENTAL=y CONFIG_NO_HZ=y CONFIG_HIGH_RES_TIMERS=y -CONFIG_BLK_DEV_INITRD=y CONFIG_ARCH_MVEBU=y CONFIG_MACH_ARMADA_370=y CONFIG_ARCH_SIRF=y @@ -23,7 +22,6 @@ CONFIG_AEABI=y CONFIG_HIGHMEM=y CONFIG_HIGHPTE=y CONFIG_ARM_APPENDED_DTB=y -CONFIG_ARM_ATAG_DTB_COMPAT=y CONFIG_VFP=y CONFIG_NEON=y CONFIG_NET=y @@ -48,8 +46,6 @@ CONFIG_SERIAL_SIRFSOC=y CONFIG_SERIAL_SIRFSOC_CONSOLE=y CONFIG_SERIAL_VT8500=y CONFIG_SERIAL_VT8500_CONSOLE=y -CONFIG_SERIAL_XILINX_PS_UART=y -CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y CONFIG_IPMI_HANDLER=y CONFIG_IPMI_SI=y CONFIG_I2C=y diff --git a/arch/arm/include/asm/div64.h b/arch/arm/include/asm/div64.h index a66061aef29..fe92ccf1d0b 100644 --- a/arch/arm/include/asm/div64.h +++ b/arch/arm/include/asm/div64.h @@ -156,7 +156,7 @@ /* Select the best insn combination to perform the */ \ /* actual __m * __n / (__p << 64) operation. 
*/ \ if (!__c) { \ - asm ( "umull %Q0, %R0, %Q1, %Q2\n\t" \ + asm ( "umull %Q0, %R0, %1, %Q2\n\t" \ "mov %Q0, #0" \ : "=&r" (__res) \ : "r" (__m), "r" (__n) \ diff --git a/arch/arm/include/asm/futex.h b/arch/arm/include/asm/futex.h index 2aff798fbef..e42cf597f6e 100644 --- a/arch/arm/include/asm/futex.h +++ b/arch/arm/include/asm/futex.h @@ -3,6 +3,11 @@ #ifdef __KERNEL__ +#if defined(CONFIG_CPU_USE_DOMAINS) && defined(CONFIG_SMP) +/* ARM doesn't provide unprivileged exclusive memory accessors */ +#include <asm-generic/futex.h> +#else + #include <linux/futex.h> #include <linux/uaccess.h> #include <asm/errno.h> @@ -159,5 +164,6 @@ futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr) return ret; } +#endif /* !(CPU_USE_DOMAINS && SMP) */ #endif /* __KERNEL__ */ #endif /* _ASM_ARM_FUTEX_H */ diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h index d070741b2b3..652b56086de 100644 --- a/arch/arm/include/asm/io.h +++ b/arch/arm/include/asm/io.h @@ -130,16 +130,16 @@ static inline u32 __raw_readl(const volatile void __iomem *addr) */ extern void __iomem *__arm_ioremap_pfn_caller(unsigned long, unsigned long, size_t, unsigned int, void *); -extern void __iomem *__arm_ioremap_caller(phys_addr_t, size_t, unsigned int, +extern void __iomem *__arm_ioremap_caller(unsigned long, size_t, unsigned int, void *); extern void __iomem *__arm_ioremap_pfn(unsigned long, unsigned long, size_t, unsigned int); -extern void __iomem *__arm_ioremap(phys_addr_t, size_t, unsigned int); -extern void __iomem *__arm_ioremap_exec(phys_addr_t, size_t, bool cached); +extern void __iomem *__arm_ioremap(unsigned long, size_t, unsigned int); +extern void __iomem *__arm_ioremap_exec(unsigned long, size_t, bool cached); extern void __iounmap(volatile void __iomem *addr); extern void __arm_iounmap(volatile void __iomem *addr); -extern void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t, +extern void __iomem * (*arch_ioremap_caller)(unsigned long, size_t, unsigned int, void *); extern void (*arch_iounmap)(volatile void __iomem *); diff --git a/arch/arm/include/asm/outercache.h b/arch/arm/include/asm/outercache.h index 353a31b77cd..544563c4b5b 100644 --- a/arch/arm/include/asm/outercache.h +++ b/arch/arm/include/asm/outercache.h @@ -39,10 +39,10 @@ struct outer_cache_fns { void (*resume)(void); }; -extern struct outer_cache_fns outer_cache; - #ifdef CONFIG_OUTER_CACHE +extern struct outer_cache_fns outer_cache; + static inline void outer_inv_range(phys_addr_t start, phys_addr_t end) { if (outer_cache.inv_range) diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h index c98c9c89b95..f97ee02386e 100644 --- a/arch/arm/include/asm/pgtable-2level.h +++ b/arch/arm/include/asm/pgtable-2level.h @@ -140,7 +140,6 @@ #define L_PTE_MT_DEV_NONSHARED (_AT(pteval_t, 0x0c) << 2) /* 1100 */ #define L_PTE_MT_DEV_WC (_AT(pteval_t, 0x09) << 2) /* 1001 */ #define L_PTE_MT_DEV_CACHED (_AT(pteval_t, 0x0b) << 2) /* 1011 */ -#define L_PTE_MT_VECTORS (_AT(pteval_t, 0x0f) << 2) /* 1111 */ #define L_PTE_MT_MASK (_AT(pteval_t, 0x0f) << 2) #ifndef __ASSEMBLY__ diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h index b07c09e5a0a..dd64cc6f9cb 100644 --- a/arch/arm/include/asm/spinlock.h +++ b/arch/arm/include/asm/spinlock.h @@ -107,7 +107,7 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock) " subs %1, %0, %0, ror #16\n" " addeq %0, %0, %4\n" " strexeq %2, %0, [%3]" - : "=&r" (slock), "=&r" (contended), "=&r" (res) + : "=&r" (slock), "=&r" (contended), "=r" (res) 
: "r" (&lock->slock), "I" (1 << TICKET_SHIFT) : "cc"); } while (res); diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 20e1c994669..7e1f76027f6 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -164,9 +164,8 @@ extern int __put_user_8(void *, unsigned long long); #define __put_user_check(x,p) \ ({ \ unsigned long __limit = current_thread_info()->addr_limit - 1; \ - const typeof(*(p)) __user *__tmp_p = (p); \ register const typeof(*(p)) __r2 asm("r2") = (x); \ - register const typeof(*(p)) __user *__p asm("r0") = __tmp_p; \ + register const typeof(*(p)) __user *__p asm("r0") = (p);\ register unsigned long __l asm("r1") = __limit; \ register int __e asm("r0"); \ switch (sizeof(*(__p))) { \ diff --git a/arch/arm/include/asm/unistd.h b/arch/arm/include/asm/unistd.h index cbd61977c99..141baa3f9a7 100644 --- a/arch/arm/include/asm/unistd.h +++ b/arch/arm/include/asm/unistd.h @@ -48,5 +48,6 @@ */ #define __IGNORE_fadvise64_64 #define __IGNORE_migrate_pages +#define __IGNORE_kcmp #endif /* __ASM_ARM_UNISTD_H */ diff --git a/arch/arm/kernel/crash_dump.c b/arch/arm/kernel/crash_dump.c index 5d1286d5115..90c50d4b43f 100644 --- a/arch/arm/kernel/crash_dump.c +++ b/arch/arm/kernel/crash_dump.c @@ -39,7 +39,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, if (!csize) return 0; - vaddr = ioremap(__pfn_to_phys(pfn), PAGE_SIZE); + vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE); if (!vaddr) return -ENOMEM; diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S index 4bc816a74a2..bc5bc0a9713 100644 --- a/arch/arm/kernel/entry-common.S +++ b/arch/arm/kernel/entry-common.S @@ -362,16 +362,6 @@ ENTRY(vector_swi) str r0, [sp, #S_OLD_R0] @ Save OLD_R0 zero_fp -#ifdef CONFIG_ALIGNMENT_TRAP - ldr ip, __cr_alignment - ldr ip, [ip] - mcr p15, 0, ip, c1, c0 @ update control register -#endif - - enable_irq - ct_user_exit - get_thread_info tsk - /* * Get the system call number. */ @@ -385,9 +375,9 @@ ENTRY(vector_swi) #ifdef CONFIG_ARM_THUMB tst r8, #PSR_T_BIT movne r10, #0 @ no thumb OABI emulation - USER( ldreq r10, [lr, #-4] ) @ get SWI instruction + ldreq r10, [lr, #-4] @ get SWI instruction #else - USER( ldr r10, [lr, #-4] ) @ get SWI instruction + ldr r10, [lr, #-4] @ get SWI instruction #endif #ifdef CONFIG_CPU_ENDIAN_BE8 rev r10, r10 @ little endian instruction @@ -402,13 +392,22 @@ ENTRY(vector_swi) /* Legacy ABI only, possibly thumb mode. */ tst r8, #PSR_T_BIT @ this is SPSR from save_user_regs addne scno, r7, #__NR_SYSCALL_BASE @ put OS number in - USER( ldreq scno, [lr, #-4] ) + ldreq scno, [lr, #-4] #else /* Legacy ABI only. */ - USER( ldr scno, [lr, #-4] ) @ get SWI instruction + ldr scno, [lr, #-4] @ get SWI instruction #endif +#ifdef CONFIG_ALIGNMENT_TRAP + ldr ip, __cr_alignment + ldr ip, [ip] + mcr p15, 0, ip, c1, c0 @ update control register +#endif + enable_irq + ct_user_exit + + get_thread_info tsk adr tbl, sys_call_table @ load syscall table pointer #if defined(CONFIG_OABI_COMPAT) @@ -443,21 +442,6 @@ local_restart: eor r0, scno, #__NR_SYSCALL_BASE @ put OS number back bcs arm_syscall b sys_ni_syscall @ not private func - -#if defined(CONFIG_OABI_COMPAT) || !defined(CONFIG_AEABI) - /* - * We failed to handle a fault trying to access the page - * containing the swi instruction, but we're not really in a - * position to return -EFAULT. Instead, return back to the - * instruction and re-enter the user fault handling path trying - * to page it in. 
This will likely result in sending SEGV to the - * current task. - */ -9001: - sub lr, lr, #4 - str lr, [sp, #S_PC] - b ret_fast_syscall -#endif ENDPROC(vector_swi) /* diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c index 1e782bdeee4..9723d17b8f3 100644 --- a/arch/arm/kernel/irq.c +++ b/arch/arm/kernel/irq.c @@ -163,7 +163,7 @@ static bool migrate_one_irq(struct irq_desc *desc) c = irq_data_get_irq_chip(d); if (!c->irq_set_affinity) pr_debug("IRQ%u: unable to set affinity\n", d->irq); - else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret) + else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret) cpumask_copy(d->affinity, affinity); return ret; diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c index 70ae735dec5..163b160c69e 100644 --- a/arch/arm/kernel/machine_kexec.c +++ b/arch/arm/kernel/machine_kexec.c @@ -14,11 +14,10 @@ #include <asm/pgalloc.h> #include <asm/mmu_context.h> #include <asm/cacheflush.h> -#include <asm/fncpy.h> #include <asm/mach-types.h> #include <asm/system_misc.h> -extern void relocate_new_kernel(void); +extern const unsigned char relocate_new_kernel[]; extern const unsigned int relocate_new_kernel_size; extern unsigned long kexec_start_address; @@ -134,8 +133,6 @@ void machine_kexec(struct kimage *image) { unsigned long page_list; unsigned long reboot_code_buffer_phys; - unsigned long reboot_entry = (unsigned long)relocate_new_kernel; - unsigned long reboot_entry_phys; void *reboot_code_buffer; if (num_online_cpus() > 1) { @@ -159,23 +156,16 @@ void machine_kexec(struct kimage *image) /* copy our kernel relocation code to the control code page */ - reboot_entry = fncpy(reboot_code_buffer, - reboot_entry, - relocate_new_kernel_size); - reboot_entry_phys = (unsigned long)reboot_entry + - (reboot_code_buffer_phys - (unsigned long)reboot_code_buffer); + memcpy(reboot_code_buffer, + relocate_new_kernel, relocate_new_kernel_size); + + flush_icache_range((unsigned long) reboot_code_buffer, + (unsigned long) reboot_code_buffer + KEXEC_CONTROL_PAGE_SIZE); printk(KERN_INFO "Bye!\n"); if (kexec_reinit) kexec_reinit(); - soft_restart(reboot_entry_phys); -} - -void arch_crash_save_vmcoreinfo(void) -{ -#ifdef CONFIG_ARM_LPAE - VMCOREINFO_CONFIG(ARM_LPAE); -#endif + soft_restart(reboot_code_buffer_phys); } diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c index 72c55405378..44d9e574db0 100644 --- a/arch/arm/kernel/perf_event.c +++ b/arch/arm/kernel/perf_event.c @@ -304,18 +304,11 @@ static irqreturn_t armpmu_dispatch_irq(int irq, void *dev) struct arm_pmu *armpmu = (struct arm_pmu *) dev; struct platform_device *plat_device = armpmu->plat_device; struct arm_pmu_platdata *plat = dev_get_platdata(&plat_device->dev); - int ret; - u64 start_clock, finish_clock; - start_clock = sched_clock(); if (plat && plat->handle_irq) - ret = plat->handle_irq(irq, dev, armpmu->handle_irq); + return plat->handle_irq(irq, dev, armpmu->handle_irq); else - ret = armpmu->handle_irq(irq, dev); - finish_clock = sched_clock(); - - perf_sample_event_took(finish_clock - start_clock); - return ret; + return armpmu->handle_irq(irq, dev); } static void diff --git a/arch/arm/kernel/relocate_kernel.S b/arch/arm/kernel/relocate_kernel.S index 95858966d84..d0cdedf4864 100644 --- a/arch/arm/kernel/relocate_kernel.S +++ b/arch/arm/kernel/relocate_kernel.S @@ -2,12 +2,10 @@ * relocate_kernel.S - put the kernel image in place to boot */ -#include <linux/linkage.h> #include <asm/kexec.h> - .align 3 /* not needed for 
this code, but keeps fncpy() happy */ - -ENTRY(relocate_new_kernel) + .globl relocate_new_kernel +relocate_new_kernel: ldr r0,kexec_indirection_page ldr r1,kexec_start_address @@ -81,8 +79,6 @@ kexec_mach_type: kexec_boot_atags: .long 0x0 -ENDPROC(relocate_new_kernel) - relocate_new_kernel_end: .globl relocate_new_kernel_size diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c index 235b0365a65..137b485d1e2 100644 --- a/arch/arm/kernel/setup.c +++ b/arch/arm/kernel/setup.c @@ -530,7 +530,6 @@ void __init dump_machine_table(void) int __init arm_add_memory(phys_addr_t start, phys_addr_t size) { struct membank *bank = &meminfo.bank[meminfo.nr_banks]; - u64 aligned_start; if (meminfo.nr_banks >= NR_BANKS) { printk(KERN_CRIT "NR_BANKS too low, " @@ -543,16 +542,10 @@ int __init arm_add_memory(phys_addr_t start, phys_addr_t size) * Size is appropriately rounded down, start is rounded up. */ size -= start & ~PAGE_MASK; - aligned_start = PAGE_ALIGN(start); + bank->start = PAGE_ALIGN(start); -#ifndef CONFIG_ARCH_PHYS_ADDR_T_64BIT - if (aligned_start > ULONG_MAX) { - printk(KERN_CRIT "Ignoring memory at 0x%08llx outside " - "32-bit physical address space\n", (long long)start); - return -EINVAL; - } - - if (aligned_start + size > ULONG_MAX) { +#ifndef CONFIG_ARM_LPAE + if (bank->start + size < bank->start) { printk(KERN_CRIT "Truncating memory at 0x%08llx to fit in " "32-bit physical address space\n", (long long)start); /* @@ -560,25 +553,10 @@ int __init arm_add_memory(phys_addr_t start, phys_addr_t size) * 32 bits, we use ULONG_MAX as the upper limit rather than 4GB. * This means we lose a page after masking. */ - size = ULONG_MAX - aligned_start; + size = ULONG_MAX - bank->start; } #endif - if (aligned_start < PHYS_OFFSET) { - if (aligned_start + size <= PHYS_OFFSET) { - pr_info("Ignoring memory below PHYS_OFFSET: 0x%08llx-0x%08llx\n", - aligned_start, aligned_start + size); - return -EINVAL; - } - - pr_info("Ignoring memory below PHYS_OFFSET: 0x%08llx-0x%08llx\n", - aligned_start, (u64)PHYS_OFFSET); - - size -= PHYS_OFFSET - aligned_start; - aligned_start = PHYS_OFFSET; - } - - bank->start = aligned_start; bank->size = size & ~(phys_addr_t)(PAGE_SIZE - 1); /* diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c index 6582c4adc18..af4e8c8a542 100644 --- a/arch/arm/kernel/stacktrace.c +++ b/arch/arm/kernel/stacktrace.c @@ -83,16 +83,13 @@ static int save_trace(struct stackframe *frame, void *d) return trace->nr_entries >= trace->max_entries; } -/* This must be noinline to so that our skip calculation works correctly */ -static noinline void __save_stack_trace(struct task_struct *tsk, - struct stack_trace *trace, unsigned int nosched) +void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) { struct stack_trace_data data; struct stackframe frame; data.trace = trace; data.skip = trace->skip; - data.no_sched_functions = nosched; if (tsk != current) { #ifdef CONFIG_SMP @@ -105,6 +102,7 @@ static noinline void __save_stack_trace(struct task_struct *tsk, trace->entries[trace->nr_entries++] = ULONG_MAX; return; #else + data.no_sched_functions = 1; frame.fp = thread_saved_fp(tsk); frame.sp = thread_saved_sp(tsk); frame.lr = 0; /* recovered from the stack */ @@ -113,12 +111,11 @@ static noinline void __save_stack_trace(struct task_struct *tsk, } else { register unsigned long current_sp asm ("sp"); - /* We don't want this function nor the caller */ - data.skip += 2; + data.no_sched_functions = 0; frame.fp = (unsigned long)__builtin_frame_address(0); 
frame.sp = current_sp; frame.lr = (unsigned long)__builtin_return_address(0); - frame.pc = (unsigned long)__save_stack_trace; + frame.pc = (unsigned long)save_stack_trace_tsk; } walk_stackframe(&frame, save_trace, &data); @@ -126,14 +123,9 @@ static noinline void __save_stack_trace(struct task_struct *tsk, trace->entries[trace->nr_entries++] = ULONG_MAX; } -void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) -{ - __save_stack_trace(tsk, trace, 1); -} - void save_stack_trace(struct stack_trace *trace) { - __save_stack_trace(current, trace, 0); + save_stack_trace_tsk(current, trace); } EXPORT_SYMBOL_GPL(save_stack_trace); #endif diff --git a/arch/arm/mach-at91/clock.c b/arch/arm/mach-at91/clock.c index 64f9f104553..da841885d01 100644 --- a/arch/arm/mach-at91/clock.c +++ b/arch/arm/mach-at91/clock.c @@ -947,7 +947,6 @@ static int __init at91_clock_reset(void) } at91_pmc_write(AT91_PMC_SCDR, scdr); - at91_pmc_write(AT91_PMC_PCDR, pcdr); if (cpu_is_sama5d3()) at91_pmc_write(AT91_PMC_PCDR1, pcdr1); diff --git a/arch/arm/mach-at91/sysirq_mask.c b/arch/arm/mach-at91/sysirq_mask.c index f8bc3511a8c..2ba694f9626 100644 --- a/arch/arm/mach-at91/sysirq_mask.c +++ b/arch/arm/mach-at91/sysirq_mask.c @@ -25,28 +25,24 @@ #include "generic.h" -#define AT91_RTC_IDR 0x24 /* Interrupt Disable Register */ -#define AT91_RTC_IMR 0x28 /* Interrupt Mask Register */ -#define AT91_RTC_IRQ_MASK 0x1f /* Available IRQs mask */ +#define AT91_RTC_IDR 0x24 /* Interrupt Disable Register */ +#define AT91_RTC_IMR 0x28 /* Interrupt Mask Register */ void __init at91_sysirq_mask_rtc(u32 rtc_base) { void __iomem *base; + u32 mask; base = ioremap(rtc_base, 64); if (!base) return; - /* - * sam9x5 SoCs have the following errata: - * "RTC: Interrupt Mask Register cannot be used - * Interrupt Mask Register read always returns 0." - * - * Hence we're not relying on IMR values to disable - * interrupts. 
- */ - writel_relaxed(AT91_RTC_IRQ_MASK, base + AT91_RTC_IDR); - (void)readl_relaxed(base + AT91_RTC_IMR); /* flush */ + mask = readl_relaxed(base + AT91_RTC_IMR); + if (mask) { + pr_info("AT91: Disabling rtc irq\n"); + writel_relaxed(mask, base + AT91_RTC_IDR); + (void)readl_relaxed(base + AT91_RTC_IMR); /* flush */ + } iounmap(base); } diff --git a/arch/arm/mach-ebsa110/core.c b/arch/arm/mach-ebsa110/core.c index 8a53f346cdb..b13cc74114d 100644 --- a/arch/arm/mach-ebsa110/core.c +++ b/arch/arm/mach-ebsa110/core.c @@ -116,7 +116,7 @@ static void __init ebsa110_map_io(void) iotable_init(ebsa110_io_desc, ARRAY_SIZE(ebsa110_io_desc)); } -static void __iomem *ebsa110_ioremap_caller(phys_addr_t cookie, size_t size, +static void __iomem *ebsa110_ioremap_caller(unsigned long cookie, size_t size, unsigned int flags, void *caller) { return (void __iomem *)cookie; diff --git a/arch/arm/mach-highbank/highbank.c b/arch/arm/mach-highbank/highbank.c index 35d1029d7c9..5ed19e88874 100644 --- a/arch/arm/mach-highbank/highbank.c +++ b/arch/arm/mach-highbank/highbank.c @@ -65,12 +65,14 @@ void highbank_set_cpu_jump(int cpu, void *jump_addr) HB_JUMP_TABLE_PHYS(cpu) + 15); } +#ifdef CONFIG_CACHE_L2X0 static void highbank_l2x0_disable(void) { outer_flush_all(); /* Disable PL310 L2 Cache controller */ highbank_smc1(0x102, 0x0); } +#endif static void __init highbank_init_irq(void) { @@ -79,13 +81,12 @@ static void __init highbank_init_irq(void) if (of_find_compatible_node(NULL, NULL, "arm,cortex-a9")) highbank_scu_map_io(); +#ifdef CONFIG_CACHE_L2X0 /* Enable PL310 L2 Cache controller */ - if (IS_ENABLED(CONFIG_CACHE_L2X0) && - of_find_compatible_node(NULL, NULL, "arm,pl310-cache")) { - highbank_smc1(0x102, 0x1); - l2x0_of_init(0, ~0UL); - outer_cache.disable = highbank_l2x0_disable; - } + highbank_smc1(0x102, 0x1); + l2x0_of_init(0, ~0UL); + outer_cache.disable = highbank_l2x0_disable; +#endif } static void __init highbank_timer_init(void) diff --git a/arch/arm/mach-imx/devices/platform-ipu-core.c b/arch/arm/mach-imx/devices/platform-ipu-core.c index 6bd7c3f37ac..fc4dd7cedc1 100644 --- a/arch/arm/mach-imx/devices/platform-ipu-core.c +++ b/arch/arm/mach-imx/devices/platform-ipu-core.c @@ -77,7 +77,7 @@ struct platform_device *__init imx_alloc_mx3_camera( pdev = platform_device_alloc("mx3-camera", 0); if (!pdev) - return ERR_PTR(-ENOMEM); + goto err; pdev->dev.dma_mask = kmalloc(sizeof(*pdev->dev.dma_mask), GFP_KERNEL); if (!pdev->dev.dma_mask) diff --git a/arch/arm/mach-imx/mm-imx3.c b/arch/arm/mach-imx/mm-imx3.c index eed32ca0b8a..e0e69a68217 100644 --- a/arch/arm/mach-imx/mm-imx3.c +++ b/arch/arm/mach-imx/mm-imx3.c @@ -65,7 +65,7 @@ static void imx3_idle(void) : "=r" (reg)); } -static void __iomem *imx3_ioremap_caller(phys_addr_t phys_addr, size_t size, +static void __iomem *imx3_ioremap_caller(unsigned long phys_addr, size_t size, unsigned int mtype, void *caller) { if (mtype == MT_DEVICE) { diff --git a/arch/arm/mach-iop13xx/io.c b/arch/arm/mach-iop13xx/io.c index faaf7d4482c..183dc8b5511 100644 --- a/arch/arm/mach-iop13xx/io.c +++ b/arch/arm/mach-iop13xx/io.c @@ -23,7 +23,7 @@ #include "pci.h" -static void __iomem *__iop13xx_ioremap_caller(phys_addr_t cookie, +static void __iomem *__iop13xx_ioremap_caller(unsigned long cookie, size_t size, unsigned int mtype, void *caller) { void __iomem * retval; diff --git a/arch/arm/mach-ixp4xx/common.c b/arch/arm/mach-ixp4xx/common.c index 1f6c1fb353a..58307cff1f1 100644 --- a/arch/arm/mach-ixp4xx/common.c +++ b/arch/arm/mach-ixp4xx/common.c @@ -559,7 +559,7 @@ void 
ixp4xx_restart(char mode, const char *cmd) * fallback to the default. */ -static void __iomem *ixp4xx_ioremap_caller(phys_addr_t addr, size_t size, +static void __iomem *ixp4xx_ioremap_caller(unsigned long addr, size_t size, unsigned int mtype, void *caller) { if (!is_pci_memory(addr)) diff --git a/arch/arm/mach-msm/common.h b/arch/arm/mach-msm/common.h index 421cf7751a8..ce8215a269e 100644 --- a/arch/arm/mach-msm/common.h +++ b/arch/arm/mach-msm/common.h @@ -23,7 +23,7 @@ extern void msm_map_msm8x60_io(void); extern void msm_map_msm8960_io(void); extern void msm_map_qsd8x50_io(void); -extern void __iomem *__msm_ioremap_caller(phys_addr_t phys_addr, size_t size, +extern void __iomem *__msm_ioremap_caller(unsigned long phys_addr, size_t size, unsigned int mtype, void *caller); extern struct smp_operations msm_smp_ops; diff --git a/arch/arm/mach-msm/io.c b/arch/arm/mach-msm/io.c index fd65b6d42cd..123ef9cbce1 100644 --- a/arch/arm/mach-msm/io.c +++ b/arch/arm/mach-msm/io.c @@ -172,7 +172,7 @@ void __init msm_map_msm7x30_io(void) } #endif /* CONFIG_ARCH_MSM7X30 */ -void __iomem *__msm_ioremap_caller(phys_addr_t phys_addr, size_t size, +void __iomem *__msm_ioremap_caller(unsigned long phys_addr, size_t size, unsigned int mtype, void *caller) { if (mtype == MT_DEVICE) { diff --git a/arch/arm/mach-omap1/board-h2.c b/arch/arm/mach-omap1/board-h2.c index d712c517223..0dac3d239e3 100644 --- a/arch/arm/mach-omap1/board-h2.c +++ b/arch/arm/mach-omap1/board-h2.c @@ -379,7 +379,7 @@ static struct omap_usb_config h2_usb_config __initdata = { /* usb1 has a Mini-AB port and external isp1301 transceiver */ .otg = 2, -#if IS_ENABLED(CONFIG_USB_OMAP) +#ifdef CONFIG_USB_GADGET_OMAP .hmc_mode = 19, /* 0:host(off) 1:dev|otg 2:disabled */ /* .hmc_mode = 21,*/ /* 0:host(off) 1:dev(loopback) 2:host(loopback) */ #elif defined(CONFIG_USB_OHCI_HCD) || defined(CONFIG_USB_OHCI_HCD_MODULE) diff --git a/arch/arm/mach-omap1/board-h3.c b/arch/arm/mach-omap1/board-h3.c index bfed4f92866..816ecd13f81 100644 --- a/arch/arm/mach-omap1/board-h3.c +++ b/arch/arm/mach-omap1/board-h3.c @@ -366,7 +366,7 @@ static struct omap_usb_config h3_usb_config __initdata = { /* usb1 has a Mini-AB port and external isp1301 transceiver */ .otg = 2, -#if IS_ENABLED(CONFIG_USB_OMAP) +#ifdef CONFIG_USB_GADGET_OMAP .hmc_mode = 19, /* 0:host(off) 1:dev|otg 2:disabled */ #elif defined(CONFIG_USB_OHCI_HCD) || defined(CONFIG_USB_OHCI_HCD_MODULE) /* NONSTANDARD CABLE NEEDED (B-to-Mini-B) */ diff --git a/arch/arm/mach-omap1/board-innovator.c b/arch/arm/mach-omap1/board-innovator.c index c49ce83cc1e..bd5f02e9c35 100644 --- a/arch/arm/mach-omap1/board-innovator.c +++ b/arch/arm/mach-omap1/board-innovator.c @@ -312,7 +312,7 @@ static struct omap_usb_config h2_usb_config __initdata = { /* usb1 has a Mini-AB port and external isp1301 transceiver */ .otg = 2, -#if IS_ENABLED(CONFIG_USB_OMAP) +#ifdef CONFIG_USB_GADGET_OMAP .hmc_mode = 19, /* 0:host(off) 1:dev|otg 2:disabled */ /* .hmc_mode = 21,*/ /* 0:host(off) 1:dev(loopback) 2:host(loopback) */ #elif defined(CONFIG_USB_OHCI_HCD) || defined(CONFIG_USB_OHCI_HCD_MODULE) diff --git a/arch/arm/mach-omap1/board-osk.c b/arch/arm/mach-omap1/board-osk.c index 006fbb5f965..a7ce6928668 100644 --- a/arch/arm/mach-omap1/board-osk.c +++ b/arch/arm/mach-omap1/board-osk.c @@ -280,7 +280,7 @@ static struct omap_usb_config osk_usb_config __initdata = { * be used, with a NONSTANDARD gender-bending cable/dongle, as * a peripheral. 
*/ -#if IS_ENABLED(CONFIG_USB_OMAP) +#ifdef CONFIG_USB_GADGET_OMAP .register_dev = 1, .hmc_mode = 0, #else diff --git a/arch/arm/mach-omap2/cclock3xxx_data.c b/arch/arm/mach-omap2/cclock3xxx_data.c index da6d407c21c..45cd26430d1 100644 --- a/arch/arm/mach-omap2/cclock3xxx_data.c +++ b/arch/arm/mach-omap2/cclock3xxx_data.c @@ -418,8 +418,7 @@ static struct clk_hw_omap dpll4_m5x2_ck_hw = { .clkdm_name = "dpll4_clkdm", }; -DEFINE_STRUCT_CLK_FLAGS(dpll4_m5x2_ck, dpll4_m5x2_ck_parent_names, - dpll4_m5x2_ck_ops, CLK_SET_RATE_PARENT); +DEFINE_STRUCT_CLK(dpll4_m5x2_ck, dpll4_m5x2_ck_parent_names, dpll4_m5x2_ck_ops); static struct clk dpll4_m5x2_ck_3630 = { .name = "dpll4_m5x2_ck", diff --git a/arch/arm/mach-omap2/control.c b/arch/arm/mach-omap2/control.c index 6124da1a07d..2adb2683f07 100644 --- a/arch/arm/mach-omap2/control.c +++ b/arch/arm/mach-omap2/control.c @@ -323,8 +323,7 @@ void omap3_save_scratchpad_contents(void) scratchpad_contents.public_restore_ptr = virt_to_phys(omap3_restore_3630); else if (omap_rev() != OMAP3430_REV_ES3_0 && - omap_rev() != OMAP3430_REV_ES3_1 && - omap_rev() != OMAP3430_REV_ES3_1_2) + omap_rev() != OMAP3430_REV_ES3_1) scratchpad_contents.public_restore_ptr = virt_to_phys(omap3_restore); else diff --git a/arch/arm/mach-omap2/cpuidle44xx.c b/arch/arm/mach-omap2/cpuidle44xx.c index f98410a257e..c443f2e97e1 100644 --- a/arch/arm/mach-omap2/cpuidle44xx.c +++ b/arch/arm/mach-omap2/cpuidle44xx.c @@ -14,7 +14,6 @@ #include <linux/cpuidle.h> #include <linux/cpu_pm.h> #include <linux/export.h> -#include <linux/clockchips.h> #include <asm/cpuidle.h> #include <asm/proc-fns.h> @@ -81,7 +80,6 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev, int index) { struct idle_statedata *cx = state_ptr + index; - int cpu_id = smp_processor_id(); /* * CPU0 has to wait and stay ON until CPU1 is OFF state. @@ -106,8 +104,6 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev, } } - clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu_id); - /* * Call idle CPU PM enter notifier chain so that * VFP and per CPU interrupt context is saved. @@ -151,8 +147,6 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev, (cx->mpu_logic_state == PWRDM_POWER_OFF)) cpu_cluster_pm_exit(); - clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu_id); - fail: cpuidle_coupled_parallel_barrier(dev, &abort_barrier); cpu_done[dev->cpu] = false; @@ -160,16 +154,6 @@ fail: return index; } -/* - * For each cpu, setup the broadcast timer because local timers - * stops for the states above C1. 
- */ -static void omap_setup_broadcast_timer(void *arg) -{ - int cpu = smp_processor_id(); - clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ON, &cpu); -} - static struct cpuidle_driver omap4_idle_driver = { .name = "omap4_idle", .owner = THIS_MODULE, @@ -187,7 +171,8 @@ static struct cpuidle_driver omap4_idle_driver = { /* C2 - CPU0 OFF + CPU1 OFF + MPU CSWR */ .exit_latency = 328 + 440, .target_residency = 960, - .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED, + .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED | + CPUIDLE_FLAG_TIMER_STOP, .enter = omap_enter_idle_coupled, .name = "C2", .desc = "CPUx OFF, MPUSS CSWR", @@ -196,7 +181,8 @@ static struct cpuidle_driver omap4_idle_driver = { /* C3 - CPU0 OFF + CPU1 OFF + MPU OSWR */ .exit_latency = 460 + 518, .target_residency = 1100, - .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED, + .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED | + CPUIDLE_FLAG_TIMER_STOP, .enter = omap_enter_idle_coupled, .name = "C3", .desc = "CPUx OFF, MPUSS OSWR", @@ -227,8 +213,5 @@ int __init omap4_idle_init(void) if (!cpu_clkdm[0] || !cpu_clkdm[1]) return -ENODEV; - /* Configure the broadcast timer on each cpu */ - on_each_cpu(omap_setup_broadcast_timer, NULL, 1); - return cpuidle_register(&omap4_idle_driver, cpu_online_mask); } diff --git a/arch/arm/mach-omap2/irq.c b/arch/arm/mach-omap2/irq.c index 6037a9a01ed..e022a869bff 100644 --- a/arch/arm/mach-omap2/irq.c +++ b/arch/arm/mach-omap2/irq.c @@ -222,7 +222,6 @@ void __init ti81xx_init_irq(void) static inline void omap_intc_handle_irq(void __iomem *base_addr, struct pt_regs *regs) { u32 irqnr; - int handled_irq = 0; do { irqnr = readl_relaxed(base_addr + 0x98); @@ -250,15 +249,8 @@ out: if (irqnr) { irqnr = irq_find_mapping(domain, irqnr); handle_IRQ(irqnr, regs); - handled_irq = 1; } } while (irqnr); - - /* If an irq is masked or deasserted while active, we will - * keep ending up here with no irq handled. 
So remove it from - * the INTC with an ack.*/ - if (!handled_irq) - omap_ack_irq(NULL); } asmlinkage void __exception_irq_entry omap2_intc_handle_irq(struct pt_regs *regs) diff --git a/arch/arm/mach-omap2/mux.c b/arch/arm/mach-omap2/mux.c index 94c2f6d17da..f82cf878d6a 100644 --- a/arch/arm/mach-omap2/mux.c +++ b/arch/arm/mach-omap2/mux.c @@ -183,10 +183,8 @@ static int __init _omap_mux_get_by_name(struct omap_mux_partition *partition, m0_entry = mux->muxnames[0]; /* First check for full name in mode0.muxmode format */ - if (mode0_len) - if (strncmp(muxname, m0_entry, mode0_len) || - (strlen(m0_entry) != mode0_len)) - continue; + if (mode0_len && strncmp(muxname, m0_entry, mode0_len)) + continue; /* Then check for muxmode only */ for (i = 0; i < OMAP_MUX_NR_MODES; i++) { diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c index 62e40a9fffa..44c609a1ec5 100644 --- a/arch/arm/mach-omap2/omap_hwmod.c +++ b/arch/arm/mach-omap2/omap_hwmod.c @@ -2177,8 +2177,6 @@ static int _enable(struct omap_hwmod *oh) oh->mux->pads_dynamic))) { omap_hwmod_mux(oh->mux, _HWMOD_STATE_ENABLED); _reconfigure_io_chain(); - } else if (oh->flags & HWMOD_FORCE_MSTANDBY) { - _reconfigure_io_chain(); } _add_initiator_dep(oh, mpu_oh); @@ -2285,8 +2283,6 @@ static int _idle(struct omap_hwmod *oh) if (oh->mux && oh->mux->pads_dynamic) { omap_hwmod_mux(oh->mux, _HWMOD_STATE_IDLE); _reconfigure_io_chain(); - } else if (oh->flags & HWMOD_FORCE_MSTANDBY) { - _reconfigure_io_chain(); } oh->_state = _HWMOD_STATE_IDLE; diff --git a/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c b/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c index 8691c8cbe2c..9f6238c9dfc 100644 --- a/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c +++ b/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c @@ -1955,7 +1955,7 @@ static struct omap_hwmod_irq_info omap3xxx_usb_host_hs_irqs[] = { static struct omap_hwmod omap3xxx_usb_host_hs_hwmod = { .name = "usb_host_hs", .class = &omap3xxx_usb_host_hs_hwmod_class, - .clkdm_name = "usbhost_clkdm", + .clkdm_name = "l3_init_clkdm", .mpu_irqs = omap3xxx_usb_host_hs_irqs, .main_clk = "usbhost_48m_fck", .prcm = { @@ -2040,7 +2040,7 @@ static struct omap_hwmod_irq_info omap3xxx_usb_tll_hs_irqs[] = { static struct omap_hwmod omap3xxx_usb_tll_hs_hwmod = { .name = "usb_tll_hs", .class = &omap3xxx_usb_tll_hs_hwmod_class, - .clkdm_name = "core_l4_clkdm", + .clkdm_name = "l3_init_clkdm", .mpu_irqs = omap3xxx_usb_tll_hs_irqs, .main_clk = "usbtll_fck", .prcm = { diff --git a/arch/arm/mach-omap2/pm.h b/arch/arm/mach-omap2/pm.h index d4d0fce325c..7bdd22afce6 100644 --- a/arch/arm/mach-omap2/pm.h +++ b/arch/arm/mach-omap2/pm.h @@ -103,7 +103,7 @@ static inline void enable_omap3630_toggle_l2_on_restore(void) { } #define PM_OMAP4_ROM_SMP_BOOT_ERRATUM_GICD (1 << 0) -#if defined(CONFIG_PM) && defined(CONFIG_ARCH_OMAP4) +#if defined(CONFIG_ARCH_OMAP4) extern u16 pm44xx_errata; #define IS_PM44XX_ERRATUM(id) (pm44xx_errata & (id)) #else diff --git a/arch/arm/mach-sa1100/include/mach/collie.h b/arch/arm/mach-sa1100/include/mach/collie.h index 50e1d850ee2..f33679d2d3e 100644 --- a/arch/arm/mach-sa1100/include/mach/collie.h +++ b/arch/arm/mach-sa1100/include/mach/collie.h @@ -13,8 +13,6 @@ #ifndef __ASM_ARCH_COLLIE_H #define __ASM_ARCH_COLLIE_H -#include "hardware.h" /* Gives GPIO_MAX */ - extern void locomolcd_power(int on); #define COLLIE_SCOOP_GPIO_BASE (GPIO_MAX + 1) diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index 2c7e308f8ca..c47875f9dba 100644 --- a/arch/arm/mm/Kconfig +++ b/arch/arm/mm/Kconfig @@ -436,6 +436,7 @@ 
config CPU_32v5 config CPU_32v6 bool + select CPU_USE_DOMAINS if CPU_V6 && MMU select TLS_REG_EMUL if !CPU_32v6K && !MMU config CPU_32v6K @@ -650,7 +651,7 @@ config ARM_VIRT_EXT config SWP_EMULATE bool "Emulate SWP/SWPB instructions" - depends on CPU_V7 + depends on !CPU_USE_DOMAINS && CPU_V7 default y if SMP select HAVE_PROC_CPU if PROC_FS help diff --git a/arch/arm/mm/abort-ev6.S b/arch/arm/mm/abort-ev6.S index 5d777a567c3..80741992a9f 100644 --- a/arch/arm/mm/abort-ev6.S +++ b/arch/arm/mm/abort-ev6.S @@ -17,6 +17,12 @@ */ .align 5 ENTRY(v6_early_abort) +#ifdef CONFIG_CPU_V6 + sub r1, sp, #4 @ Get unused stack location + strex r0, r1, [r1] @ Clear the exclusive monitor +#elif defined(CONFIG_CPU_32v6K) + clrex +#endif mrc p15, 0, r1, c5, c0, 0 @ get FSR mrc p15, 0, r0, c6, c0, 0 @ get FAR /* diff --git a/arch/arm/mm/abort-ev7.S b/arch/arm/mm/abort-ev7.S index 4812ad05421..703375277ba 100644 --- a/arch/arm/mm/abort-ev7.S +++ b/arch/arm/mm/abort-ev7.S @@ -13,6 +13,12 @@ */ .align 5 ENTRY(v7_early_abort) + /* + * The effect of data aborts on on the exclusive access monitor are + * UNPREDICTABLE. Do a CLREX to clear the state + */ + clrex + mrc p15, 0, r1, c5, c0, 0 @ get FSR mrc p15, 0, r0, c6, c0, 0 @ get FAR diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c index 1fe0bf5c737..6f4585b8907 100644 --- a/arch/arm/mm/alignment.c +++ b/arch/arm/mm/alignment.c @@ -39,7 +39,6 @@ * This code is not portable to processors with late data abort handling. */ #define CODING_BITS(i) (i & 0x0e000000) -#define COND_BITS(i) (i & 0xf0000000) #define LDST_I_BIT(i) (i & (1 << 26)) /* Immediate constant */ #define LDST_P_BIT(i) (i & (1 << 24)) /* Preindex */ @@ -813,8 +812,6 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs) break; case 0x04000000: /* ldr or str immediate */ - if (COND_BITS(instr) == 0xf0000000) /* NEON VLDn, VSTn */ - goto bad; offset.un = OFFSET_BITS(instr); handler = do_alignment_ldrstr; break; diff --git a/arch/arm/mm/idmap.c b/arch/arm/mm/idmap.c index 5e74f6faa93..19479996bcb 100644 --- a/arch/arm/mm/idmap.c +++ b/arch/arm/mm/idmap.c @@ -24,13 +24,6 @@ static void idmap_add_pmd(pud_t *pud, unsigned long addr, unsigned long end, pr_warning("Failed to allocate identity pmd.\n"); return; } - /* - * Copy the original PMD to ensure that the PMD entries for - * the kernel image are preserved. 
- */ - if (!pud_none(*pud)) - memcpy(pmd, pmd_offset(pud, 0), - PTRS_PER_PMD * sizeof(pmd_t)); pud_populate(&init_mm, pud, pmd); pmd += pmd_index(addr); } else diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c index f123d6eb074..04d9006eab1 100644 --- a/arch/arm/mm/ioremap.c +++ b/arch/arm/mm/ioremap.c @@ -331,10 +331,10 @@ void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn, return (void __iomem *) (offset + addr); } -void __iomem *__arm_ioremap_caller(phys_addr_t phys_addr, size_t size, +void __iomem *__arm_ioremap_caller(unsigned long phys_addr, size_t size, unsigned int mtype, void *caller) { - phys_addr_t last_addr; + unsigned long last_addr; unsigned long offset = phys_addr & ~PAGE_MASK; unsigned long pfn = __phys_to_pfn(phys_addr); @@ -367,12 +367,12 @@ __arm_ioremap_pfn(unsigned long pfn, unsigned long offset, size_t size, } EXPORT_SYMBOL(__arm_ioremap_pfn); -void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t, +void __iomem * (*arch_ioremap_caller)(unsigned long, size_t, unsigned int, void *) = __arm_ioremap_caller; void __iomem * -__arm_ioremap(phys_addr_t phys_addr, size_t size, unsigned int mtype) +__arm_ioremap(unsigned long phys_addr, size_t size, unsigned int mtype) { return arch_ioremap_caller(phys_addr, size, mtype, __builtin_return_address(0)); @@ -387,7 +387,7 @@ EXPORT_SYMBOL(__arm_ioremap); * CONFIG_GENERIC_ALLOCATOR for allocating external memory. */ void __iomem * -__arm_ioremap_exec(phys_addr_t phys_addr, size_t size, bool cached) +__arm_ioremap_exec(unsigned long phys_addr, size_t size, bool cached) { unsigned int mtype; diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 79b9ec3782b..66feda8b53f 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -459,16 +459,6 @@ static void __init build_mem_type_table(void) hyp_device_pgprot = s2_device_pgprot = mem_types[MT_DEVICE].prot_pte; /* - * We don't use domains on ARMv6 (since this causes problems with - * v6/v7 kernels), so we must use a separate memory type for user - * r/o, kernel r/w to map the vectors page. - */ -#ifndef CONFIG_ARM_LPAE - if (cpu_arch == CPU_ARCH_ARMv6) - vecs_pgprot |= L_PTE_MT_VECTORS; -#endif - - /* * ARMv6 and above have extended page tables. 
*/ if (cpu_arch >= CPU_ARCH_ARMv6 && (cr & CR_XP)) { diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c index 7fe0524a544..eb5293a69a8 100644 --- a/arch/arm/mm/nommu.c +++ b/arch/arm/mm/nommu.c @@ -87,16 +87,16 @@ void __iomem *__arm_ioremap_pfn_caller(unsigned long pfn, unsigned long offset, return __arm_ioremap_pfn(pfn, offset, size, mtype); } -void __iomem *__arm_ioremap(phys_addr_t phys_addr, size_t size, +void __iomem *__arm_ioremap(unsigned long phys_addr, size_t size, unsigned int mtype) { return (void __iomem *)phys_addr; } EXPORT_SYMBOL(__arm_ioremap); -void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t, unsigned int, void *); +void __iomem * (*arch_ioremap_caller)(unsigned long, size_t, unsigned int, void *); -void __iomem *__arm_ioremap_caller(phys_addr_t phys_addr, size_t size, +void __iomem *__arm_ioremap_caller(unsigned long phys_addr, size_t size, unsigned int mtype, void *caller) { return __arm_ioremap(phys_addr, size, mtype); diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S index ee1d8059395..e3c48a3fe06 100644 --- a/arch/arm/mm/proc-macros.S +++ b/arch/arm/mm/proc-macros.S @@ -112,9 +112,13 @@ * 100x 1 0 1 r/o no acc * 10x0 1 0 1 r/o no acc * 1011 0 0 1 r/w no acc + * 110x 0 1 0 r/w r/o + * 11x0 0 1 0 r/w r/o + * 1111 0 1 1 r/w r/w + * + * If !CONFIG_CPU_USE_DOMAINS, the following permissions are changed: * 110x 1 1 1 r/o r/o * 11x0 1 1 1 r/o r/o - * 1111 0 1 1 r/w r/w */ .macro armv6_mt_table pfx \pfx\()_mt_table: @@ -133,7 +137,7 @@ .long PTE_EXT_TEX(2) @ L_PTE_MT_DEV_NONSHARED .long 0x00 @ unused .long 0x00 @ unused - .long PTE_CACHEABLE | PTE_BUFFERABLE | PTE_EXT_APX @ L_PTE_MT_VECTORS + .long 0x00 @ unused .endm .macro armv6_set_pte_ext pfx @@ -154,21 +158,24 @@ tst r1, #L_PTE_USER orrne r3, r3, #PTE_EXT_AP1 +#ifdef CONFIG_CPU_USE_DOMAINS + @ allow kernel read/write access to read-only user pages tstne r3, #PTE_EXT_APX - - @ user read-only -> kernel read-only - bicne r3, r3, #PTE_EXT_AP0 + bicne r3, r3, #PTE_EXT_APX | PTE_EXT_AP0 +#endif tst r1, #L_PTE_XN orrne r3, r3, #PTE_EXT_XN - eor r3, r3, r2 + orr r3, r3, r2 tst r1, #L_PTE_YOUNG tstne r1, #L_PTE_PRESENT moveq r3, #0 +#ifndef CONFIG_CPU_USE_DOMAINS tstne r1, #L_PTE_NONE movne r3, #0 +#endif str r3, [r0] mcr p15, 0, r0, c7, c10, 1 @ flush_pte diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S index a0cf0dc9f0d..cc454e6cd76 100644 --- a/arch/arm/mm/proc-v7-2level.S +++ b/arch/arm/mm/proc-v7-2level.S @@ -90,14 +90,21 @@ ENTRY(cpu_v7_set_pte_ext) tst r1, #L_PTE_USER orrne r3, r3, #PTE_EXT_AP1 +#ifdef CONFIG_CPU_USE_DOMAINS + @ allow kernel read/write access to read-only user pages + tstne r3, #PTE_EXT_APX + bicne r3, r3, #PTE_EXT_APX | PTE_EXT_AP0 +#endif tst r1, #L_PTE_XN orrne r3, r3, #PTE_EXT_XN tst r1, #L_PTE_YOUNG tstne r1, #L_PTE_VALID +#ifndef CONFIG_CPU_USE_DOMAINS eorne r1, r1, #L_PTE_NONE tstne r1, #L_PTE_NONE +#endif moveq r3, #0 ARM( str r3, [r0, #2048]! 
) diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S index 96a3b053044..e377cc4031b 100644 --- a/arch/arm/mm/proc-v7-3level.S +++ b/arch/arm/mm/proc-v7-3level.S @@ -65,14 +65,6 @@ ENTRY(cpu_v7_switch_mm) mov pc, lr ENDPROC(cpu_v7_switch_mm) -#ifdef __ARMEB__ -#define rl r3 -#define rh r2 -#else -#define rl r2 -#define rh r3 -#endif - /* * cpu_v7_set_pte_ext(ptep, pte) * @@ -82,13 +74,13 @@ ENDPROC(cpu_v7_switch_mm) */ ENTRY(cpu_v7_set_pte_ext) #ifdef CONFIG_MMU - tst rl, #L_PTE_VALID + tst r2, #L_PTE_VALID beq 1f - tst rh, #1 << (57 - 32) @ L_PTE_NONE - bicne rl, #L_PTE_VALID + tst r3, #1 << (57 - 32) @ L_PTE_NONE + bicne r2, #L_PTE_VALID bne 1f - tst rh, #1 << (55 - 32) @ L_PTE_DIRTY - orreq rl, #L_PTE_RDONLY + tst r3, #1 << (55 - 32) @ L_PTE_DIRTY + orreq r2, #L_PTE_RDONLY 1: strd r2, r3, [r0] ALT_SMP(W(nop)) ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte diff --git a/arch/arm64/include/asm/compat.h b/arch/arm64/include/asm/compat.h index c30a548cee5..899af807ef0 100644 --- a/arch/arm64/include/asm/compat.h +++ b/arch/arm64/include/asm/compat.h @@ -33,8 +33,8 @@ typedef s32 compat_ssize_t; typedef s32 compat_time_t; typedef s32 compat_clock_t; typedef s32 compat_pid_t; -typedef u16 __compat_uid_t; -typedef u16 __compat_gid_t; +typedef u32 __compat_uid_t; +typedef u32 __compat_gid_t; typedef u16 __compat_uid16_t; typedef u16 __compat_gid16_t; typedef u32 __compat_uid32_t; diff --git a/arch/arm64/include/asm/hw_breakpoint.h b/arch/arm64/include/asm/hw_breakpoint.h index 52b484b6aa1..d064047612b 100644 --- a/arch/arm64/include/asm/hw_breakpoint.h +++ b/arch/arm64/include/asm/hw_breakpoint.h @@ -79,6 +79,7 @@ static inline void decode_ctrl_reg(u32 reg, */ #define ARM_MAX_BRP 16 #define ARM_MAX_WRP 16 +#define ARM_MAX_HBP_SLOTS (ARM_MAX_BRP + ARM_MAX_WRP) /* Virtual debug register bases. */ #define AARCH64_DBG_REG_BVR 0 diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index f3fbd415d5f..76131f7e06b 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -55,8 +55,6 @@ #define TASK_SIZE_32 UL(0x100000000) #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ TASK_SIZE_32 : TASK_SIZE_64) -#define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \ - TASK_SIZE_32 : TASK_SIZE_64) #else #define TASK_SIZE TASK_SIZE_64 #endif /* CONFIG_COMPAT */ diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 2b42fbbb894..b5d0aabebb1 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -176,7 +176,7 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte) { if (pte_valid_user(pte)) { - if (!pte_special(pte) && pte_exec(pte)) + if (pte_exec(pte)) __sync_icache_dcache(pte, addr); if (!pte_dirty(pte)) pte = pte_wrprotect(pte); @@ -197,11 +197,11 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, * Mark the prot value as uncacheable and unbufferable. 
*/ #define pgprot_noncached(prot) \ - __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRnE) | PTE_PXN | PTE_UXN) + __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRnE)) #define pgprot_writecombine(prot) \ - __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN) + __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC)) #define pgprot_dmacoherent(prot) \ - __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN) + __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC)) #define __HAVE_PHYS_MEM_ACCESS_PROT struct file; extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 412867ebe84..6127bc91387 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -275,6 +275,7 @@ el1_sp_pc: * Stack or PC alignment exception handling */ mrs x0, far_el1 + mov x1, x25 mov x2, sp b do_sp_pc_abort el1_undef: diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 49818e1acf2..fbb26535057 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -196,27 +196,9 @@ void exit_thread(void) { } -static void tls_thread_flush(void) -{ - asm ("msr tpidr_el0, xzr"); - - if (is_compat_task()) { - current->thread.tp_value = 0; - - /* - * We need to ensure ordering between the shadow state and the - * hardware state, so that we don't corrupt the hardware state - * with a stale shadow state during context switch. - */ - barrier(); - asm ("msr tpidrro_el0, xzr"); - } -} - void flush_thread(void) { fpsimd_flush_thread(); - tls_thread_flush(); flush_ptrace_hw_breakpoint(current); } diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index ee79a1a6e96..c484d5625ff 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -81,8 +81,7 @@ static void ptrace_hbptriggered(struct perf_event *bp, break; } } - - for (i = 0; i < ARM_MAX_WRP; ++i) { + for (i = ARM_MAX_BRP; i < ARM_MAX_HBP_SLOTS && !bp; ++i) { if (current->thread.debug.hbp_watch[i] == bp) { info.si_errno = -((i << 1) + 1); break; @@ -824,7 +823,6 @@ static int compat_ptrace_write_user(struct task_struct *tsk, compat_ulong_t off, compat_ulong_t val) { int ret; - mm_segment_t old_fs = get_fs(); if (off & 3 || off >= COMPAT_USER_SZ) return -EIO; @@ -832,13 +830,10 @@ static int compat_ptrace_write_user(struct task_struct *tsk, compat_ulong_t off, if (off >= sizeof(compat_elf_gregset_t)) return 0; - set_fs(KERNEL_DS); ret = copy_regset_from_user(tsk, &user_aarch32_view, REGSET_COMPAT_GPR, off, sizeof(compat_ulong_t), &val); - set_fs(old_fs); - return ret; } diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c index 652691d48f9..1baf7eba79c 100644 --- a/arch/arm64/kernel/sys_compat.c +++ b/arch/arm64/kernel/sys_compat.c @@ -82,12 +82,6 @@ long compat_arm_syscall(struct pt_regs *regs) case __ARM_NR_compat_set_tls: current->thread.tp_value = regs->regs[0]; - - /* - * Protect against register corruption from context switch. - * See comment in tls_thread_flush. 
- */ - barrier(); asm ("msr tpidrro_el0, %0" : : "r" (regs->regs[0])); return 0; diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig index b969eea4e23..821170e5f6e 100644 --- a/arch/m68k/Kconfig +++ b/arch/m68k/Kconfig @@ -16,7 +16,6 @@ config M68K select FPU if MMU select ARCH_WANT_IPC_PARSE_VERSION select ARCH_USES_GETTIMEOFFSET if MMU && !COLDFIRE - select HAVE_FUTEX_CMPXCHG if MMU && FUTEX select HAVE_MOD_ARCH_SPECIFIC select MODULES_USE_ELF_REL select MODULES_USE_ELF_RELA diff --git a/arch/m68k/mm/hwtest.c b/arch/m68k/mm/hwtest.c index 2a5259fd23e..2c7dde3c643 100644 --- a/arch/m68k/mm/hwtest.c +++ b/arch/m68k/mm/hwtest.c @@ -28,11 +28,9 @@ int hwreg_present( volatile void *regp ) { int ret = 0; - unsigned long flags; long save_sp, save_vbr; long tmp_vectors[3]; - local_irq_save(flags); __asm__ __volatile__ ( "movec %/vbr,%2\n\t" "movel #Lberr1,%4@(8)\n\t" @@ -48,7 +46,6 @@ int hwreg_present( volatile void *regp ) : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr) : "a" (regp), "a" (tmp_vectors) ); - local_irq_restore(flags); return( ret ); } @@ -61,11 +58,9 @@ EXPORT_SYMBOL(hwreg_present); int hwreg_write( volatile void *regp, unsigned short val ) { int ret; - unsigned long flags; long save_sp, save_vbr; long tmp_vectors[3]; - local_irq_save(flags); __asm__ __volatile__ ( "movec %/vbr,%2\n\t" "movel #Lberr2,%4@(8)\n\t" @@ -83,7 +78,6 @@ int hwreg_write( volatile void *regp, unsigned short val ) : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr) : "a" (regp), "a" (tmp_vectors), "g" (val) ); - local_irq_restore(flags); return( ret ); } diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h index e355a4c1096..c90bfc6bf64 100644 --- a/arch/metag/include/asm/barrier.h +++ b/arch/metag/include/asm/barrier.h @@ -15,7 +15,6 @@ static inline void wr_fence(void) volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_FENCE; barrier(); *flushptr = 0; - barrier(); } #else /* CONFIG_METAG_META21 */ @@ -36,7 +35,6 @@ static inline void wr_fence(void) *flushptr = 0; *flushptr = 0; *flushptr = 0; - barrier(); } #endif /* !CONFIG_METAG_META21 */ @@ -70,7 +68,6 @@ static inline void fence(void) volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK; barrier(); *flushptr = 0; - barrier(); } #define smp_mb() fence() #define smp_rmb() fence() diff --git a/arch/metag/include/asm/processor.h b/arch/metag/include/asm/processor.h index 579e3d93a5c..9b029a7911c 100644 --- a/arch/metag/include/asm/processor.h +++ b/arch/metag/include/asm/processor.h @@ -22,8 +22,6 @@ /* Add an extra page of padding at the top of the stack for the guard page. */ #define STACK_TOP (TASK_SIZE - PAGE_SIZE) #define STACK_TOP_MAX STACK_TOP -/* Maximum virtual space for stack */ -#define STACK_SIZE_MAX (1 << 28) /* 256 MB */ /* This decides where the kernel will search for a free chunk of vm * space during mmap's. 
diff --git a/arch/mips/boot/compressed/decompress.c b/arch/mips/boot/compressed/decompress.c index d498a1f9bcc..2c9573098c0 100644 --- a/arch/mips/boot/compressed/decompress.c +++ b/arch/mips/boot/compressed/decompress.c @@ -13,7 +13,6 @@ #include <linux/types.h> #include <linux/kernel.h> -#include <linux/string.h> #include <asm/addrspace.h> diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c index 45c1a6caa20..a22f06a6f7c 100644 --- a/arch/mips/cavium-octeon/octeon-irq.c +++ b/arch/mips/cavium-octeon/octeon-irq.c @@ -635,7 +635,7 @@ static void octeon_irq_cpu_offline_ciu(struct irq_data *data) cpumask_clear(&new_affinity); cpumask_set_cpu(cpumask_first(cpu_online_mask), &new_affinity); } - irq_set_affinity_locked(data, &new_affinity, false); + __irq_set_affinity_locked(data, &new_affinity); } static int octeon_irq_ciu_set_affinity(struct irq_data *data, diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c index 6430e7acb1e..2a75ff249e7 100644 --- a/arch/mips/cavium-octeon/setup.c +++ b/arch/mips/cavium-octeon/setup.c @@ -463,18 +463,6 @@ static void octeon_halt(void) octeon_kill_core(NULL); } -static char __read_mostly octeon_system_type[80]; - -static int __init init_octeon_system_type(void) -{ - snprintf(octeon_system_type, sizeof(octeon_system_type), "%s (%s)", - cvmx_board_type_to_string(octeon_bootinfo->board_type), - octeon_model_get_string(read_c0_prid())); - - return 0; -} -early_initcall(init_octeon_system_type); - /** * Handle all the error condition interrupts that might occur. * @@ -494,7 +482,11 @@ static irqreturn_t octeon_rlm_interrupt(int cpl, void *dev_id) */ const char *octeon_board_type_string(void) { - return octeon_system_type; + static char name[80]; + sprintf(name, "%s (%s)", + cvmx_board_type_to_string(octeon_bootinfo->board_type), + octeon_model_get_string(read_c0_prid())); + return name; } const char *get_system_type(void) diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h index 3d0074e1059..87e6207b05e 100644 --- a/arch/mips/include/asm/mipsregs.h +++ b/arch/mips/include/asm/mipsregs.h @@ -14,7 +14,6 @@ #define _ASM_MIPSREGS_H #include <linux/linkage.h> -#include <linux/types.h> #include <asm/hazards.h> #include <asm/war.h> diff --git a/arch/mips/include/asm/reg.h b/arch/mips/include/asm/reg.h index b8343ccbc98..910e71a1246 100644 --- a/arch/mips/include/asm/reg.h +++ b/arch/mips/include/asm/reg.h @@ -12,194 +12,116 @@ #ifndef __ASM_MIPS_REG_H #define __ASM_MIPS_REG_H -#define MIPS32_EF_R0 6 -#define MIPS32_EF_R1 7 -#define MIPS32_EF_R2 8 -#define MIPS32_EF_R3 9 -#define MIPS32_EF_R4 10 -#define MIPS32_EF_R5 11 -#define MIPS32_EF_R6 12 -#define MIPS32_EF_R7 13 -#define MIPS32_EF_R8 14 -#define MIPS32_EF_R9 15 -#define MIPS32_EF_R10 16 -#define MIPS32_EF_R11 17 -#define MIPS32_EF_R12 18 -#define MIPS32_EF_R13 19 -#define MIPS32_EF_R14 20 -#define MIPS32_EF_R15 21 -#define MIPS32_EF_R16 22 -#define MIPS32_EF_R17 23 -#define MIPS32_EF_R18 24 -#define MIPS32_EF_R19 25 -#define MIPS32_EF_R20 26 -#define MIPS32_EF_R21 27 -#define MIPS32_EF_R22 28 -#define MIPS32_EF_R23 29 -#define MIPS32_EF_R24 30 -#define MIPS32_EF_R25 31 + +#if defined(CONFIG_32BIT) || defined(WANT_COMPAT_REG_H) + +#define EF_R0 6 +#define EF_R1 7 +#define EF_R2 8 +#define EF_R3 9 +#define EF_R4 10 +#define EF_R5 11 +#define EF_R6 12 +#define EF_R7 13 +#define EF_R8 14 +#define EF_R9 15 +#define EF_R10 16 +#define EF_R11 17 +#define EF_R12 18 +#define EF_R13 19 +#define EF_R14 20 +#define EF_R15 21 +#define 
EF_R16 22 +#define EF_R17 23 +#define EF_R18 24 +#define EF_R19 25 +#define EF_R20 26 +#define EF_R21 27 +#define EF_R22 28 +#define EF_R23 29 +#define EF_R24 30 +#define EF_R25 31 /* * k0/k1 unsaved */ -#define MIPS32_EF_R26 32 -#define MIPS32_EF_R27 33 +#define EF_R26 32 +#define EF_R27 33 -#define MIPS32_EF_R28 34 -#define MIPS32_EF_R29 35 -#define MIPS32_EF_R30 36 -#define MIPS32_EF_R31 37 +#define EF_R28 34 +#define EF_R29 35 +#define EF_R30 36 +#define EF_R31 37 /* * Saved special registers */ -#define MIPS32_EF_LO 38 -#define MIPS32_EF_HI 39 - -#define MIPS32_EF_CP0_EPC 40 -#define MIPS32_EF_CP0_BADVADDR 41 -#define MIPS32_EF_CP0_STATUS 42 -#define MIPS32_EF_CP0_CAUSE 43 -#define MIPS32_EF_UNUSED0 44 - -#define MIPS32_EF_SIZE 180 - -#define MIPS64_EF_R0 0 -#define MIPS64_EF_R1 1 -#define MIPS64_EF_R2 2 -#define MIPS64_EF_R3 3 -#define MIPS64_EF_R4 4 -#define MIPS64_EF_R5 5 -#define MIPS64_EF_R6 6 -#define MIPS64_EF_R7 7 -#define MIPS64_EF_R8 8 -#define MIPS64_EF_R9 9 -#define MIPS64_EF_R10 10 -#define MIPS64_EF_R11 11 -#define MIPS64_EF_R12 12 -#define MIPS64_EF_R13 13 -#define MIPS64_EF_R14 14 -#define MIPS64_EF_R15 15 -#define MIPS64_EF_R16 16 -#define MIPS64_EF_R17 17 -#define MIPS64_EF_R18 18 -#define MIPS64_EF_R19 19 -#define MIPS64_EF_R20 20 -#define MIPS64_EF_R21 21 -#define MIPS64_EF_R22 22 -#define MIPS64_EF_R23 23 -#define MIPS64_EF_R24 24 -#define MIPS64_EF_R25 25 +#define EF_LO 38 +#define EF_HI 39 + +#define EF_CP0_EPC 40 +#define EF_CP0_BADVADDR 41 +#define EF_CP0_STATUS 42 +#define EF_CP0_CAUSE 43 +#define EF_UNUSED0 44 + +#define EF_SIZE 180 + +#endif + +#if defined(CONFIG_64BIT) && !defined(WANT_COMPAT_REG_H) + +#define EF_R0 0 +#define EF_R1 1 +#define EF_R2 2 +#define EF_R3 3 +#define EF_R4 4 +#define EF_R5 5 +#define EF_R6 6 +#define EF_R7 7 +#define EF_R8 8 +#define EF_R9 9 +#define EF_R10 10 +#define EF_R11 11 +#define EF_R12 12 +#define EF_R13 13 +#define EF_R14 14 +#define EF_R15 15 +#define EF_R16 16 +#define EF_R17 17 +#define EF_R18 18 +#define EF_R19 19 +#define EF_R20 20 +#define EF_R21 21 +#define EF_R22 22 +#define EF_R23 23 +#define EF_R24 24 +#define EF_R25 25 /* * k0/k1 unsaved */ -#define MIPS64_EF_R26 26 -#define MIPS64_EF_R27 27 +#define EF_R26 26 +#define EF_R27 27 -#define MIPS64_EF_R28 28 -#define MIPS64_EF_R29 29 -#define MIPS64_EF_R30 30 -#define MIPS64_EF_R31 31 +#define EF_R28 28 +#define EF_R29 29 +#define EF_R30 30 +#define EF_R31 31 /* * Saved special registers */ -#define MIPS64_EF_LO 32 -#define MIPS64_EF_HI 33 - -#define MIPS64_EF_CP0_EPC 34 -#define MIPS64_EF_CP0_BADVADDR 35 -#define MIPS64_EF_CP0_STATUS 36 -#define MIPS64_EF_CP0_CAUSE 37 - -#define MIPS64_EF_SIZE 304 /* size in bytes */ - -#if defined(CONFIG_32BIT) - -#define EF_R0 MIPS32_EF_R0 -#define EF_R1 MIPS32_EF_R1 -#define EF_R2 MIPS32_EF_R2 -#define EF_R3 MIPS32_EF_R3 -#define EF_R4 MIPS32_EF_R4 -#define EF_R5 MIPS32_EF_R5 -#define EF_R6 MIPS32_EF_R6 -#define EF_R7 MIPS32_EF_R7 -#define EF_R8 MIPS32_EF_R8 -#define EF_R9 MIPS32_EF_R9 -#define EF_R10 MIPS32_EF_R10 -#define EF_R11 MIPS32_EF_R11 -#define EF_R12 MIPS32_EF_R12 -#define EF_R13 MIPS32_EF_R13 -#define EF_R14 MIPS32_EF_R14 -#define EF_R15 MIPS32_EF_R15 -#define EF_R16 MIPS32_EF_R16 -#define EF_R17 MIPS32_EF_R17 -#define EF_R18 MIPS32_EF_R18 -#define EF_R19 MIPS32_EF_R19 -#define EF_R20 MIPS32_EF_R20 -#define EF_R21 MIPS32_EF_R21 -#define EF_R22 MIPS32_EF_R22 -#define EF_R23 MIPS32_EF_R23 -#define EF_R24 MIPS32_EF_R24 -#define EF_R25 MIPS32_EF_R25 -#define EF_R26 MIPS32_EF_R26 -#define EF_R27 MIPS32_EF_R27 -#define 
EF_R28 MIPS32_EF_R28 -#define EF_R29 MIPS32_EF_R29 -#define EF_R30 MIPS32_EF_R30 -#define EF_R31 MIPS32_EF_R31 -#define EF_LO MIPS32_EF_LO -#define EF_HI MIPS32_EF_HI -#define EF_CP0_EPC MIPS32_EF_CP0_EPC -#define EF_CP0_BADVADDR MIPS32_EF_CP0_BADVADDR -#define EF_CP0_STATUS MIPS32_EF_CP0_STATUS -#define EF_CP0_CAUSE MIPS32_EF_CP0_CAUSE -#define EF_UNUSED0 MIPS32_EF_UNUSED0 -#define EF_SIZE MIPS32_EF_SIZE - -#elif defined(CONFIG_64BIT) - -#define EF_R0 MIPS64_EF_R0 -#define EF_R1 MIPS64_EF_R1 -#define EF_R2 MIPS64_EF_R2 -#define EF_R3 MIPS64_EF_R3 -#define EF_R4 MIPS64_EF_R4 -#define EF_R5 MIPS64_EF_R5 -#define EF_R6 MIPS64_EF_R6 -#define EF_R7 MIPS64_EF_R7 -#define EF_R8 MIPS64_EF_R8 -#define EF_R9 MIPS64_EF_R9 -#define EF_R10 MIPS64_EF_R10 -#define EF_R11 MIPS64_EF_R11 -#define EF_R12 MIPS64_EF_R12 -#define EF_R13 MIPS64_EF_R13 -#define EF_R14 MIPS64_EF_R14 -#define EF_R15 MIPS64_EF_R15 -#define EF_R16 MIPS64_EF_R16 -#define EF_R17 MIPS64_EF_R17 -#define EF_R18 MIPS64_EF_R18 -#define EF_R19 MIPS64_EF_R19 -#define EF_R20 MIPS64_EF_R20 -#define EF_R21 MIPS64_EF_R21 -#define EF_R22 MIPS64_EF_R22 -#define EF_R23 MIPS64_EF_R23 -#define EF_R24 MIPS64_EF_R24 -#define EF_R25 MIPS64_EF_R25 -#define EF_R26 MIPS64_EF_R26 -#define EF_R27 MIPS64_EF_R27 -#define EF_R28 MIPS64_EF_R28 -#define EF_R29 MIPS64_EF_R29 -#define EF_R30 MIPS64_EF_R30 -#define EF_R31 MIPS64_EF_R31 -#define EF_LO MIPS64_EF_LO -#define EF_HI MIPS64_EF_HI -#define EF_CP0_EPC MIPS64_EF_CP0_EPC -#define EF_CP0_BADVADDR MIPS64_EF_CP0_BADVADDR -#define EF_CP0_STATUS MIPS64_EF_CP0_STATUS -#define EF_CP0_CAUSE MIPS64_EF_CP0_CAUSE -#define EF_SIZE MIPS64_EF_SIZE +#define EF_LO 32 +#define EF_HI 33 + +#define EF_CP0_EPC 34 +#define EF_CP0_BADVADDR 35 +#define EF_CP0_STATUS 36 +#define EF_CP0_CAUSE 37 + +#define EF_SIZE 304 /* size in bytes */ #endif /* CONFIG_64BIT */ diff --git a/arch/mips/include/asm/thread_info.h b/arch/mips/include/asm/thread_info.h index e6e5d916221..895320e2566 100644 --- a/arch/mips/include/asm/thread_info.h +++ b/arch/mips/include/asm/thread_info.h @@ -131,8 +131,6 @@ static inline struct thread_info *current_thread_info(void) #define _TIF_FPUBOUND (1<<TIF_FPUBOUND) #define _TIF_LOAD_WATCH (1<<TIF_LOAD_WATCH) -#define _TIF_WORK_SYSCALL_ENTRY (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | _TIF_SECCOMP) - /* work to do in syscall_trace_leave() */ #define _TIF_WORK_SYSCALL_EXIT (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT) diff --git a/arch/mips/kernel/binfmt_elfo32.c b/arch/mips/kernel/binfmt_elfo32.c index 7fdf1de0447..202e581e609 100644 --- a/arch/mips/kernel/binfmt_elfo32.c +++ b/arch/mips/kernel/binfmt_elfo32.c @@ -58,6 +58,12 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG]; #include <asm/processor.h> +/* + * When this file is selected, we are definitely running a 64bit kernel. 
+ * So using the right regs define in asm/reg.h + */ +#define WANT_COMPAT_REG_H + /* These MUST be defined before elf.h gets included */ extern void elf32_core_copy_regs(elf_gregset_t grp, struct pt_regs *regs); #define ELF_CORE_COPY_REGS(_dest, _regs) elf32_core_copy_regs(_dest, _regs); @@ -129,21 +135,21 @@ void elf32_core_copy_regs(elf_gregset_t grp, struct pt_regs *regs) { int i; - for (i = 0; i < MIPS32_EF_R0; i++) + for (i = 0; i < EF_R0; i++) grp[i] = 0; - grp[MIPS32_EF_R0] = 0; + grp[EF_R0] = 0; for (i = 1; i <= 31; i++) - grp[MIPS32_EF_R0 + i] = (elf_greg_t) regs->regs[i]; - grp[MIPS32_EF_R26] = 0; - grp[MIPS32_EF_R27] = 0; - grp[MIPS32_EF_LO] = (elf_greg_t) regs->lo; - grp[MIPS32_EF_HI] = (elf_greg_t) regs->hi; - grp[MIPS32_EF_CP0_EPC] = (elf_greg_t) regs->cp0_epc; - grp[MIPS32_EF_CP0_BADVADDR] = (elf_greg_t) regs->cp0_badvaddr; - grp[MIPS32_EF_CP0_STATUS] = (elf_greg_t) regs->cp0_status; - grp[MIPS32_EF_CP0_CAUSE] = (elf_greg_t) regs->cp0_cause; -#ifdef MIPS32_EF_UNUSED0 - grp[MIPS32_EF_UNUSED0] = 0; + grp[EF_R0 + i] = (elf_greg_t) regs->regs[i]; + grp[EF_R26] = 0; + grp[EF_R27] = 0; + grp[EF_LO] = (elf_greg_t) regs->lo; + grp[EF_HI] = (elf_greg_t) regs->hi; + grp[EF_CP0_EPC] = (elf_greg_t) regs->cp0_epc; + grp[EF_CP0_BADVADDR] = (elf_greg_t) regs->cp0_badvaddr; + grp[EF_CP0_STATUS] = (elf_greg_t) regs->cp0_status; + grp[EF_CP0_CAUSE] = (elf_greg_t) regs->cp0_cause; +#ifdef EF_UNUSED0 + grp[EF_UNUSED0] = 0; #endif } diff --git a/arch/mips/kernel/irq-gic.c b/arch/mips/kernel/irq-gic.c index bffbbc55787..c01b307317a 100644 --- a/arch/mips/kernel/irq-gic.c +++ b/arch/mips/kernel/irq-gic.c @@ -256,13 +256,11 @@ static void __init gic_setup_intr(unsigned int intr, unsigned int cpu, /* Setup Intr to Pin mapping */ if (pin & GIC_MAP_TO_NMI_MSK) { - int i; - GICWRITE(GIC_REG_ADDR(SHARED, GIC_SH_MAP_TO_PIN(intr)), pin); /* FIXME: hack to route NMI to all cpu's */ - for (i = 0; i < NR_CPUS; i += 32) { + for (cpu = 0; cpu < NR_CPUS; cpu += 32) { GICWRITE(GIC_REG_ADDR(SHARED, - GIC_SH_MAP_TO_VPE_REG_OFF(intr, i)), + GIC_SH_MAP_TO_VPE_REG_OFF(intr, cpu)), 0xffffffff); } } else { diff --git a/arch/mips/kernel/irq-msc01.c b/arch/mips/kernel/irq-msc01.c index ac9facc0869..fab40f7d2e0 100644 --- a/arch/mips/kernel/irq-msc01.c +++ b/arch/mips/kernel/irq-msc01.c @@ -131,7 +131,7 @@ void __init init_msc_irqs(unsigned long icubase, unsigned int irqbase, msc_irqma board_bind_eic_interrupt = &msc_bind_eic_interrupt; - for (; nirq > 0; nirq--, imp++) { + for (; nirq >= 0; nirq--, imp++) { int n = imp->im_irq; switch (imp->im_type) { diff --git a/arch/mips/kernel/mcount.S b/arch/mips/kernel/mcount.S index 3efbf0b29c1..33d067148e6 100644 --- a/arch/mips/kernel/mcount.S +++ b/arch/mips/kernel/mcount.S @@ -123,11 +123,7 @@ NESTED(_mcount, PT_SIZE, ra) nop #endif b ftrace_stub -#ifdef CONFIG_32BIT - addiu sp, sp, 8 -#else nop -#endif static_trace: MCOUNT_SAVE_REGS @@ -137,9 +133,6 @@ static_trace: move a1, AT /* arg2: parent's return address */ MCOUNT_RESTORE_REGS -#ifdef CONFIG_32BIT - addiu sp, sp, 8 -#endif .globl ftrace_stub ftrace_stub: RETURN_BACK @@ -188,11 +181,6 @@ NESTED(ftrace_graph_caller, PT_SIZE, ra) jal prepare_ftrace_return nop MCOUNT_RESTORE_REGS -#ifndef CONFIG_DYNAMIC_FTRACE -#ifdef CONFIG_32BIT - addiu sp, sp, 8 -#endif -#endif RETURN_BACK END(ftrace_graph_caller) diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c index 1b95b244322..9c6299c733a 100644 --- a/arch/mips/kernel/ptrace.c +++ b/arch/mips/kernel/ptrace.c @@ -161,7 +161,6 @@ int ptrace_setfpregs(struct 
task_struct *child, __u32 __user *data) __get_user(fregs[i], i + (__u64 __user *) data); __get_user(child->thread.fpu.fcr31, data + 64); - child->thread.fpu.fcr31 &= ~FPU_CSR_ALL_X; /* FIR may not be written. */ @@ -452,7 +451,7 @@ long arch_ptrace(struct task_struct *child, long request, break; #endif case FPC_CSR: - child->thread.fpu.fcr31 = data & ~FPU_CSR_ALL_X; + child->thread.fpu.fcr31 = data; break; case DSP_BASE ... DSP_BASE + 5: { dspreg_t *dregs; diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S index ed5bafb5d63..9b36424b03c 100644 --- a/arch/mips/kernel/scall32-o32.S +++ b/arch/mips/kernel/scall32-o32.S @@ -52,7 +52,7 @@ NESTED(handle_sys, PT_SIZE, sp) stack_done: lw t0, TI_FLAGS($28) # syscall tracing enabled? - li t1, _TIF_WORK_SYSCALL_ENTRY + li t1, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT and t0, t1 bnez t0, syscall_trace_entry # -> yes diff --git a/arch/mips/kernel/scall64-64.S b/arch/mips/kernel/scall64-64.S index be6627ead61..97a5909a61c 100644 --- a/arch/mips/kernel/scall64-64.S +++ b/arch/mips/kernel/scall64-64.S @@ -54,7 +54,7 @@ NESTED(handle_sys64, PT_SIZE, sp) sd a3, PT_R26(sp) # save a3 for syscall restarting - li t1, _TIF_WORK_SYSCALL_ENTRY + li t1, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT LONG_L t0, TI_FLAGS($28) # syscall tracing enabled? and t0, t1, t0 bnez t0, syscall_trace_entry diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S index cab150789c8..edcb6594e7b 100644 --- a/arch/mips/kernel/scall64-n32.S +++ b/arch/mips/kernel/scall64-n32.S @@ -47,7 +47,7 @@ NESTED(handle_sysn32, PT_SIZE, sp) sd a3, PT_R26(sp) # save a3 for syscall restarting - li t1, _TIF_WORK_SYSCALL_ENTRY + li t1, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT LONG_L t0, TI_FLAGS($28) # syscall tracing enabled? and t0, t1, t0 bnez t0, n32_syscall_trace_entry diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S index 37605dc8eef..74f485d3c0e 100644 --- a/arch/mips/kernel/scall64-o32.S +++ b/arch/mips/kernel/scall64-o32.S @@ -81,7 +81,7 @@ NESTED(handle_sys, PT_SIZE, sp) PTR 4b, bad_stack .previous - li t1, _TIF_WORK_SYSCALL_ENTRY + li t1, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT LONG_L t0, TI_FLAGS($28) # syscall tracing enabled? and t0, t1, t0 bnez t0, trace_a_syscall diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c index 2c81265bcf4..203d8857070 100644 --- a/arch/mips/kernel/unaligned.c +++ b/arch/mips/kernel/unaligned.c @@ -604,6 +604,7 @@ static void emulate_load_store_insn(struct pt_regs *regs, case sdc1_op: die_if_kernel("Unaligned FP access in kernel code", regs); BUG_ON(!used_math()); + BUG_ON(!is_fpu_owner()); lose_fpu(1); /* Save FPU state for the emulator. 
*/ res = fpu_emulator_cop1Handler(regs, &current->thread.fpu, 1, diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c index 2c7b3ade8ec..dd203e59e6f 100644 --- a/arch/mips/kvm/kvm_mips.c +++ b/arch/mips/kvm/kvm_mips.c @@ -149,7 +149,9 @@ void kvm_mips_free_vcpus(struct kvm *kvm) if (kvm->arch.guest_pmap[i] != KVM_INVALID_PAGE) kvm_mips_release_pfn_clean(kvm->arch.guest_pmap[i]); } - kfree(kvm->arch.guest_pmap); + + if (kvm->arch.guest_pmap) + kfree(kvm->arch.guest_pmap); kvm_for_each_vcpu(i, vcpu, kvm) { kvm_arch_vcpu_free(vcpu); @@ -297,7 +299,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id) if (cpu_has_veic || cpu_has_vint) { size = 0x200 + VECTORSPACING * 64; } else { - size = 0x4000; + size = 0x200; } /* Save Linux EBASE */ @@ -382,9 +384,12 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu) kvm_mips_dump_stats(vcpu); - kfree(vcpu->arch.guest_ebase); - kfree(vcpu->arch.kseg0_commpage); - kfree(vcpu); + if (vcpu->arch.guest_ebase) + kfree(vcpu->arch.guest_ebase); + + if (vcpu->arch.kseg0_commpage) + kfree(vcpu->arch.kseg0_commpage); + } void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) diff --git a/arch/mips/kvm/kvm_mips_emul.c b/arch/mips/kvm/kvm_mips_emul.c index e75ef8219ca..4b6274b47f3 100644 --- a/arch/mips/kvm/kvm_mips_emul.c +++ b/arch/mips/kvm/kvm_mips_emul.c @@ -1571,17 +1571,17 @@ kvm_mips_handle_ri(unsigned long cause, uint32_t *opc, arch->gprs[rt] = kvm_read_c0_guest_userlocal(cop0); #else /* UserLocal not implemented */ - er = EMULATE_FAIL; + er = kvm_mips_emulate_ri_exc(cause, opc, run, vcpu); #endif break; default: - kvm_debug("RDHWR %#x not supported @ %p\n", rd, opc); + printk("RDHWR not supported\n"); er = EMULATE_FAIL; break; } } else { - kvm_debug("Emulate RI not supported @ %p: %#x\n", opc, inst); + printk("Emulate RI not supported @ %p: %#x\n", opc, inst); er = EMULATE_FAIL; } @@ -1590,7 +1590,6 @@ kvm_mips_handle_ri(unsigned long cause, uint32_t *opc, */ if (er == EMULATE_FAIL) { vcpu->arch.pc = curr_pc; - er = kvm_mips_emulate_ri_exc(cause, opc, run, vcpu); } return er; } diff --git a/arch/mips/lantiq/dts/easy50712.dts b/arch/mips/lantiq/dts/easy50712.dts index 143b8a37b5e..fac1f5b178e 100644 --- a/arch/mips/lantiq/dts/easy50712.dts +++ b/arch/mips/lantiq/dts/easy50712.dts @@ -8,7 +8,6 @@ }; memory@0 { - device_type = "memory"; reg = <0x0 0x2000000>; }; diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c index 5495101d32c..21813beec7a 100644 --- a/arch/mips/mm/c-r4k.c +++ b/arch/mips/mm/c-r4k.c @@ -12,7 +12,6 @@ #include <linux/highmem.h> #include <linux/kernel.h> #include <linux/linkage.h> -#include <linux/preempt.h> #include <linux/sched.h> #include <linux/smp.h> #include <linux/mm.h> @@ -602,7 +601,6 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size) /* Catch bad driver code */ BUG_ON(size == 0); - preempt_disable(); if (cpu_has_inclusive_pcaches) { if (size >= scache_size) r4k_blast_scache(); @@ -623,7 +621,6 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size) R4600_HIT_CACHEOP_WAR_IMPL; blast_dcache_range(addr, addr + size); } - preempt_enable(); bc_wback_inv(addr, size); __sync(); @@ -634,7 +631,6 @@ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size) /* Catch bad driver code */ BUG_ON(size == 0); - preempt_disable(); if (cpu_has_inclusive_pcaches) { if (size >= scache_size) r4k_blast_scache(); @@ -659,7 +655,6 @@ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size) R4600_HIT_CACHEOP_WAR_IMPL; blast_inv_dcache_range(addr, addr +
size); } - preempt_enable(); bc_inv(addr, size); __sync(); diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c index a91a7a99f70..afeef93f81a 100644 --- a/arch/mips/mm/tlbex.c +++ b/arch/mips/mm/tlbex.c @@ -1091,7 +1091,6 @@ static void __cpuinit build_update_entries(u32 **p, unsigned int tmp, struct mips_huge_tlb_info { int huge_pte; int restore_scratch; - bool need_reload_pte; }; static struct mips_huge_tlb_info __cpuinit @@ -1106,7 +1105,6 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l, rv.huge_pte = scratch; rv.restore_scratch = 0; - rv.need_reload_pte = false; if (check_for_high_segbits) { UASM_i_MFC0(p, tmp, C0_BADVADDR); @@ -1295,7 +1293,6 @@ static void __cpuinit build_r4000_tlb_refill_handler(void) } else { htlb_info.huge_pte = K0; htlb_info.restore_scratch = 0; - htlb_info.need_reload_pte = true; vmalloc_mode = refill_noscratch; /* * create the plain linear handler @@ -1332,8 +1329,6 @@ static void __cpuinit build_r4000_tlb_refill_handler(void) } #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT uasm_l_tlb_huge_update(&l, p); - if (htlb_info.need_reload_pte) - UASM_i_LW(&p, htlb_info.huge_pte, 0, K1); build_huge_update_entries(&p, htlb_info.huge_pte, K1); build_huge_tlb_write_entry(&p, &l, &r, K0, tlb_random, htlb_info.restore_scratch); diff --git a/arch/mips/power/hibernate.S b/arch/mips/power/hibernate.S index 32a7c828f07..7e0277a1048 100644 --- a/arch/mips/power/hibernate.S +++ b/arch/mips/power/hibernate.S @@ -43,7 +43,6 @@ LEAF(swsusp_arch_resume) bne t1, t3, 1b PTR_L t0, PBE_NEXT(t0) bnez t0, 0b - jal local_flush_tlb_all /* Avoid TLB mismatch after kernel resume */ PTR_LA t0, saved_regs PTR_L ra, PT_R31(t0) PTR_L sp, PT_R29(t0) diff --git a/arch/mips/ralink/dts/mt7620a_eval.dts b/arch/mips/ralink/dts/mt7620a_eval.dts index 709f58132f5..35eb874ab7f 100644 --- a/arch/mips/ralink/dts/mt7620a_eval.dts +++ b/arch/mips/ralink/dts/mt7620a_eval.dts @@ -7,7 +7,6 @@ model = "Ralink MT7620A evaluation board"; memory@0 { - device_type = "memory"; reg = <0x0 0x2000000>; }; diff --git a/arch/mips/ralink/dts/rt2880_eval.dts b/arch/mips/ralink/dts/rt2880_eval.dts index 0a685db093d..322d7002595 100644 --- a/arch/mips/ralink/dts/rt2880_eval.dts +++ b/arch/mips/ralink/dts/rt2880_eval.dts @@ -7,7 +7,6 @@ model = "Ralink RT2880 evaluation board"; memory@0 { - device_type = "memory"; reg = <0x8000000 0x2000000>; }; diff --git a/arch/mips/ralink/dts/rt3052_eval.dts b/arch/mips/ralink/dts/rt3052_eval.dts index ec9e9a03554..0ac73ea2819 100644 --- a/arch/mips/ralink/dts/rt3052_eval.dts +++ b/arch/mips/ralink/dts/rt3052_eval.dts @@ -7,7 +7,6 @@ model = "Ralink RT3052 evaluation board"; memory@0 { - device_type = "memory"; reg = <0x0 0x2000000>; }; diff --git a/arch/mips/ralink/dts/rt3883_eval.dts b/arch/mips/ralink/dts/rt3883_eval.dts index e8df21a5d10..2fa6b330bf4 100644 --- a/arch/mips/ralink/dts/rt3883_eval.dts +++ b/arch/mips/ralink/dts/rt3883_eval.dts @@ -7,7 +7,6 @@ model = "Ralink RT3883 evaluation board"; memory@0 { - device_type = "memory"; reg = <0x0 0x2000000>; }; diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S index fec8bf97d80..d8a455ede5a 100644 --- a/arch/openrisc/kernel/entry.S +++ b/arch/openrisc/kernel/entry.S @@ -853,44 +853,37 @@ UNHANDLED_EXCEPTION(_vector_0x1f00,0x1f00) /* ========================================================[ return ] === */ -_resume_userspace: - DISABLE_INTERRUPTS(r3,r4) - l.lwz r4,TI_FLAGS(r10) - l.andi r13,r4,_TIF_WORK_MASK - l.sfeqi r13,0 - l.bf _restore_all - l.nop - _work_pending: - l.lwz r5,PT_ORIG_GPR11(r1) - 
l.sfltsi r5,0 - l.bnf 1f + /* + * if (current_thread_info->flags & _TIF_NEED_RESCHED) + * schedule(); + */ + l.lwz r5,TI_FLAGS(r10) + l.andi r3,r5,_TIF_NEED_RESCHED + l.sfnei r3,0 + l.bnf _work_notifysig l.nop - l.andi r5,r5,0 -1: - l.jal do_work_pending - l.ori r3,r1,0 /* pt_regs */ - - l.sfeqi r11,0 - l.bf _restore_all + l.jal schedule l.nop - l.sfltsi r11,0 - l.bnf 1f + l.j _resume_userspace l.nop - l.and r11,r11,r0 - l.ori r11,r11,__NR_restart_syscall - l.j _syscall_check_trace_enter + +/* Handle pending signals and notify-resume requests. + * do_notify_resume must be passed the latest pushed pt_regs, not + * necessarily the "userspace" ones. Also, pt_regs->syscallno + * must be set so that the syscall restart functionality works. + */ +_work_notifysig: + l.jal do_notify_resume + l.ori r3,r1,0 /* pt_regs */ + +_resume_userspace: + DISABLE_INTERRUPTS(r3,r4) + l.lwz r3,TI_FLAGS(r10) + l.andi r3,r3,_TIF_WORK_MASK + l.sfnei r3,0 + l.bf _work_pending l.nop -1: - l.lwz r11,PT_ORIG_GPR11(r1) - /* Restore arg registers */ - l.lwz r3,PT_GPR3(r1) - l.lwz r4,PT_GPR4(r1) - l.lwz r5,PT_GPR5(r1) - l.lwz r6,PT_GPR6(r1) - l.lwz r7,PT_GPR7(r1) - l.j _syscall_check_trace_enter - l.lwz r8,PT_GPR8(r1) _restore_all: RESTORE_ALL diff --git a/arch/openrisc/kernel/signal.c b/arch/openrisc/kernel/signal.c index c277ec82783..ae167f7e081 100644 --- a/arch/openrisc/kernel/signal.c +++ b/arch/openrisc/kernel/signal.c @@ -28,24 +28,24 @@ #include <linux/tracehook.h> #include <asm/processor.h> -#include <asm/syscall.h> #include <asm/ucontext.h> #include <asm/uaccess.h> #define DEBUG_SIG 0 struct rt_sigframe { + struct siginfo *pinfo; + void *puc; struct siginfo info; struct ucontext uc; unsigned char retcode[16]; /* trampoline code */ }; -static int restore_sigcontext(struct pt_regs *regs, - struct sigcontext __user *sc) +static int restore_sigcontext(struct pt_regs *regs, struct sigcontext *sc) { - int err = 0; + unsigned int err = 0; - /* Always make any pending restarted system calls return -EINTR */ + /* Alwys make any pending restarted system call return -EINTR */ current_thread_info()->restart_block.fn = do_no_restart_syscall; /* @@ -53,21 +53,25 @@ static int restore_sigcontext(struct pt_regs *regs, * (sc is already checked for VERIFY_READ since the sigframe was * checked in sys_sigreturn previously) */ - err |= __copy_from_user(regs, sc->regs.gpr, 32 * sizeof(unsigned long)); - err |= __copy_from_user(&regs->pc, &sc->regs.pc, sizeof(unsigned long)); - err |= __copy_from_user(&regs->sr, &sc->regs.sr, sizeof(unsigned long)); + if (__copy_from_user(regs, sc->regs.gpr, 32 * sizeof(unsigned long))) + goto badframe; + if (__copy_from_user(&regs->pc, &sc->regs.pc, sizeof(unsigned long))) + goto badframe; + if (__copy_from_user(&regs->sr, &sc->regs.sr, sizeof(unsigned long))) + goto badframe; /* make sure the SM-bit is cleared so user-mode cannot fool us */ regs->sr &= ~SPR_SR_SM; - regs->orig_gpr11 = -1; /* Avoid syscall restart checks */ - /* TODO: the other ports use regs->orig_XX to disable syscall checks * after this completes, but we don't use that mechanism. maybe we can * use it now ? */ return err; + +badframe: + return 1; } asmlinkage long _sys_rt_sigreturn(struct pt_regs *regs) @@ -107,18 +111,21 @@ badframe: * Set up a signal frame. */ -static int setup_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc) +static int setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs, + unsigned long mask) { int err = 0; /* copy the regs */ - /* There should be no need to save callee-saved registers here...
- * ...but we save them anyway. Revisit this - */ + err |= __copy_to_user(sc->regs.gpr, regs, 32 * sizeof(unsigned long)); err |= __copy_to_user(&sc->regs.pc, &regs->pc, sizeof(unsigned long)); err |= __copy_to_user(&sc->regs.sr, &regs->sr, sizeof(unsigned long)); + /* then some other stuff */ + + err |= __put_user(mask, &sc->oldmask); + return err; } @@ -174,18 +181,24 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, int err = 0; frame = get_sigframe(ka, regs, sizeof(*frame)); + if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) goto give_sigsegv; - /* Create siginfo. */ + err |= __put_user(&frame->info, &frame->pinfo); + err |= __put_user(&frame->uc, &frame->puc); + if (ka->sa.sa_flags & SA_SIGINFO) err |= copy_siginfo_to_user(&frame->info, info); + if (err) + goto give_sigsegv; - /* Create the ucontext. */ + /* Clear all the bits of the ucontext we don't use. */ + err |= __clear_user(&frame->uc, offsetof(struct ucontext, uc_mcontext)); err |= __put_user(0, &frame->uc.uc_flags); err |= __put_user(NULL, &frame->uc.uc_link); err |= __save_altstack(&frame->uc.uc_stack, regs->sp); - err |= setup_sigcontext(regs, &frame->uc.uc_mcontext); + err |= setup_sigcontext(&frame->uc.uc_mcontext, regs, set->sig[0]); err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)); @@ -194,12 +207,9 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, /* trampoline - the desired return ip is the retcode itself */ return_ip = (unsigned long)&frame->retcode; - /* This is: - l.ori r11,r0,__NR_sigreturn - l.sys 1 - */ - err |= __put_user(0xa960, (short *)(frame->retcode + 0)); - err |= __put_user(__NR_rt_sigreturn, (short *)(frame->retcode + 2)); + /* This is l.ori r11,r0,__NR_sigreturn, l.sys 1 */ + err |= __put_user(0xa960, (short *)(frame->retcode + 0)); + err |= __put_user(__NR_rt_sigreturn, (short *)(frame->retcode + 2)); err |= __put_user(0x20000001, (unsigned long *)(frame->retcode + 4)); err |= __put_user(0x15000000, (unsigned long *)(frame->retcode + 8)); @@ -252,106 +262,82 @@ handle_signal(unsigned long sig, * mode below. */ -int do_signal(struct pt_regs *regs, int syscall) +void do_signal(struct pt_regs *regs) { siginfo_t info; int signr; struct k_sigaction ka; - unsigned long continue_addr = 0; - unsigned long restart_addr = 0; - unsigned long retval = 0; - int restart = 0; - - if (syscall) { - continue_addr = regs->pc; - restart_addr = continue_addr - 4; - retval = regs->gpr[11]; - - /* - * Setup syscall restart here so that a debugger will - * see the already changed PC. - */ - switch (retval) { + + /* + * We want the common case to go fast, which + * is why we may in certain cases get here from + * kernel mode. Just return without doing anything + * if so. + */ + if (!user_mode(regs)) + return; + + signr = get_signal_to_deliver(&info, &ka, regs, NULL); + + /* If we are coming out of a syscall then we need + * to check if the syscall was interrupted and wants to be + * restarted after handling the signal. If so, the original + * syscall number is put back into r11 and the PC rewound to + * point at the l.sys instruction that resulted in the + * original syscall. Syscall results other than the four + * below mean that the syscall executed to completion and no + * restart is necessary.
+ */ + if (regs->orig_gpr11) { + int restart = 0; + + switch (regs->gpr[11]) { case -ERESTART_RESTARTBLOCK: - restart = -2; - /* Fall through */ case -ERESTARTNOHAND: + /* Restart if there is no signal handler */ + restart = (signr <= 0); + break; case -ERESTARTSYS: + /* Restart if there no signal handler or + * SA_RESTART flag is set */ + restart = (signr <= 0 || (ka.sa.sa_flags & SA_RESTART)); + break; case -ERESTARTNOINTR: - restart++; - regs->gpr[11] = regs->orig_gpr11; - regs->pc = restart_addr; + /* Always restart */ + restart = 1; break; } - } - /* - * Get the signal to deliver. When running under ptrace, at this - * point the debugger may change all our registers ... - */ - signr = get_signal_to_deliver(&info, &ka, regs, NULL); - /* - * Depending on the signal settings we may need to revert the - * decision to restart the system call. But skip this if a - * debugger has chosen to restart at a different PC. - */ - if (signr > 0) { - if (unlikely(restart) && regs->pc == restart_addr) { - if (retval == -ERESTARTNOHAND || - retval == -ERESTART_RESTARTBLOCK - || (retval == -ERESTARTSYS - && !(ka.sa.sa_flags & SA_RESTART))) { - /* No automatic restart */ - regs->gpr[11] = -EINTR; - regs->pc = continue_addr; - } + if (restart) { + if (regs->gpr[11] == -ERESTART_RESTARTBLOCK) + regs->gpr[11] = __NR_restart_syscall; + else + regs->gpr[11] = regs->orig_gpr11; + regs->pc -= 4; + } else { + regs->gpr[11] = -EINTR; } + } - handle_signal(signr, &info, &ka, regs); - } else { - /* no handler */ + if (signr <= 0) { + /* no signal to deliver so we just put the saved sigmask + * back */ restore_saved_sigmask(); - /* - * Restore pt_regs PC as syscall restart will be handled by - * kernel without return to userspace - */ - if (unlikely(restart) && regs->pc == restart_addr) { - regs->pc = continue_addr; - return restart; - } + } else { /* signr > 0 */ + /* Whee! Actually deliver the signal. */ + handle_signal(signr, &info, &ka, regs); } - return 0; + return; } -asmlinkage int -do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall) +asmlinkage void do_notify_resume(struct pt_regs *regs) { - do { - if (likely(thread_flags & _TIF_NEED_RESCHED)) { - schedule(); - } else { - if (unlikely(!user_mode(regs))) - return 0; - local_irq_enable(); - if (thread_flags & _TIF_SIGPENDING) { - int restart = do_signal(regs, syscall); - if (unlikely(restart)) { - /* - * Restart without handlers. - * Deal with it without leaving - * the kernel space. - */ - return restart; - } - syscall = 0; - } else { - clear_thread_flag(TIF_NOTIFY_RESUME); - tracehook_notify_resume(regs); - } - } - local_irq_disable(); - thread_flags = current_thread_info()->flags; - } while (thread_flags & _TIF_WORK_MASK); - return 0; + if (current_thread_info()->flags & _TIF_SIGPENDING) + do_signal(regs); + + if (current_thread_info()->flags & _TIF_NOTIFY_RESUME) { + clear_thread_flag(TIF_NOTIFY_RESUME); + tracehook_notify_resume(regs); + } } diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile index 94607bfa273..96ec3982be8 100644 --- a/arch/parisc/Makefile +++ b/arch/parisc/Makefile @@ -46,12 +46,7 @@ cflags-y := -pipe # These flags should be implied by an hppa-linux configuration, but they # are not in gcc 3.2. -cflags-y += -mno-space-regs - -# -mfast-indirect-calls is only relevant for 32-bit kernels. -ifndef CONFIG_64BIT -cflags-y += -mfast-indirect-calls -endif +cflags-y += -mno-space-regs -mfast-indirect-calls # Currently we save and restore fpregs on all kernel entry/interruption paths. 
# If that gets optimized, we might need to disable the use of fpregs in the diff --git a/arch/parisc/include/asm/processor.h b/arch/parisc/include/asm/processor.h index c6ee86542fe..cc2290a3cac 100644 --- a/arch/parisc/include/asm/processor.h +++ b/arch/parisc/include/asm/processor.h @@ -53,8 +53,6 @@ #define STACK_TOP TASK_SIZE #define STACK_TOP_MAX DEFAULT_TASK_SIZE -#define STACK_SIZE_MAX (1 << 30) /* 1 GB */ - #endif #ifndef __ASSEMBLY__ diff --git a/arch/parisc/include/uapi/asm/signal.h b/arch/parisc/include/uapi/asm/signal.h index f5645d6a89f..a2fa297196b 100644 --- a/arch/parisc/include/uapi/asm/signal.h +++ b/arch/parisc/include/uapi/asm/signal.h @@ -69,6 +69,8 @@ #define SA_NOMASK SA_NODEFER #define SA_ONESHOT SA_RESETHAND +#define SA_RESTORER 0x04000000 /* obsolete -- ignored */ + #define MINSIGSTKSZ 2048 #define SIGSTKSZ 8192 diff --git a/arch/parisc/kernel/hardware.c b/arch/parisc/kernel/hardware.c index c22c3d84e28..872275659d9 100644 --- a/arch/parisc/kernel/hardware.c +++ b/arch/parisc/kernel/hardware.c @@ -1205,8 +1205,7 @@ static struct hp_hardware hp_hardware_list[] = { {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"}, - {HPHW_FIO, 0x076, 0x000AD, 0x0, "Crestone Peak Core RS-232"}, - {HPHW_FIO, 0x077, 0x000AD, 0x0, "Crestone Peak Fast? Core RS-232"}, + {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"}, diff --git a/arch/parisc/kernel/syscall_table.S b/arch/parisc/kernel/syscall_table.S index 10a0c2aad8c..0c9107285e6 100644 --- a/arch/parisc/kernel/syscall_table.S +++ b/arch/parisc/kernel/syscall_table.S @@ -392,7 +392,7 @@ ENTRY_COMP(vmsplice) ENTRY_COMP(move_pages) /* 295 */ ENTRY_SAME(getcpu) - ENTRY_COMP(epoll_pwait) + ENTRY_SAME(epoll_pwait) ENTRY_COMP(statfs64) ENTRY_COMP(fstatfs64) ENTRY_COMP(kexec_load) /* 300 */ diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 7f656f119ea..fe404e77246 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -138,7 +138,6 @@ config PPC select ARCH_USE_BUILTIN_BSWAP select OLD_SIGSUSPEND select OLD_SIGACTION if PPC32 - select ARCH_SUPPORTS_ATOMIC_RMW config EARLY_PRINTK bool diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile index 56a4a5d205a..967fd23ace7 100644 --- a/arch/powerpc/Makefile +++ b/arch/powerpc/Makefile @@ -97,9 +97,7 @@ CFLAGS-$(CONFIG_POWER7_CPU) += $(call cc-option,-mcpu=power7) CFLAGS-$(CONFIG_TUNE_CELL) += $(call cc-option,-mtune=cell) -asinstr := $(call as-instr,lis 9$(comma)foo@high,-DHAVE_AS_ATHIGH=1) - -KBUILD_CPPFLAGS += -Iarch/$(ARCH) $(asinstr) +KBUILD_CPPFLAGS += -Iarch/$(ARCH) KBUILD_AFLAGS += -Iarch/$(ARCH) KBUILD_CFLAGS += -msoft-float -pipe -Iarch/$(ARCH) $(CFLAGS-y) CPP = $(CC) -E $(KBUILD_CFLAGS) diff --git a/arch/powerpc/include/asm/compat.h b/arch/powerpc/include/asm/compat.h index ef22898daa9..84fdf6857c3 100644 --- a/arch/powerpc/include/asm/compat.h +++ b/arch/powerpc/include/asm/compat.h @@ -8,11 +8,7 @@ #include <linux/sched.h> #define COMPAT_USER_HZ 100 -#ifdef __BIG_ENDIAN__ #define COMPAT_UTS_MACHINE "ppc\0\0" -#else -#define COMPAT_UTS_MACHINE "ppcle\0\0" -#endif typedef u32 compat_size_t; typedef s32 compat_ssize_t; diff --git a/arch/powerpc/include/asm/perf_event_server.h b/arch/powerpc/include/asm/perf_event_server.h index 960bf64788a..f265049dd7d 
100644 --- a/arch/powerpc/include/asm/perf_event_server.h +++ b/arch/powerpc/include/asm/perf_event_server.h @@ -59,7 +59,7 @@ struct power_pmu { #define PPMU_SIAR_VALID 0x00000010 /* Processor has SIAR Valid bit */ #define PPMU_HAS_SSLOT 0x00000020 /* Has sampled slot in MMCRA */ #define PPMU_HAS_SIER 0x00000040 /* Has SIER */ -#define PPMU_ARCH_207S 0x00000080 /* PMC is architecture v2.07S */ +#define PPMU_BHRB 0x00000080 /* has BHRB feature enabled */ /* * Values for flags to get_alternatives() diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h index 22cee04a47f..2f1b6c5f817 100644 --- a/arch/powerpc/include/asm/ppc_asm.h +++ b/arch/powerpc/include/asm/ppc_asm.h @@ -390,16 +390,11 @@ n: * ld rY,ADDROFF(name)(rX) */ #ifdef __powerpc64__ -#ifdef HAVE_AS_ATHIGH -#define __AS_ATHIGH high -#else -#define __AS_ATHIGH h -#endif #define LOAD_REG_IMMEDIATE(reg,expr) \ lis reg,(expr)@highest; \ ori reg,reg,(expr)@higher; \ rldicr reg,reg,32,31; \ - oris reg,reg,(expr)@__AS_ATHIGH; \ + oris reg,reg,(expr)@h; \ ori reg,reg,(expr)@l; #define LOAD_REG_ADDR(reg,name) \ diff --git a/arch/powerpc/include/asm/pte-hash64-64k.h b/arch/powerpc/include/asm/pte-hash64-64k.h index 063fcadd1a0..d836d945068 100644 --- a/arch/powerpc/include/asm/pte-hash64-64k.h +++ b/arch/powerpc/include/asm/pte-hash64-64k.h @@ -40,39 +40,17 @@ #ifndef __ASSEMBLY__ -#include <asm/barrier.h> /* for smp_rmb() */ - /* * With 64K pages on hash table, we have a special PTE format that * uses a second "half" of the page table to encode sub-page information * in order to deal with 64K made of 4K HW pages. Thus we override the * generic accessors and iterators here */ -#define __real_pte __real_pte -static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep) -{ - real_pte_t rpte; - - rpte.pte = pte; - rpte.hidx = 0; - if (pte_val(pte) & _PAGE_COMBO) { - /* - * Make sure we order the hidx load against the _PAGE_COMBO - * check. The store side ordering is done in __hash_page_4K - */ - smp_rmb(); - rpte.hidx = pte_val(*((ptep) + PTRS_PER_PTE)); - } - return rpte; -} - -static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index) -{ - if ((pte_val(rpte.pte) & _PAGE_COMBO)) - return (rpte.hidx >> (index<<2)) & 0xf; - return (pte_val(rpte.pte) >> 12) & 0xf; -} - +#define __real_pte(e,p) ((real_pte_t) { \ + (e), (pte_val(e) & _PAGE_COMBO) ? \ + (pte_val(*((p) + PTRS_PER_PTE))) : 0 }) +#define __rpte_to_hidx(r,index) ((pte_val((r).pte) & _PAGE_COMBO) ? \ + (((r).hidx >> ((index)<<2)) & 0xf) : ((pte_val((r).pte) >> 12) & 0xf)) #define __rpte_to_pte(r) ((r).pte) #define __rpte_sub_valid(rpte, index) \ (pte_val(rpte.pte) & (_PAGE_HPTE_SUB0 >> (index))) diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h index 637c97fcbeb..becc08e6a65 100644 --- a/arch/powerpc/include/asm/ptrace.h +++ b/arch/powerpc/include/asm/ptrace.h @@ -35,12 +35,6 @@ STACK_FRAME_OVERHEAD + 288) #define STACK_FRAME_MARKER 12 -#if defined(_CALL_ELF) && _CALL_ELF == 2 -#define STACK_FRAME_MIN_SIZE 32 -#else -#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD -#endif - /* Size of dummy stack frame allocated when calling signal handler. */ #define __SIGNAL_FRAMESIZE 128 #define __SIGNAL_FRAMESIZE32 64 @@ -52,7 +46,6 @@ #define STACK_FRAME_REGS_MARKER ASM_CONST(0x72656773) #define STACK_INT_FRAME_SIZE (sizeof(struct pt_regs) + STACK_FRAME_OVERHEAD) #define STACK_FRAME_MARKER 2 -#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD /* Size of stack frame allocated when calling signal handler. 
*/ #define __SIGNAL_FRAMESIZE 64 diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h index 795f67792ea..e1fb161252e 100644 --- a/arch/powerpc/include/asm/reg.h +++ b/arch/powerpc/include/asm/reg.h @@ -208,7 +208,6 @@ #define SPRN_ACOP 0x1F /* Available Coprocessor Register */ #define SPRN_TFIAR 0x81 /* Transaction Failure Inst Addr */ #define SPRN_TEXASR 0x82 /* Transaction EXception & Summary */ -#define TEXASR_FS __MASK(63-36) /* Transaction Failure Summary */ #define SPRN_TEXASRU 0x83 /* '' '' '' Upper 32 */ #define SPRN_TFHAR 0x80 /* Transaction Failure Handler Addr */ #define SPRN_CTRLF 0x088 diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h index 05fcdd82682..43523fe0d8b 100644 --- a/arch/powerpc/include/asm/systbl.h +++ b/arch/powerpc/include/asm/systbl.h @@ -190,7 +190,7 @@ SYSCALL_SPU(getcwd) SYSCALL_SPU(capget) SYSCALL_SPU(capset) COMPAT_SYS(sigaltstack) -SYSX_SPU(sys_sendfile64,compat_sys_sendfile,sys_sendfile) +COMPAT_SYS_SPU(sendfile) SYSCALL(ni_syscall) SYSCALL(ni_syscall) PPC_SYS(vfork) diff --git a/arch/powerpc/include/uapi/asm/cputable.h b/arch/powerpc/include/uapi/asm/cputable.h index de2c0e4ee1a..5b7657959fa 100644 --- a/arch/powerpc/include/uapi/asm/cputable.h +++ b/arch/powerpc/include/uapi/asm/cputable.h @@ -41,6 +41,5 @@ #define PPC_FEATURE2_EBB 0x10000000 #define PPC_FEATURE2_ISEL 0x08000000 #define PPC_FEATURE2_TAR 0x04000000 -#define PPC_FEATURE2_VEC_CRYPTO 0x02000000 #endif /* _UAPI__ASM_POWERPC_CPUTABLE_H */ diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c index b2dc4552285..2a45d0f0438 100644 --- a/arch/powerpc/kernel/cputable.c +++ b/arch/powerpc/kernel/cputable.c @@ -105,8 +105,7 @@ extern void __restore_cpu_e6500(void); PPC_FEATURE_PSERIES_PERFMON_COMPAT) #define COMMON_USER2_POWER8 (PPC_FEATURE2_ARCH_2_07 | \ PPC_FEATURE2_HTM_COMP | PPC_FEATURE2_DSCR | \ - PPC_FEATURE2_ISEL | PPC_FEATURE2_TAR | \ - PPC_FEATURE2_VEC_CRYPTO) + PPC_FEATURE2_ISEL | PPC_FEATURE2_TAR) #define COMMON_USER_PA6T (COMMON_USER_PPC64 | PPC_FEATURE_PA6T |\ PPC_FEATURE_TRUE_LE | \ PPC_FEATURE_HAS_ALTIVEC_COMP) diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index d55357ee902..7baa27b7abb 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -523,31 +523,6 @@ out_and_saveregs: tm_save_sprs(thr); } -extern void __tm_recheckpoint(struct thread_struct *thread, - unsigned long orig_msr); - -void tm_recheckpoint(struct thread_struct *thread, - unsigned long orig_msr) -{ - unsigned long flags; - - /* We really can't be interrupted here as the TEXASR registers can't - * change and later in the trecheckpoint code, we have a userspace R1. - * So let's hard disable over this region. - */ - local_irq_save(flags); - hard_irq_disable(); - - /* The TM SPRs are restored here, so that TEXASR.FS can be set - * before the trecheckpoint and no explosion occurs. - */ - tm_restore_sprs(thread); - - __tm_recheckpoint(thread, orig_msr); - - local_irq_restore(flags); -} - static inline void tm_recheckpoint_new_task(struct task_struct *new) { unsigned long msr; @@ -566,10 +541,13 @@ static inline void tm_recheckpoint_new_task(struct task_struct *new) if (!new->thread.regs) return; - if (!MSR_TM_ACTIVE(new->thread.regs->msr)){ - tm_restore_sprs(&new->thread); + /* The TM SPRs are restored here, so that TEXASR.FS can be set + * before the trecheckpoint and no explosion occurs. 
+ */ + tm_restore_sprs(&new->thread); + + if (!MSR_TM_ACTIVE(new->thread.regs->msr)) return; - } msr = new->thread.tm_orig_msr; /* Recheckpoint to restore original checkpointed register state. */ TM_DEBUG("*** tm_recheckpoint of pid %d " @@ -948,16 +926,6 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) flush_altivec_to_thread(src); flush_vsx_to_thread(src); flush_spe_to_thread(src); - /* - * Flush TM state out so we can copy it. __switch_to_tm() does this - * flush but it removes the checkpointed state from the current CPU and - * transitions the CPU out of TM mode. Hence we need to call - * tm_recheckpoint_new_task() (on the same task) to restore the - * checkpointed state back and the TM mode. - */ - __switch_to_tm(src); - tm_recheckpoint_new_task(src); - *dst = *src; return 0; } diff --git a/arch/powerpc/kernel/reloc_64.S b/arch/powerpc/kernel/reloc_64.S index c712ecec13b..b47a0e1ab00 100644 --- a/arch/powerpc/kernel/reloc_64.S +++ b/arch/powerpc/kernel/reloc_64.S @@ -81,7 +81,6 @@ _GLOBAL(relocate) 6: blr -.balign 8 p_dyn: .llong __dynamic_start - 0b p_rela: .llong __rela_dyn_start - 0b p_st: .llong _stext - 0b diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index 81f929f026f..7e9dff80e1d 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -863,8 +863,6 @@ static long restore_tm_user_regs(struct pt_regs *regs, * transactional versions should be loaded. */ tm_enable(); - /* Make sure the transaction is marked as failed */ - current->thread.tm_texasr |= TEXASR_FS; /* This loads the checkpointed FP/VEC state, if used */ tm_recheckpoint(¤t->thread, msr); /* Get the top half of the MSR */ diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c index 74d9615a6bb..35c20a1fb36 100644 --- a/arch/powerpc/kernel/signal_64.c +++ b/arch/powerpc/kernel/signal_64.c @@ -513,8 +513,6 @@ static long restore_tm_sigcontexts(struct pt_regs *regs, } #endif tm_enable(); - /* Make sure the transaction is marked as failed */ - current->thread.tm_texasr |= TEXASR_FS; /* This loads the checkpointed FP/VEC state, if used */ tm_recheckpoint(¤t->thread, msr); diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 57fd5c1e8e8..5fc29ad7e26 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -512,7 +512,7 @@ void timer_interrupt(struct pt_regs * regs) __get_cpu_var(irq_stat).timer_irqs++; -#if defined(CONFIG_PPC32) && defined(CONFIG_PPC_PMAC) +#if defined(CONFIG_PPC32) && defined(CONFIG_PMAC) if (atomic_read(&ppc_n_lost_interrupts) != 0) do_IRQ(regs); #endif diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S index 1e43ed404b9..f2abb219a17 100644 --- a/arch/powerpc/kernel/tm.S +++ b/arch/powerpc/kernel/tm.S @@ -296,7 +296,7 @@ dont_backup_fp: * Call with IRQs off, stacks get all out of sync for * some periods in here! */ -_GLOBAL(__tm_recheckpoint) +_GLOBAL(tm_recheckpoint) mfcr r5 mflr r0 std r5, 8(r1) diff --git a/arch/powerpc/lib/crtsavres.S b/arch/powerpc/lib/crtsavres.S index a5b30c71a8d..b2c68ce139a 100644 --- a/arch/powerpc/lib/crtsavres.S +++ b/arch/powerpc/lib/crtsavres.S @@ -231,87 +231,6 @@ _GLOBAL(_rest32gpr_31_x) mr 1,11 blr -#ifdef CONFIG_ALTIVEC -/* Called with r0 pointing just beyond the end of the vector save area. 
*/ - -_GLOBAL(_savevr_20) - li r11,-192 - stvx vr20,r11,r0 -_GLOBAL(_savevr_21) - li r11,-176 - stvx vr21,r11,r0 -_GLOBAL(_savevr_22) - li r11,-160 - stvx vr22,r11,r0 -_GLOBAL(_savevr_23) - li r11,-144 - stvx vr23,r11,r0 -_GLOBAL(_savevr_24) - li r11,-128 - stvx vr24,r11,r0 -_GLOBAL(_savevr_25) - li r11,-112 - stvx vr25,r11,r0 -_GLOBAL(_savevr_26) - li r11,-96 - stvx vr26,r11,r0 -_GLOBAL(_savevr_27) - li r11,-80 - stvx vr27,r11,r0 -_GLOBAL(_savevr_28) - li r11,-64 - stvx vr28,r11,r0 -_GLOBAL(_savevr_29) - li r11,-48 - stvx vr29,r11,r0 -_GLOBAL(_savevr_30) - li r11,-32 - stvx vr30,r11,r0 -_GLOBAL(_savevr_31) - li r11,-16 - stvx vr31,r11,r0 - blr - -_GLOBAL(_restvr_20) - li r11,-192 - lvx vr20,r11,r0 -_GLOBAL(_restvr_21) - li r11,-176 - lvx vr21,r11,r0 -_GLOBAL(_restvr_22) - li r11,-160 - lvx vr22,r11,r0 -_GLOBAL(_restvr_23) - li r11,-144 - lvx vr23,r11,r0 -_GLOBAL(_restvr_24) - li r11,-128 - lvx vr24,r11,r0 -_GLOBAL(_restvr_25) - li r11,-112 - lvx vr25,r11,r0 -_GLOBAL(_restvr_26) - li r11,-96 - lvx vr26,r11,r0 -_GLOBAL(_restvr_27) - li r11,-80 - lvx vr27,r11,r0 -_GLOBAL(_restvr_28) - li r11,-64 - lvx vr28,r11,r0 -_GLOBAL(_restvr_29) - li r11,-48 - lvx vr29,r11,r0 -_GLOBAL(_restvr_30) - li r11,-32 - lvx vr30,r11,r0 -_GLOBAL(_restvr_31) - li r11,-16 - lvx vr31,r11,r0 - blr - -#endif /* CONFIG_ALTIVEC */ - #else /* CONFIG_PPC64 */ .section ".text.save.restore","ax",@progbits @@ -437,111 +356,6 @@ _restgpr0_31: mtlr r0 blr -#ifdef CONFIG_ALTIVEC -/* Called with r0 pointing just beyond the end of the vector save area. */ - -.globl _savevr_20 -_savevr_20: - li r12,-192 - stvx vr20,r12,r0 -.globl _savevr_21 -_savevr_21: - li r12,-176 - stvx vr21,r12,r0 -.globl _savevr_22 -_savevr_22: - li r12,-160 - stvx vr22,r12,r0 -.globl _savevr_23 -_savevr_23: - li r12,-144 - stvx vr23,r12,r0 -.globl _savevr_24 -_savevr_24: - li r12,-128 - stvx vr24,r12,r0 -.globl _savevr_25 -_savevr_25: - li r12,-112 - stvx vr25,r12,r0 -.globl _savevr_26 -_savevr_26: - li r12,-96 - stvx vr26,r12,r0 -.globl _savevr_27 -_savevr_27: - li r12,-80 - stvx vr27,r12,r0 -.globl _savevr_28 -_savevr_28: - li r12,-64 - stvx vr28,r12,r0 -.globl _savevr_29 -_savevr_29: - li r12,-48 - stvx vr29,r12,r0 -.globl _savevr_30 -_savevr_30: - li r12,-32 - stvx vr30,r12,r0 -.globl _savevr_31 -_savevr_31: - li r12,-16 - stvx vr31,r12,r0 - blr - -.globl _restvr_20 -_restvr_20: - li r12,-192 - lvx vr20,r12,r0 -.globl _restvr_21 -_restvr_21: - li r12,-176 - lvx vr21,r12,r0 -.globl _restvr_22 -_restvr_22: - li r12,-160 - lvx vr22,r12,r0 -.globl _restvr_23 -_restvr_23: - li r12,-144 - lvx vr23,r12,r0 -.globl _restvr_24 -_restvr_24: - li r12,-128 - lvx vr24,r12,r0 -.globl _restvr_25 -_restvr_25: - li r12,-112 - lvx vr25,r12,r0 -.globl _restvr_26 -_restvr_26: - li r12,-96 - lvx vr26,r12,r0 -.globl _restvr_27 -_restvr_27: - li r12,-80 - lvx vr27,r12,r0 -.globl _restvr_28 -_restvr_28: - li r12,-64 - lvx vr28,r12,r0 -.globl _restvr_29 -_restvr_29: - li r12,-48 - lvx vr29,r12,r0 -.globl _restvr_30 -_restvr_30: - li r12,-32 - lvx vr30,r12,r0 -.globl _restvr_31 -_restvr_31: - li r12,-16 - lvx vr31,r12,r0 - blr - -#endif /* CONFIG_ALTIVEC */ - #endif /* CONFIG_PPC64 */ #endif diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c index 08490ecc465..e15c521846c 100644 --- a/arch/powerpc/lib/sstep.c +++ b/arch/powerpc/lib/sstep.c @@ -1395,7 +1395,7 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr) regs->gpr[rd] = byterev_4(val); goto ldst_done; -#ifdef CONFIG_PPC_FPU +#ifdef CONFIG_PPC_CPU case 535: /* lfsx */ case 567: /* lfsux */ if 
(!(regs->msr & MSR_FP)) diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c index 08c6f3185d4..b7293bba006 100644 --- a/arch/powerpc/mm/numa.c +++ b/arch/powerpc/mm/numa.c @@ -586,8 +586,8 @@ static int __cpuinit cpu_numa_callback(struct notifier_block *nfb, case CPU_UP_CANCELED: case CPU_UP_CANCELED_FROZEN: unmap_cpu_from_node(lcpu); - ret = NOTIFY_OK; break; + ret = NOTIFY_OK; #endif } return ret; diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c index 2396dda282c..74d1e780748 100644 --- a/arch/powerpc/perf/callchain.c +++ b/arch/powerpc/perf/callchain.c @@ -35,7 +35,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp) return 0; /* must be 16-byte aligned */ if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD)) return 0; - if (sp >= prev_sp + STACK_FRAME_MIN_SIZE) + if (sp >= prev_sp + STACK_FRAME_OVERHEAD) return 1; /* * sp could decrease when we jump off an interrupt stack diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index 846861a20b0..d3ee2e50a3a 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -749,22 +749,7 @@ static void power_pmu_read(struct perf_event *event) } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev); local64_add(delta, &event->count); - - /* - * A number of places program the PMC with (0x80000000 - period_left). - * We never want period_left to be less than 1 because we will program - * the PMC with a value >= 0x800000000 and an edge detected PMC will - * roll around to 0 before taking an exception. We have seen this - * on POWER8. - * - * To fix this, clamp the minimum value of period_left to 1. - */ - do { - prev = local64_read(&event->hw.period_left); - val = prev - delta; - if (val < 1) - val = 1; - } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev); + local64_sub(delta, &event->hw.period_left); } /* @@ -1342,9 +1327,6 @@ static int can_go_on_limited_pmc(struct perf_event *event, u64 ev, if (ppmu->limited_pmc_event(ev)) return 1; - if (ppmu->flags & PPMU_ARCH_207S) - mtspr(SPRN_MMCR2, 0); - /* * The requested event_id isn't on a limited PMC already; * see if any alternative code goes on a limited PMC. 
@@ -1439,7 +1421,7 @@ static int power_pmu_event_init(struct perf_event *event) if (has_branch_stack(event)) { /* PMU has BHRB enabled */ - if (!(ppmu->flags & PPMU_ARCH_207S)) + if (!(ppmu->flags & PPMU_BHRB)) return -EOPNOTSUPP; } diff --git a/arch/powerpc/perf/power8-pmu.c b/arch/powerpc/perf/power8-pmu.c index ee3b4048ab4..9aefaebedef 100644 --- a/arch/powerpc/perf/power8-pmu.c +++ b/arch/powerpc/perf/power8-pmu.c @@ -592,7 +592,7 @@ static struct power_pmu power8_pmu = { .get_constraint = power8_get_constraint, .get_alternatives = power8_get_alternatives, .disable_pmc = power8_disable_pmc, - .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_ARCH_207S, + .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_BHRB, .n_generic = ARRAY_SIZE(power8_generic_events), .generic_events = power8_generic_events, .attr_groups = power8_pmu_attr_groups, diff --git a/arch/powerpc/platforms/pseries/eeh_pseries.c b/arch/powerpc/platforms/pseries/eeh_pseries.c index 68f97d5a467..b456b157d33 100644 --- a/arch/powerpc/platforms/pseries/eeh_pseries.c +++ b/arch/powerpc/platforms/pseries/eeh_pseries.c @@ -400,7 +400,6 @@ static int pseries_eeh_get_state(struct eeh_pe *pe, int *state) } else { result = EEH_STATE_NOT_SUPPORT; } - break; default: result = EEH_STATE_NOT_SUPPORT; } diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c index bebe64ed5dc..9a432de363b 100644 --- a/arch/powerpc/platforms/pseries/hotplug-memory.c +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c @@ -158,7 +158,7 @@ static int pseries_remove_memory(struct device_node *np) static inline int pseries_remove_memblock(unsigned long base, unsigned int memblock_size) { - return 0; + return -EOPNOTSUPP; } static inline int pseries_remove_memory(struct device_node *np) { diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index d8d6eeca56b..97dcbea97a1 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -116,7 +116,6 @@ config S390 select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_TRACE_MCOUNT_TEST - select HAVE_FUTEX_CMPXCHG if FUTEX select HAVE_KERNEL_BZIP2 select HAVE_KERNEL_GZIP select HAVE_KERNEL_LZMA diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c index fd104db9cea..2a245b55bb7 100644 --- a/arch/s390/crypto/aes_s390.c +++ b/arch/s390/crypto/aes_s390.c @@ -818,9 +818,6 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, long func, else memcpy(walk->iv, ctrptr, AES_BLOCK_SIZE); spin_unlock(&ctrblk_lock); - } else { - if (!nbytes) - memcpy(walk->iv, ctrptr, AES_BLOCK_SIZE); } /* * final block may be < AES_BLOCK_SIZE, copy only nbytes diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c index f2d6cccddcf..2d96e68febb 100644 --- a/arch/s390/crypto/des_s390.c +++ b/arch/s390/crypto/des_s390.c @@ -429,9 +429,6 @@ static int ctr_desall_crypt(struct blkcipher_desc *desc, long func, else memcpy(walk->iv, ctrptr, DES_BLOCK_SIZE); spin_unlock(&ctrblk_lock); - } else { - if (!nbytes) - memcpy(walk->iv, ctrptr, DES_BLOCK_SIZE); } /* final block may be < DES_BLOCK_SIZE, copy only nbytes */ if (nbytes) { diff --git a/arch/s390/include/asm/ccwdev.h b/arch/s390/include/asm/ccwdev.h index 31b5ca8f8c3..f201af8be58 100644 --- a/arch/s390/include/asm/ccwdev.h +++ b/arch/s390/include/asm/ccwdev.h @@ -219,7 +219,7 @@ extern void ccw_device_get_id(struct ccw_device *, struct ccw_dev_id *); #define to_ccwdev(n) container_of(n, struct ccw_device, dev) #define to_ccwdrv(n) container_of(n, struct ccw_driver, driver) -extern struct 
ccw_device *ccw_device_probe_console(struct ccw_driver *); +extern struct ccw_device *ccw_device_probe_console(void); extern void ccw_device_wait_idle(struct ccw_device *); extern int ccw_device_force_console(struct ccw_device *); diff --git a/arch/s390/include/asm/lowcore.h b/arch/s390/include/asm/lowcore.h index 2bed4f02a55..bbf8141408c 100644 --- a/arch/s390/include/asm/lowcore.h +++ b/arch/s390/include/asm/lowcore.h @@ -142,9 +142,9 @@ struct _lowcore { __u8 pad_0x02fc[0x0300-0x02fc]; /* 0x02fc */ /* Interrupt response block */ - __u8 irb[96]; /* 0x0300 */ + __u8 irb[64]; /* 0x0300 */ - __u8 pad_0x0360[0x0e00-0x0360]; /* 0x0360 */ + __u8 pad_0x0340[0x0e00-0x0340]; /* 0x0340 */ /* * 0xe00 contains the address of the IPL Parameter Information @@ -288,13 +288,12 @@ struct _lowcore { __u8 pad_0x03a0[0x0400-0x03a0]; /* 0x03a0 */ /* Interrupt response block. */ - __u8 irb[96]; /* 0x0400 */ - __u8 pad_0x0460[0x0480-0x0460]; /* 0x0460 */ + __u8 irb[64]; /* 0x0400 */ /* Per cpu primary space access list */ - __u32 paste[16]; /* 0x0480 */ + __u32 paste[16]; /* 0x0440 */ - __u8 pad_0x04c0[0x0e00-0x04c0]; /* 0x04c0 */ + __u8 pad_0x0480[0x0e00-0x0480]; /* 0x0480 */ /* * 0xe00 contains the address of the IPL Parameter Information diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c index 9677d935583..a314c57f4e9 100644 --- a/arch/s390/kernel/ptrace.c +++ b/arch/s390/kernel/ptrace.c @@ -314,9 +314,7 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data) * psw and gprs are stored on the stack */ if (addr == (addr_t) &dummy->regs.psw.mask && - (((data^psw_user_bits) & ~PSW_MASK_USER) || - (((data^psw_user_bits) & PSW_MASK_ASC) && - ((data|psw_user_bits) & PSW_MASK_ASC) == PSW_MASK_ASC) || + ((data & ~PSW_MASK_USER) != psw_user_bits || ((data & PSW_MASK_EA) && !(data & PSW_MASK_BA)))) /* Invalid psw mask. */ return -EINVAL; @@ -629,10 +627,7 @@ static int __poke_user_compat(struct task_struct *child, */ if (addr == (addr_t) &dummy32->regs.psw.mask) { /* Build a 64 bit psw mask from 31 bit mask. */ - if (((tmp^psw32_user_bits) & ~PSW32_MASK_USER) || - (((tmp^psw32_user_bits) & PSW32_MASK_ASC) && - ((tmp|psw32_user_bits) & PSW32_MASK_ASC) - == PSW32_MASK_ASC)) + if ((tmp & ~PSW32_MASK_USER) != psw32_user_bits) /* Invalid psw mask. 
*/ return -EINVAL; regs->psw.mask = (regs->psw.mask & ~PSW_MASK_USER) | diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index bc79ab00536..5c948177529 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -71,7 +71,6 @@ static int __interrupt_is_deliverable(struct kvm_vcpu *vcpu, return 0; if (vcpu->arch.sie_block->gcr[0] & 0x2000ul) return 1; - return 0; case KVM_S390_INT_EMERGENCY: if (psw_extint_disabled(vcpu)) return 0; diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c index 3bc6b7e43b2..1919db6c060 100644 --- a/arch/s390/net/bpf_jit_comp.c +++ b/arch/s390/net/bpf_jit_comp.c @@ -243,6 +243,7 @@ static void bpf_jit_noleaks(struct bpf_jit *jit, struct sock_filter *filter) case BPF_S_LD_W_IND: case BPF_S_LD_H_IND: case BPF_S_LD_B_IND: + case BPF_S_LDX_B_MSH: case BPF_S_LD_IMM: case BPF_S_LD_MEM: case BPF_S_MISC_TXA: diff --git a/arch/score/Kconfig b/arch/score/Kconfig index 91182e95b88..c8def8bc902 100644 --- a/arch/score/Kconfig +++ b/arch/score/Kconfig @@ -109,6 +109,3 @@ source "security/Kconfig" source "crypto/Kconfig" source "lib/Kconfig" - -config NO_IOMEM - def_bool y diff --git a/arch/score/Makefile b/arch/score/Makefile index 9e3e060290e..974aefe8612 100644 --- a/arch/score/Makefile +++ b/arch/score/Makefile @@ -20,8 +20,8 @@ cflags-y += -G0 -pipe -mel -mnhwloop -D__SCOREEL__ \ # KBUILD_AFLAGS += $(cflags-y) KBUILD_CFLAGS += $(cflags-y) -KBUILD_AFLAGS_MODULE += -KBUILD_CFLAGS_MODULE += +KBUILD_AFLAGS_MODULE += -mlong-calls +KBUILD_CFLAGS_MODULE += -mlong-calls LDFLAGS += --oformat elf32-littlescore LDFLAGS_vmlinux += -G0 -static -nostdlib diff --git a/arch/score/include/asm/checksum.h b/arch/score/include/asm/checksum.h index 961bd64015a..f909ac3144a 100644 --- a/arch/score/include/asm/checksum.h +++ b/arch/score/include/asm/checksum.h @@ -184,57 +184,48 @@ static inline __sum16 csum_ipv6_magic(const struct in6_addr *saddr, __wsum sum) { __asm__ __volatile__( - ".set\tvolatile\t\t\t# csum_ipv6_magic\n\t" - "add\t%0, %0, %5\t\t\t# proto (long in network byte order)\n\t" - "cmp.c\t%5, %0\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0, %6\t\t\t# csum\n\t" - "cmp.c\t%6, %0\n\t" - "lw\t%1, [%2, 0]\t\t\t# four words source address\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0, %1\n\t" - "cmp.c\t%1, %0\n\t" - "1:lw\t%1, [%2, 4]\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0, %1\n\t" - "cmp.c\t%1, %0\n\t" - "lw\t%1, [%2,8]\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0, %1\n\t" - "cmp.c\t%1, %0\n\t" - "lw\t%1, [%2, 12]\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0,%1\n\t" - "cmp.c\t%1, %0\n\t" - "lw\t%1, [%3, 0]\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0, %1\n\t" - "cmp.c\t%1, %0\n\t" - "lw\t%1, [%3, 4]\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0, %1\n\t" - "cmp.c\t%1, %0\n\t" - "lw\t%1, [%3, 8]\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0, %1\n\t" - "cmp.c\t%1, %0\n\t" - "lw\t%1, [%3, 12]\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:add\t%0, %0, %1\n\t" - "cmp.c\t%1, %0\n\t" - "bleu 1f\n\t" - "addi\t%0, 0x1\n\t" - "1:\n\t" - ".set\toptimize" + ".set\tnoreorder\t\t\t# csum_ipv6_magic\n\t" + ".set\tnoat\n\t" + "addu\t%0, %5\t\t\t# proto (long in network byte order)\n\t" + "sltu\t$1, %0, %5\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %6\t\t\t# csum\n\t" + "sltu\t$1, %0, %6\n\t" + "lw\t%1, 0(%2)\t\t\t# four words source address\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %1\n\t" + "sltu\t$1, %0, %1\n\t" + "lw\t%1, 
4(%2)\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %1\n\t" + "sltu\t$1, %0, %1\n\t" + "lw\t%1, 8(%2)\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %1\n\t" + "sltu\t$1, %0, %1\n\t" + "lw\t%1, 12(%2)\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %1\n\t" + "sltu\t$1, %0, %1\n\t" + "lw\t%1, 0(%3)\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %1\n\t" + "sltu\t$1, %0, %1\n\t" + "lw\t%1, 4(%3)\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %1\n\t" + "sltu\t$1, %0, %1\n\t" + "lw\t%1, 8(%3)\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %1\n\t" + "sltu\t$1, %0, %1\n\t" + "lw\t%1, 12(%3)\n\t" + "addu\t%0, $1\n\t" + "addu\t%0, %1\n\t" + "sltu\t$1, %0, %1\n\t" + "addu\t%0, $1\t\t\t# Add final carry\n\t" + ".set\tnoat\n\t" + ".set\tnoreorder" : "=r" (sum), "=r" (proto) : "r" (saddr), "r" (daddr), "0" (htonl(len)), "1" (htonl(proto)), "r" (sum)); diff --git a/arch/score/include/asm/io.h b/arch/score/include/asm/io.h index 574c8827abe..fbbfd7132e3 100644 --- a/arch/score/include/asm/io.h +++ b/arch/score/include/asm/io.h @@ -5,4 +5,5 @@ #define virt_to_bus virt_to_phys #define bus_to_virt phys_to_virt + #endif /* _ASM_SCORE_IO_H */ diff --git a/arch/score/include/asm/pgalloc.h b/arch/score/include/asm/pgalloc.h index 716b3fd1d86..059a61b7071 100644 --- a/arch/score/include/asm/pgalloc.h +++ b/arch/score/include/asm/pgalloc.h @@ -2,7 +2,7 @@ #define _ASM_SCORE_PGALLOC_H #include <linux/mm.h> -#include <linux/highmem.h> + static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) { diff --git a/arch/score/kernel/entry.S b/arch/score/kernel/entry.S index befb87d30a8..7234ed09b7b 100644 --- a/arch/score/kernel/entry.S +++ b/arch/score/kernel/entry.S @@ -264,7 +264,7 @@ resume_kernel: disable_irq lw r8, [r28, TI_PRE_COUNT] cmpz.c r8 - bne restore_all + bne r8, restore_all need_resched: lw r8, [r28, TI_FLAGS] andri.c r9, r8, _TIF_NEED_RESCHED @@ -415,7 +415,7 @@ ENTRY(handle_sys) sw r9, [r0, PT_EPC] cmpi.c r27, __NR_syscalls # check syscall number - bcs illegal_syscall + bgeu illegal_syscall slli r8, r27, 2 # get syscall routine la r11, sys_call_table diff --git a/arch/score/kernel/process.c b/arch/score/kernel/process.c index a1519ad3d49..f4c6d02421d 100644 --- a/arch/score/kernel/process.c +++ b/arch/score/kernel/process.c @@ -78,8 +78,8 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, p->thread.reg0 = (unsigned long) childregs; if (unlikely(p->flags & PF_KTHREAD)) { memset(childregs, 0, sizeof(struct pt_regs)); - p->thread.reg12 = usp; - p->thread.reg13 = arg; + p->thread->reg12 = usp; + p->thread->reg13 = arg; p->thread.reg3 = (unsigned long) ret_from_kernel_thread; } else { *childregs = *current_pt_regs(); diff --git a/arch/score/kernel/vmlinux.lds.S b/arch/score/kernel/vmlinux.lds.S index 7274b5c4287..eebcbaa4e97 100644 --- a/arch/score/kernel/vmlinux.lds.S +++ b/arch/score/kernel/vmlinux.lds.S @@ -49,7 +49,6 @@ SECTIONS } . 
= ALIGN(16); - _sdata = .; /* Start of data section */ RODATA EXCEPTION_TABLE(16) diff --git a/arch/sh/kernel/dumpstack.c b/arch/sh/kernel/dumpstack.c index 8dfe645bcc4..b959f559260 100644 --- a/arch/sh/kernel/dumpstack.c +++ b/arch/sh/kernel/dumpstack.c @@ -115,7 +115,7 @@ static int print_trace_stack(void *data, char *name) */ static void print_trace_address(void *data, unsigned long addr, int reliable) { - printk("%s", (char *)data); + printk(data); printk_address(addr, reliable); } diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig index 03a1bc3c3dd..9ac9f166633 100644 --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig @@ -25,7 +25,7 @@ config SPARC select RTC_DRV_M48T59 select HAVE_DMA_ATTRS select HAVE_DMA_API_DEBUG - select HAVE_ARCH_JUMP_LABEL if SPARC64 + select HAVE_ARCH_JUMP_LABEL select HAVE_GENERIC_HARDIRQS select GENERIC_IRQ_SHOW select ARCH_WANT_IPC_PARSE_VERSION @@ -77,7 +77,6 @@ config SPARC64 select ARCH_HAVE_NMI_SAFE_CMPXCHG select HAVE_C_RECORDMCOUNT select NO_BOOTMEM - select ARCH_SUPPORTS_ATOMIC_RMW config ARCH_DEFCONFIG string diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 6663604a902..dfb0019bf05 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -24,8 +24,7 @@ /* The kernel image occupies 0x4000000 to 0x6000000 (4MB --> 96MB). * The page copy blockops can use 0x6000000 to 0x8000000. - * The 8K TSB is mapped in the 0x8000000 to 0x8400000 range. - * The 4M TSB is mapped in the 0x8400000 to 0x8800000 range. + * The TSB is mapped in the 0x8000000 to 0xa000000 range. * The PROM resides in an area spanning 0xf0000000 to 0x100000000. * The vmalloc area spans 0x100000000 to 0x200000000. * Since modules need to be in the lowest 32-bits of the address space, @@ -34,8 +33,7 @@ * 0x400000000. 
*/ #define TLBTEMP_BASE _AC(0x0000000006000000,UL) -#define TSBMAP_8K_BASE _AC(0x0000000008000000,UL) -#define TSBMAP_4M_BASE _AC(0x0000000008400000,UL) +#define TSBMAP_BASE _AC(0x0000000008000000,UL) #define MODULES_VADDR _AC(0x0000000010000000,UL) #define MODULES_LEN _AC(0x00000000e0000000,UL) #define MODULES_END _AC(0x00000000f0000000,UL) diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h index 1a4bb971e06..f0d6a9700f4 100644 --- a/arch/sparc/include/asm/tlbflush_64.h +++ b/arch/sparc/include/asm/tlbflush_64.h @@ -35,8 +35,6 @@ static inline void flush_tlb_range(struct vm_area_struct *vma, { } -void flush_tlb_kernel_range(unsigned long start, unsigned long end); - #define __HAVE_ARCH_ENTER_LAZY_MMU_MODE extern void flush_tlb_pending(void); @@ -51,6 +49,11 @@ extern void __flush_tlb_kernel_range(unsigned long start, unsigned long end); #ifndef CONFIG_SMP +#define flush_tlb_kernel_range(start,end) \ +do { flush_tsb_kernel_range(start,end); \ + __flush_tlb_kernel_range(start,end); \ +} while (0) + static inline void global_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr) { __flush_tlb_page(CTX_HWBITS(mm->context), vaddr); @@ -61,6 +64,11 @@ static inline void global_flush_tlb_page(struct mm_struct *mm, unsigned long vad extern void smp_flush_tlb_kernel_range(unsigned long start, unsigned long end); extern void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr); +#define flush_tlb_kernel_range(start, end) \ +do { flush_tsb_kernel_range(start,end); \ + smp_flush_tlb_kernel_range(start, end); \ +} while (0) + #define global_flush_tlb_page(mm, vaddr) \ smp_flush_tlb_page(mm, vaddr) diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h index ad7e178337f..e562d3caee5 100644 --- a/arch/sparc/include/asm/uaccess_64.h +++ b/arch/sparc/include/asm/uaccess_64.h @@ -262,8 +262,8 @@ extern unsigned long __must_check __clear_user(void __user *, unsigned long); extern __must_check long strlen_user(const char __user *str); extern __must_check long strnlen_user(const char __user *str, long n); -#define __copy_to_user_inatomic __copy_to_user -#define __copy_from_user_inatomic __copy_from_user +#define __copy_to_user_inatomic ___copy_to_user +#define __copy_from_user_inatomic ___copy_from_user struct pt_regs; extern unsigned long compute_effective_address(struct pt_regs *, diff --git a/arch/sparc/kernel/ldc.c b/arch/sparc/kernel/ldc.c index fa4c900a0d1..54df554b82d 100644 --- a/arch/sparc/kernel/ldc.c +++ b/arch/sparc/kernel/ldc.c @@ -1336,7 +1336,7 @@ int ldc_connect(struct ldc_channel *lp) if (!(lp->flags & LDC_FLAG_ALLOCED_QUEUES) || !(lp->flags & LDC_FLAG_REGISTERED_QUEUES) || lp->hs_state != LDC_HS_OPEN) - err = ((lp->hs_state > LDC_HS_OPEN) ? 
0 : -EINVAL); + err = -EINVAL; else err = start_handshake(lp); diff --git a/arch/sparc/kernel/pci.c b/arch/sparc/kernel/pci.c index 906cbf0f860..baf4366e2d6 100644 --- a/arch/sparc/kernel/pci.c +++ b/arch/sparc/kernel/pci.c @@ -399,8 +399,8 @@ static void apb_fake_ranges(struct pci_dev *dev, apb_calc_first_last(map, &first, &last); res = bus->resource[1]; res->flags = IORESOURCE_MEM; - region.start = (first << 29); - region.end = (last << 29) + ((1 << 29) - 1); + region.start = (first << 21); + region.end = (last << 21) + ((1 << 21) - 1); pcibios_bus_to_resource(dev, res, ®ion); } diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c index b9cc9763faf..baebab21549 100644 --- a/arch/sparc/kernel/process_64.c +++ b/arch/sparc/kernel/process_64.c @@ -57,12 +57,9 @@ void arch_cpu_idle(void) { if (tlb_type != hypervisor) { touch_nmi_watchdog(); - local_irq_enable(); } else { unsigned long pstate; - local_irq_enable(); - /* The sun4v sleeping code requires that we have PSTATE.IE cleared over * the cpu sleep hypervisor call. */ @@ -84,6 +81,7 @@ void arch_cpu_idle(void) : "=&r" (pstate) : "i" (PSTATE_IE)); } + local_irq_enable(); } #ifdef CONFIG_HOTPLUG_CPU diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c index 8565ecd7d48..77539eda928 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -150,7 +150,7 @@ void cpu_panic(void) #define NUM_ROUNDS 64 /* magic value */ #define NUM_ITERS 5 /* likewise */ -static DEFINE_RAW_SPINLOCK(itc_sync_lock); +static DEFINE_SPINLOCK(itc_sync_lock); static unsigned long go[SLAVE + 1]; #define DEBUG_TICK_SYNC 0 @@ -258,7 +258,7 @@ static void smp_synchronize_one_tick(int cpu) go[MASTER] = 0; membar_safe("#StoreLoad"); - raw_spin_lock_irqsave(&itc_sync_lock, flags); + spin_lock_irqsave(&itc_sync_lock, flags); { for (i = 0; i < NUM_ROUNDS*NUM_ITERS; i++) { while (!go[MASTER]) @@ -269,7 +269,7 @@ static void smp_synchronize_one_tick(int cpu) membar_safe("#StoreLoad"); } } - raw_spin_unlock_irqrestore(&itc_sync_lock, flags); + spin_unlock_irqrestore(&itc_sync_lock, flags); } #if defined(CONFIG_SUN_LDOMS) && defined(CONFIG_HOTPLUG_CPU) diff --git a/arch/sparc/kernel/sys32.S b/arch/sparc/kernel/sys32.S index d066eb18650..f7c72b6efc2 100644 --- a/arch/sparc/kernel/sys32.S +++ b/arch/sparc/kernel/sys32.S @@ -44,7 +44,7 @@ SIGN1(sys32_timer_settime, compat_sys_timer_settime, %o1) SIGN1(sys32_io_submit, compat_sys_io_submit, %o1) SIGN1(sys32_mq_open, compat_sys_mq_open, %o1) SIGN1(sys32_select, compat_sys_select, %o0) -SIGN1(sys32_futex, compat_sys_futex, %o1) +SIGN3(sys32_futex, compat_sys_futex, %o1, %o2, %o5) SIGN1(sys32_recvfrom, compat_sys_recvfrom, %o0) SIGN1(sys32_recvmsg, compat_sys_recvmsg, %o0) SIGN1(sys32_sendmsg, compat_sys_sendmsg, %o0) diff --git a/arch/sparc/kernel/syscalls.S b/arch/sparc/kernel/syscalls.S index c79c687fbe1..73ec8a798d9 100644 --- a/arch/sparc/kernel/syscalls.S +++ b/arch/sparc/kernel/syscalls.S @@ -189,8 +189,7 @@ linux_sparc_syscall32: mov %i0, %l5 ! IEU1 5: call %l7 ! CTI Group brk forced srl %i5, 0, %o5 ! IEU1 - ba,pt %xcc, 3f - sra %o0, 0, %o0 + ba,a,pt %xcc, 3f /* Linux native system calls enter here... 
*/ .align 32 @@ -218,6 +217,7 @@ linux_sparc_syscall: 3: stx %o0, [%sp + PTREGS_OFF + PT_V9_I0] ret_sys_call: ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %g3 + sra %o0, 0, %o0 mov %ulo(TSTATE_XCARRY | TSTATE_ICARRY), %g2 sllx %g2, 32, %g2 diff --git a/arch/sparc/kernel/unaligned_64.c b/arch/sparc/kernel/unaligned_64.c index 4db8898199f..8201c25e766 100644 --- a/arch/sparc/kernel/unaligned_64.c +++ b/arch/sparc/kernel/unaligned_64.c @@ -163,23 +163,17 @@ static unsigned long *fetch_reg_addr(unsigned int reg, struct pt_regs *regs) unsigned long compute_effective_address(struct pt_regs *regs, unsigned int insn, unsigned int rd) { - int from_kernel = (regs->tstate & TSTATE_PRIV) != 0; unsigned int rs1 = (insn >> 14) & 0x1f; unsigned int rs2 = insn & 0x1f; - unsigned long addr; + int from_kernel = (regs->tstate & TSTATE_PRIV) != 0; if (insn & 0x2000) { maybe_flush_windows(rs1, 0, rd, from_kernel); - addr = (fetch_reg(rs1, regs) + sign_extend_imm13(insn)); + return (fetch_reg(rs1, regs) + sign_extend_imm13(insn)); } else { maybe_flush_windows(rs1, rs2, rd, from_kernel); - addr = (fetch_reg(rs1, regs) + fetch_reg(rs2, regs)); + return (fetch_reg(rs1, regs) + fetch_reg(rs2, regs)); } - - if (!from_kernel && test_thread_flag(TIF_32BIT)) - addr &= 0xffffffff; - - return addr; } /* This is just to make gcc think die_if_kernel does return... */ diff --git a/arch/sparc/lib/NG2memcpy.S b/arch/sparc/lib/NG2memcpy.S index 30eee6e8a81..2c20ad63ddb 100644 --- a/arch/sparc/lib/NG2memcpy.S +++ b/arch/sparc/lib/NG2memcpy.S @@ -236,7 +236,6 @@ FUNC_NAME: /* %o0=dst, %o1=src, %o2=len */ */ VISEntryHalf - membar #Sync alignaddr %o1, %g0, %g0 add %o1, (64 - 1), %o4 diff --git a/arch/sparc/math-emu/math_32.c b/arch/sparc/math-emu/math_32.c index 5ce8f2f6460..aa4d55b0bdf 100644 --- a/arch/sparc/math-emu/math_32.c +++ b/arch/sparc/math-emu/math_32.c @@ -499,7 +499,7 @@ static int do_one_mathemu(u32 insn, unsigned long *pfsr, unsigned long *fregs) case 0: fsr = *pfsr; if (IR == -1) IR = 2; /* fcc is always fcc0 */ - fsr &= ~0xc00; fsr |= (IR << 10); + fsr &= ~0xc00; fsr |= (IR << 10); break; *pfsr = fsr; break; case 1: rd->s = IR; break; diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c index 3841a081beb..2ebec263d68 100644 --- a/arch/sparc/mm/fault_64.c +++ b/arch/sparc/mm/fault_64.c @@ -95,51 +95,38 @@ static unsigned int get_user_insn(unsigned long tpc) pte_t *ptep, pte; unsigned long pa; u32 insn = 0; + unsigned long pstate; - if (pgd_none(*pgdp) || unlikely(pgd_bad(*pgdp))) - goto out; + if (pgd_none(*pgdp)) + goto outret; pudp = pud_offset(pgdp, tpc); - if (pud_none(*pudp) || unlikely(pud_bad(*pudp))) - goto out; + if (pud_none(*pudp)) + goto outret; + pmdp = pmd_offset(pudp, tpc); + if (pmd_none(*pmdp)) + goto outret; /* This disables preemption for us as well. */ - local_irq_disable(); - - pmdp = pmd_offset(pudp, tpc); - if (pmd_none(*pmdp) || unlikely(pmd_bad(*pmdp))) - goto out_irq_enable; + __asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate)); + __asm__ __volatile__("wrpr %0, %1, %%pstate" + : : "r" (pstate), "i" (PSTATE_IE)); + ptep = pte_offset_map(pmdp, tpc); + pte = *ptep; + if (!pte_present(pte)) + goto out; -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - if (pmd_trans_huge(*pmdp)) { - if (pmd_trans_splitting(*pmdp)) - goto out_irq_enable; + pa = (pte_pfn(pte) << PAGE_SHIFT); + pa += (tpc & ~PAGE_MASK); - pa = pmd_pfn(*pmdp) << PAGE_SHIFT; - pa += tpc & ~HPAGE_MASK; + /* Use phys bypass so we don't pollute dtlb/dcache. 
*/ + __asm__ __volatile__("lduwa [%1] %2, %0" + : "=r" (insn) + : "r" (pa), "i" (ASI_PHYS_USE_EC)); - /* Use phys bypass so we don't pollute dtlb/dcache. */ - __asm__ __volatile__("lduwa [%1] %2, %0" - : "=r" (insn) - : "r" (pa), "i" (ASI_PHYS_USE_EC)); - } else -#endif - { - ptep = pte_offset_map(pmdp, tpc); - pte = *ptep; - if (pte_present(pte)) { - pa = (pte_pfn(pte) << PAGE_SHIFT); - pa += (tpc & ~PAGE_MASK); - - /* Use phys bypass so we don't pollute dtlb/dcache. */ - __asm__ __volatile__("lduwa [%1] %2, %0" - : "=r" (insn) - : "r" (pa), "i" (ASI_PHYS_USE_EC)); - } - pte_unmap(ptep); - } -out_irq_enable: - local_irq_enable(); out: + pte_unmap(ptep); + __asm__ __volatile__("wrpr %0, 0x0, %%pstate" : : "r" (pstate)); +outret: return insn; } @@ -165,8 +152,7 @@ show_signal_msg(struct pt_regs *regs, int sig, int code, } static void do_fault_siginfo(int code, int sig, struct pt_regs *regs, - unsigned long fault_addr, unsigned int insn, - int fault_code) + unsigned int insn, int fault_code) { unsigned long addr; siginfo_t info; @@ -174,18 +160,10 @@ static void do_fault_siginfo(int code, int sig, struct pt_regs *regs, info.si_code = code; info.si_signo = sig; info.si_errno = 0; - if (fault_code & FAULT_CODE_ITLB) { + if (fault_code & FAULT_CODE_ITLB) addr = regs->tpc; - } else { - /* If we were able to probe the faulting instruction, use it - * to compute a precise fault address. Otherwise use the fault - * time provided address which may only have page granularity. - */ - if (insn) - addr = compute_effective_address(regs, insn, 0); - else - addr = fault_addr; - } + else + addr = compute_effective_address(regs, insn, 0); info.si_addr = (void __user *) addr; info.si_trapno = 0; @@ -260,7 +238,7 @@ static void __kprobes do_kernel_fault(struct pt_regs *regs, int si_code, /* The si_code was set to make clear whether * this was a SEGV_MAPERR or SEGV_ACCERR fault. */ - do_fault_siginfo(si_code, SIGSEGV, regs, address, insn, fault_code); + do_fault_siginfo(si_code, SIGSEGV, regs, insn, fault_code); return; } @@ -280,6 +258,18 @@ static void noinline __kprobes bogus_32bit_fault_tpc(struct pt_regs *regs) show_regs(regs); } +static void noinline __kprobes bogus_32bit_fault_address(struct pt_regs *regs, + unsigned long addr) +{ + static int times; + + if (times++ < 10) + printk(KERN_ERR "FAULT[%s:%d]: 32-bit process " + "reports 64-bit fault address [%lx]\n", + current->comm, current->pid, addr); + show_regs(regs); +} + asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) { struct mm_struct *mm = current->mm; @@ -308,8 +298,10 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) goto intr_or_no_mm; } } - if (unlikely((address >> 32) != 0)) + if (unlikely((address >> 32) != 0)) { + bogus_32bit_fault_address(regs, address); goto intr_or_no_mm; + } } if (regs->tstate & TSTATE_PRIV) { @@ -529,7 +521,7 @@ do_sigbus: * Send a sigbus, regardless of whether we were in kernel * or user mode. */ - do_fault_siginfo(BUS_ADRERR, SIGBUS, regs, address, insn, fault_code); + do_fault_siginfo(BUS_ADRERR, SIGBUS, regs, insn, fault_code); /* Kernel mode? Handle exceptions or die */ if (regs->tstate & TSTATE_PRIV) diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index a751023dbdc..04fd55a6e46 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -350,10 +350,6 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * mm = vma->vm_mm; - /* Don't insert a non-valid PTE into the TSB, we'll deadlock. 
*/ - if (!pte_accessible(mm, pte)) - return; - spin_lock_irqsave(&mm->context.lock, flags); #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE) @@ -2768,26 +2764,3 @@ void hugetlb_setup(struct pt_regs *regs) } } #endif - -#ifdef CONFIG_SMP -#define do_flush_tlb_kernel_range smp_flush_tlb_kernel_range -#else -#define do_flush_tlb_kernel_range __flush_tlb_kernel_range -#endif - -void flush_tlb_kernel_range(unsigned long start, unsigned long end) -{ - if (start < HI_OBP_ADDRESS && end > LOW_OBP_ADDRESS) { - if (start < LOW_OBP_ADDRESS) { - flush_tsb_kernel_range(start, LOW_OBP_ADDRESS); - do_flush_tlb_kernel_range(start, LOW_OBP_ADDRESS); - } - if (end > HI_OBP_ADDRESS) { - flush_tsb_kernel_range(end, HI_OBP_ADDRESS); - do_flush_tlb_kernel_range(end, HI_OBP_ADDRESS); - } - } else { - flush_tsb_kernel_range(start, end); - do_flush_tlb_kernel_range(start, end); - } -} diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c index 71d99a6c75a..2cc3bce5ee9 100644 --- a/arch/sparc/mm/tsb.c +++ b/arch/sparc/mm/tsb.c @@ -133,19 +133,7 @@ static void setup_tsb_params(struct mm_struct *mm, unsigned long tsb_idx, unsign mm->context.tsb_block[tsb_idx].tsb_nentries = tsb_bytes / sizeof(struct tsb); - switch (tsb_idx) { - case MM_TSB_BASE: - base = TSBMAP_8K_BASE; - break; -#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE) - case MM_TSB_HUGE: - base = TSBMAP_4M_BASE; - break; -#endif - default: - BUG(); - } - + base = TSBMAP_BASE; tte = pgprot_val(PAGE_KERNEL_LOCKED); tsb_paddr = __pa(mm->context.tsb_block[tsb_idx].tsb); BUG_ON(tsb_paddr & (tsb_bytes - 1UL)); diff --git a/arch/sparc/net/bpf_jit_comp.c b/arch/sparc/net/bpf_jit_comp.c index 224fc0c71b8..fd95862c65a 100644 --- a/arch/sparc/net/bpf_jit_comp.c +++ b/arch/sparc/net/bpf_jit_comp.c @@ -83,9 +83,9 @@ static void bpf_flush_icache(void *start_, void *end_) #define BNE (F2(0, 2) | CONDNE) #ifdef CONFIG_SPARC64 -#define BE_PTR (F2(0, 1) | CONDE | (2 << 20)) +#define BNE_PTR (F2(0, 1) | CONDNE | (2 << 20)) #else -#define BE_PTR BE +#define BNE_PTR BNE #endif #define SETHI(K, REG) \ @@ -600,7 +600,7 @@ void bpf_jit_compile(struct sk_filter *fp) case BPF_S_ANC_IFINDEX: emit_skb_loadptr(dev, r_A); emit_cmpi(r_A, 0); - emit_branch(BE_PTR, cleanup_addr + 4); + emit_branch(BNE_PTR, cleanup_addr + 4); emit_nop(); emit_load32(r_A, struct net_device, ifindex, r_A); break; @@ -613,7 +613,7 @@ void bpf_jit_compile(struct sk_filter *fp) case BPF_S_ANC_HATYPE: emit_skb_loadptr(dev, r_A); emit_cmpi(r_A, 0); - emit_branch(BE_PTR, cleanup_addr + 4); + emit_branch(BNE_PTR, cleanup_addr + 4); emit_nop(); emit_load16(r_A, struct net_device, type, r_A); break; diff --git a/arch/unicore32/mm/alignment.c b/arch/unicore32/mm/alignment.c index 24e836023e6..de7dc5fdd58 100644 --- a/arch/unicore32/mm/alignment.c +++ b/arch/unicore32/mm/alignment.c @@ -21,7 +21,6 @@ #include <linux/sched.h> #include <linux/uaccess.h> -#include <asm/pgtable.h> #include <asm/tlbflush.h> #include <asm/unaligned.h> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 4e5b80d883c..fe120da2562 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -121,7 +121,6 @@ config X86 select OLD_SIGACTION if X86_32 select COMPAT_OLD_SIGACTION if IA32_EMULATION select RTC_LIB - select ARCH_SUPPORTS_ATOMIC_RMW config INSTRUCTION_DECODER def_bool y @@ -952,27 +951,10 @@ config VM86 default y depends on X86_32 ---help--- - This option is required by programs like DOSEMU to run - 16-bit real mode legacy code on x86 processors. 
It also may - be needed by software like XFree86 to initialize some video - cards via BIOS. Disabling this option saves about 6K. - -config X86_16BIT - bool "Enable support for 16-bit segments" if EXPERT - default y - ---help--- - This option is required by programs like Wine to run 16-bit - protected mode legacy code on x86 processors. Disabling - this option saves about 300 bytes on i386, or around 6K text - plus 16K runtime memory on x86-64, - -config X86_ESPFIX32 - def_bool y - depends on X86_16BIT && X86_32 - -config X86_ESPFIX64 - def_bool y - depends on X86_16BIT && X86_64 + This option is required by programs like DOSEMU to run 16-bit legacy + code on X86 processors. It also may be needed by software like + XFree86 to initialize some video cards via BIOS. Disabling this + option saves about 6k. config TOSHIBA tristate "Toshiba Laptop support" @@ -1578,7 +1560,6 @@ config EFI config EFI_STUB bool "EFI stub support" depends on EFI - select RELOCATABLE ---help--- This kernel feature allows a bzImage to be loaded directly by EFI firmware without the use of a bootloader. diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c index 1308beed7ab..d606463aa6d 100644 --- a/arch/x86/boot/compressed/eboot.c +++ b/arch/x86/boot/compressed/eboot.c @@ -865,9 +865,6 @@ fail: * Because the x86 boot code expects to be passed a boot_params we * need to create one ourselves (usually the bootloader would create * one for us). - * - * The caller is responsible for filling out ->code32_start in the - * returned boot_params. */ struct boot_params *make_boot_params(void *handle, efi_system_table_t *_table) { @@ -924,6 +921,8 @@ struct boot_params *make_boot_params(void *handle, efi_system_table_t *_table) hdr->vid_mode = 0xffff; hdr->boot_flag = 0xAA55; + hdr->code32_start = (__u64)(unsigned long)image->image_base; + hdr->type_of_loader = 0x21; /* Convert unicode cmdline to ascii */ diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S index abb988a54c6..1e3184f6072 100644 --- a/arch/x86/boot/compressed/head_32.S +++ b/arch/x86/boot/compressed/head_32.S @@ -50,13 +50,6 @@ ENTRY(efi_pe_entry) pushl %eax pushl %esi pushl %ecx - - call reloc -reloc: - popl %ecx - subl reloc, %ecx - movl %ecx, BP_code32_start(%eax) - sub $0x4, %esp ENTRY(efi_stub_entry) @@ -70,7 +63,12 @@ ENTRY(efi_stub_entry) hlt jmp 1b 2: - movl BP_code32_start(%esi), %eax + call 3f +3: + popl %eax + subl $3b, %eax + subl BP_pref_address(%esi), %eax + add BP_code32_start(%esi), %eax leal preferred_addr(%eax), %eax jmp *%eax diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S index 92059b8f3f7..16f24e6dad7 100644 --- a/arch/x86/boot/compressed/head_64.S +++ b/arch/x86/boot/compressed/head_64.S @@ -217,8 +217,6 @@ ENTRY(efi_pe_entry) cmpq $0,%rax je 1f mov %rax, %rdx - leaq startup_32(%rip), %rax - movl %eax, BP_code32_start(%rdx) popq %rsi popq %rdi @@ -232,7 +230,12 @@ ENTRY(efi_stub_entry) hlt jmp 1b 2: - movl BP_code32_start(%esi), %eax + call 3f +3: + popq %rax + subq $3b, %rax + subq BP_pref_address(%rsi), %rax + add BP_code32_start(%esi), %eax leaq preferred_addr(%rax), %rax jmp *%rax diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S index 42571246217..9ec06a1f6d6 100644 --- a/arch/x86/boot/header.S +++ b/arch/x86/boot/header.S @@ -91,9 +91,10 @@ bs_die: .section ".bsdata", "a" bugger_off_msg: - .ascii "Use a boot loader.\r\n" + .ascii "Direct floppy boot is not supported. 
" + .ascii "Use a boot loader program instead.\r\n" .ascii "\n" - .ascii "Remove disk and press any key to reboot...\r\n" + .ascii "Remove disk and press any key to reboot ...\r\n" .byte 0 #ifdef CONFIG_EFI_STUB @@ -107,7 +108,7 @@ coff_header: #else .word 0x8664 # x86-64 #endif - .word 4 # nr_sections + .word 3 # nr_sections .long 0 # TimeDateStamp .long 0 # PointerToSymbolTable .long 1 # NumberOfSymbols @@ -249,25 +250,6 @@ section_table: .word 0 # NumberOfLineNumbers .long 0x60500020 # Characteristics (section flags) - # - # The offset & size fields are filled in by build.c. - # - .ascii ".bss" - .byte 0 - .byte 0 - .byte 0 - .byte 0 - .long 0 - .long 0x0 - .long 0 # Size of initialized data - # on disk - .long 0x0 - .long 0 # PointerToRelocations - .long 0 # PointerToLineNumbers - .word 0 # NumberOfRelocations - .word 0 # NumberOfLineNumbers - .long 0xc8000080 # Characteristics (section flags) - #endif /* CONFIG_EFI_STUB */ # Kernel attributes; used by setup. This is part 1 of the diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c index 971a0ce062a..94c54465002 100644 --- a/arch/x86/boot/tools/build.c +++ b/arch/x86/boot/tools/build.c @@ -141,7 +141,7 @@ static void usage(void) #ifdef CONFIG_EFI_STUB -static void update_pecoff_section_header_fields(char *section_name, u32 vma, u32 size, u32 datasz, u32 offset) +static void update_pecoff_section_header(char *section_name, u32 offset, u32 size) { unsigned int pe_header; unsigned short num_sections; @@ -162,10 +162,10 @@ static void update_pecoff_section_header_fields(char *section_name, u32 vma, u32 put_unaligned_le32(size, section + 0x8); /* section header vma field */ - put_unaligned_le32(vma, section + 0xc); + put_unaligned_le32(offset, section + 0xc); /* section header 'size of initialised data' field */ - put_unaligned_le32(datasz, section + 0x10); + put_unaligned_le32(size, section + 0x10); /* section header 'file offset' field */ put_unaligned_le32(offset, section + 0x14); @@ -177,11 +177,6 @@ static void update_pecoff_section_header_fields(char *section_name, u32 vma, u32 } } -static void update_pecoff_section_header(char *section_name, u32 offset, u32 size) -{ - update_pecoff_section_header_fields(section_name, offset, size, size, offset); -} - static void update_pecoff_setup_and_reloc(unsigned int size) { u32 setup_offset = 0x200; @@ -206,6 +201,9 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz) pe_header = get_unaligned_le32(&buf[0x3c]); + /* Size of image */ + put_unaligned_le32(file_sz, &buf[pe_header + 0x50]); + /* * Size of code: Subtract the size of the first sector (512 bytes) * which includes the header. 
@@ -220,22 +218,6 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz) update_pecoff_section_header(".text", text_start, text_sz); } -static void update_pecoff_bss(unsigned int file_sz, unsigned int init_sz) -{ - unsigned int pe_header; - unsigned int bss_sz = init_sz - file_sz; - - pe_header = get_unaligned_le32(&buf[0x3c]); - - /* Size of uninitialized data */ - put_unaligned_le32(bss_sz, &buf[pe_header + 0x24]); - - /* Size of image */ - put_unaligned_le32(init_sz, &buf[pe_header + 0x50]); - - update_pecoff_section_header_fields(".bss", file_sz, bss_sz, 0, 0); -} - #endif /* CONFIG_EFI_STUB */ @@ -286,9 +268,6 @@ int main(int argc, char ** argv) int fd; void *kernel; u32 crc = 0xffffffffUL; -#ifdef CONFIG_EFI_STUB - unsigned int init_sz; -#endif /* Defaults for old kernel */ #ifdef CONFIG_X86_32 @@ -359,9 +338,7 @@ int main(int argc, char ** argv) put_unaligned_le32(sys_size, &buf[0x1f4]); #ifdef CONFIG_EFI_STUB - update_pecoff_text(setup_sectors * 512, i + (sys_size * 16)); - init_sz = get_unaligned_le32(&buf[0x260]); - update_pecoff_bss(i + (sys_size * 16), init_sz); + update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz)); #ifdef CONFIG_X86_64 /* Yes, this is really how we defined it :( */ efi_stub_entry -= 0x200; diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S index 185fad49d86..586f41aac36 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_asm.S +++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S @@ -24,6 +24,10 @@ .align 16 .Lbswap_mask: .octa 0x000102030405060708090a0b0c0d0e0f +.Lpoly: + .octa 0xc2000000000000000000000000000001 +.Ltwo_one: + .octa 0x00000001000000000000000000000001 #define DATA %xmm0 #define SHASH %xmm1 @@ -130,3 +134,28 @@ ENTRY(clmul_ghash_update) .Lupdate_just_ret: ret ENDPROC(clmul_ghash_update) + +/* + * void clmul_ghash_setkey(be128 *shash, const u8 *key); + * + * Calculate hash_key << 1 mod poly + */ +ENTRY(clmul_ghash_setkey) + movaps .Lbswap_mask, BSWAP + movups (%rsi), %xmm0 + PSHUFB_XMM BSWAP %xmm0 + movaps %xmm0, %xmm1 + psllq $1, %xmm0 + psrlq $63, %xmm1 + movaps %xmm1, %xmm2 + pslldq $8, %xmm1 + psrldq $8, %xmm2 + por %xmm1, %xmm0 + # reduction + pshufd $0b00100100, %xmm2, %xmm1 + pcmpeqd .Ltwo_one, %xmm1 + pand .Lpoly, %xmm1 + pxor %xmm1, %xmm0 + movups %xmm0, (%rdi) + ret +ENDPROC(clmul_ghash_setkey) diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c index d785cf2c529..6759dd1135b 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_glue.c +++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c @@ -30,6 +30,8 @@ void clmul_ghash_mul(char *dst, const be128 *shash); void clmul_ghash_update(char *dst, const char *src, unsigned int srclen, const be128 *shash); +void clmul_ghash_setkey(be128 *shash, const u8 *key); + struct ghash_async_ctx { struct cryptd_ahash *cryptd_tfm; }; @@ -56,23 +58,13 @@ static int ghash_setkey(struct crypto_shash *tfm, const u8 *key, unsigned int keylen) { struct ghash_ctx *ctx = crypto_shash_ctx(tfm); - be128 *x = (be128 *)key; - u64 a, b; if (keylen != GHASH_BLOCK_SIZE) { crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } - /* perform multiplication by 'x' in GF(2^128) */ - a = be64_to_cpu(x->a); - b = be64_to_cpu(x->b); - - ctx->shash.a = (__be64)((b << 1) | (a >> 63)); - ctx->shash.b = (__be64)((a << 1) | (b >> 63)); - - if (a >> 63) - ctx->shash.b ^= cpu_to_be64(0xc2); + clmul_ghash_setkey(&ctx->shash, key); return 0; } diff --git a/arch/x86/crypto/sha512_ssse3_glue.c 
b/arch/x86/crypto/sha512_ssse3_glue.c index 9f5e71f0667..6cbd8df348d 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -141,7 +141,7 @@ static int sha512_ssse3_final(struct shash_desc *desc, u8 *out) /* save number of bits */ bits[1] = cpu_to_be64(sctx->count[0] << 3); - bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61); + bits[0] = cpu_to_be64(sctx->count[1] << 3) | sctx->count[0] >> 61; /* Pad out to 112 mod 128 and append length */ index = sctx->count[0] & 0x7f; diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S index c9305ef1d41..474dc1b59f7 100644 --- a/arch/x86/ia32/ia32entry.S +++ b/arch/x86/ia32/ia32entry.S @@ -151,16 +151,6 @@ ENTRY(ia32_sysenter_target) 1: movl (%rbp),%ebp _ASM_EXTABLE(1b,ia32_badarg) ASM_CLAC - - /* - * Sysenter doesn't filter flags, so we need to clear NT - * ourselves. To save a few cycles, we can check whether - * NT was set instead of doing an unconditional popfq. - */ - testl $X86_EFLAGS_NT,EFLAGS-ARGOFFSET(%rsp) - jnz sysenter_fix_flags -sysenter_flags_fixed: - orl $TS_COMPAT,TI_status+THREAD_INFO(%rsp,RIP-ARGOFFSET) testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) CFI_REMEMBER_STATE @@ -194,8 +184,6 @@ sysexit_from_sys_call: TRACE_IRQS_ON ENABLE_INTERRUPTS_SYSEXIT32 - CFI_RESTORE_STATE - #ifdef CONFIG_AUDITSYSCALL .macro auditsys_entry_common movl %esi,%r9d /* 6th arg: 4th syscall arg */ @@ -238,6 +226,7 @@ sysexit_from_sys_call: .endm sysenter_auditsys: + CFI_RESTORE_STATE auditsys_entry_common movl %ebp,%r9d /* reload 6th syscall arg */ jmp sysenter_dispatch @@ -246,11 +235,6 @@ sysexit_audit: auditsys_exit sysexit_from_sys_call #endif -sysenter_fix_flags: - pushq_cfi $(X86_EFLAGS_IF|X86_EFLAGS_FIXED) - popfq_cfi - jmp sysenter_flags_fixed - sysenter_tracesys: #ifdef CONFIG_AUDITSYSCALL testl $(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT),TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h index 01f15b227d7..9c999c1674f 100644 --- a/arch/x86/include/asm/elf.h +++ b/arch/x86/include/asm/elf.h @@ -155,9 +155,8 @@ do { \ #define elf_check_arch(x) \ ((x)->e_machine == EM_X86_64) -#define compat_elf_check_arch(x) \ - (elf_check_arch_ia32(x) || \ - (IS_ENABLED(CONFIG_X86_X32_ABI) && (x)->e_machine == EM_X86_64)) +#define compat_elf_check_arch(x) \ + (elf_check_arch_ia32(x) || (x)->e_machine == EM_X86_64) #if __USER32_DS != __USER_DS # error "The following code assumes __USER32_DS == __USER_DS" diff --git a/arch/x86/include/asm/espfix.h b/arch/x86/include/asm/espfix.h deleted file mode 100644 index 99efebb2f69..00000000000 --- a/arch/x86/include/asm/espfix.h +++ /dev/null @@ -1,16 +0,0 @@ -#ifndef _ASM_X86_ESPFIX_H -#define _ASM_X86_ESPFIX_H - -#ifdef CONFIG_X86_64 - -#include <asm/percpu.h> - -DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack); -DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr); - -extern void init_espfix_bsp(void); -extern void init_espfix_ap(void); - -#endif /* CONFIG_X86_64 */ - -#endif /* _ASM_X86_ESPFIX_H */ diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h index 9d7d36c82fc..0dc7d9e21c3 100644 --- a/arch/x86/include/asm/fixmap.h +++ b/arch/x86/include/asm/fixmap.h @@ -123,14 +123,14 @@ enum fixed_addresses { __end_of_permanent_fixed_addresses, /* - * 512 temporary boot-time mappings, used by early_ioremap(), + * 256 temporary boot-time mappings, used by early_ioremap(), * before ioremap() is functional. 
* - * If necessary we round it up to the next 512 pages boundary so + * If necessary we round it up to the next 256 pages boundary so * that we can have a single pgd entry and a single pte table: */ #define NR_FIX_BTMAPS 64 -#define FIX_BTMAPS_SLOTS 8 +#define FIX_BTMAPS_SLOTS 4 #define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS) FIX_BTMAP_END = (__end_of_permanent_fixed_addresses ^ diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h index 68c05398bba..a8091216963 100644 --- a/arch/x86/include/asm/hugetlb.h +++ b/arch/x86/include/asm/hugetlb.h @@ -52,7 +52,6 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, static inline void huge_ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { - ptep_clear_flush(vma, addr, ptep); } static inline int huge_pte_none(pte_t pte) diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h index 0a8b519226b..bba3cf88e62 100644 --- a/arch/x86/include/asm/irqflags.h +++ b/arch/x86/include/asm/irqflags.h @@ -129,7 +129,7 @@ static inline notrace unsigned long arch_local_irq_save(void) #define PARAVIRT_ADJUST_EXCEPTION_FRAME /* */ -#define INTERRUPT_RETURN jmp native_iret +#define INTERRUPT_RETURN iretq #define USERGS_SYSRET64 \ swapgs; \ sysretq; diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 4c481e751e8..3741c653767 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -445,7 +445,7 @@ struct kvm_vcpu_arch { bool nmi_injected; /* Trying to inject an NMI this entry */ struct mtrr_state_type mtrr_state; - u64 pat; + u32 pat; int switch_db_regs; unsigned long db[KVM_NR_DB_REGS]; @@ -463,7 +463,6 @@ struct kvm_vcpu_arch { u64 mmio_gva; unsigned access; gfn_t mmio_gfn; - u64 mmio_gen; struct kvm_pmu pmu; @@ -953,20 +952,6 @@ static inline void kvm_inject_gp(struct kvm_vcpu *vcpu, u32 error_code) kvm_queue_exception_e(vcpu, GP_VECTOR, error_code); } -static inline u64 get_canonical(u64 la) -{ - return ((int64_t)la << 16) >> 16; -} - -static inline bool is_noncanonical_address(u64 la) -{ -#ifdef CONFIG_X86_64 - return get_canonical(la) != la; -#else - return false; -#endif -} - #define TSS_IOPB_BASE_OFFSET 0x66 #define TSS_BASE_SIZE 0x68 #define TSS_IOPB_SIZE (65536 / 8) @@ -1025,7 +1010,7 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v); void kvm_vcpu_reset(struct kvm_vcpu *vcpu); void kvm_define_shared_msr(unsigned index, u32 msr); -int kvm_set_shared_msr(unsigned index, u64 val, u64 mask); +void kvm_set_shared_msr(unsigned index, u64 val, u64 mask); bool kvm_is_linear_rip(struct kvm_vcpu *vcpu, unsigned long linear_rip); diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index b1609f2c524..2d883440cb9 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -61,8 +61,6 @@ typedef struct { pteval_t pte; } pte_t; #define MODULES_VADDR _AC(0xffffffffa0000000, UL) #define MODULES_END _AC(0xffffffffff000000, UL) #define MODULES_LEN (MODULES_END - MODULES_VADDR) -#define ESPFIX_PGD_ENTRY _AC(-2, UL) -#define ESPFIX_BASE_ADDR (ESPFIX_PGD_ENTRY << PGDIR_SHIFT) #define EARLY_DYNAMIC_PAGE_TABLES 64 diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h index 68e9f007cd4..942a08623a1 100644 --- a/arch/x86/include/asm/ptrace.h +++ b/arch/x86/include/asm/ptrace.h @@ -232,22 +232,6 @@ static inline unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, #define ARCH_HAS_USER_SINGLE_STEP_INFO -/* - * 
When hitting ptrace_stop(), we cannot return using SYSRET because - * that does not restore the full CPU state, only a minimal set. The - * ptracer can change arbitrary register values, which is usually okay - * because the usual ptrace stops run off the signal delivery path which - * forces IRET; however, ptrace_event() stops happen in arbitrary places - * in the kernel and don't force IRET path. - * - * So force IRET path after a ptrace stop. - */ -#define arch_ptrace_stop_needed(code, info) \ -({ \ - set_thread_flag(TIF_NOTIFY_RESUME); \ - false; \ -}) - struct user_desc; extern int do_get_thread_area(struct task_struct *p, int idx, struct user_desc __user *info); diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h index 2e327f114a1..b7bf3505e1e 100644 --- a/arch/x86/include/asm/setup.h +++ b/arch/x86/include/asm/setup.h @@ -62,8 +62,6 @@ static inline void x86_ce4100_early_setup(void) { } #ifndef _SETUP -#include <asm/espfix.h> - /* * This is set up by the setup-routine at boot-time */ diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h index 60bd2748a7c..095b21507b6 100644 --- a/arch/x86/include/asm/topology.h +++ b/arch/x86/include/asm/topology.h @@ -119,10 +119,9 @@ static inline void setup_node_to_cpumask_map(void) { } extern const struct cpumask *cpu_coregroup_mask(int cpu); +#ifdef ENABLE_TOPO_DEFINES #define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id) #define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id) - -#ifdef ENABLE_TOPO_DEFINES #define topology_core_cpumask(cpu) (per_cpu(cpu_core_map, cpu)) #define topology_thread_cpumask(cpu) (per_cpu(cpu_sibling_map, cpu)) diff --git a/arch/x86/include/uapi/asm/processor-flags.h b/arch/x86/include/uapi/asm/processor-flags.h index b16e6d28f14..54991a74604 100644 --- a/arch/x86/include/uapi/asm/processor-flags.h +++ b/arch/x86/include/uapi/asm/processor-flags.h @@ -6,7 +6,7 @@ * EFLAGS bits */ #define X86_EFLAGS_CF 0x00000001 /* Carry Flag */ -#define X86_EFLAGS_FIXED 0x00000002 /* Bit 1 - always on */ +#define X86_EFLAGS_BIT1 0x00000002 /* Bit 1 - always on */ #define X86_EFLAGS_PF 0x00000004 /* Parity Flag */ #define X86_EFLAGS_AF 0x00000010 /* Auxiliary carry Flag */ #define X86_EFLAGS_ZF 0x00000040 /* Zero Flag */ diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index 111eb356dbe..7bd3bd31010 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -27,7 +27,6 @@ obj-$(CONFIG_X86_64) += sys_x86_64.o x8664_ksyms_64.o obj-y += syscall_$(BITS).o obj-$(CONFIG_X86_64) += vsyscall_64.o obj-$(CONFIG_X86_64) += vsyscall_emu_64.o -obj-$(CONFIG_X86_ESPFIX64) += espfix_64.o obj-y += bootflag.o e820.o obj-y += pci-dma.o quirks.o topology.o kdebugfs.o obj-y += alternative.o i8253.o pci-nommu.o hw_breakpoint.o diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 033eb44dc66..904611bf0e5 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -1263,7 +1263,7 @@ void __cpuinit setup_local_APIC(void) unsigned int value, queued; int i, j, acked = 0; unsigned long long tsc = 0, ntsc; - long long max_loops = cpu_khz ? 
cpu_khz : 1000000; + long long max_loops = cpu_khz; if (cpu_has_tsc) rdtscll(tsc); @@ -1360,7 +1360,7 @@ void __cpuinit setup_local_APIC(void) break; } if (queued) { - if (cpu_has_tsc && cpu_khz) { + if (cpu_has_tsc) { rdtscll(ntsc); max_loops = (cpu_khz << 10) - (ntsc - tsc); } else diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 6a7e3e9cffc..deeb48d9459 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1134,7 +1134,7 @@ void syscall_init(void) /* Flags to clear on syscall */ wrmsrl(MSR_SYSCALL_MASK, X86_EFLAGS_TF|X86_EFLAGS_DF|X86_EFLAGS_IF| - X86_EFLAGS_IOPL|X86_EFLAGS_AC|X86_EFLAGS_NT); + X86_EFLAGS_IOPL|X86_EFLAGS_AC); } /* diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index 8533e69d2b8..f187806dfc1 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -154,21 +154,6 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c) setup_clear_cpu_cap(X86_FEATURE_ERMS); } } - - /* - * Intel Quark Core DevMan_001.pdf section 6.4.11 - * "The operating system also is required to invalidate (i.e., flush) - * the TLB when any changes are made to any of the page table entries. - * The operating system must reload CR3 to cause the TLB to be flushed" - * - * As a result cpu_has_pge() in arch/x86/include/asm/tlbflush.h should - * be false so that __flush_tlb_all() causes CR3 insted of CR4.PGE - * to be modified - */ - if (c->x86 == 5 && c->x86_model == 9) { - pr_info("Disabling PGE capability bit\n"); - setup_clear_cpu_cap(X86_FEATURE_PGE); - } } #ifdef CONFIG_X86_32 diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c index 123d9e2271d..a69b67d968d 100644 --- a/arch/x86/kernel/cpu/perf_event.c +++ b/arch/x86/kernel/cpu/perf_event.c @@ -1252,20 +1252,10 @@ void perf_events_lapic_init(void) static int __kprobes perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs) { - int ret; - u64 start_clock; - u64 finish_clock; - if (!atomic_read(&active_events)) return NMI_DONE; - start_clock = local_clock(); - ret = x86_pmu.handle_irq(regs); - finish_clock = local_clock(); - - perf_sample_event_took(finish_clock - start_clock); - - return ret; + return x86_pmu.handle_irq(regs); } struct event_constraint emptyconstraint; diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c index b45ac6affa9..a9e22073bd5 100644 --- a/arch/x86/kernel/cpu/perf_event_intel.c +++ b/arch/x86/kernel/cpu/perf_event_intel.c @@ -1199,15 +1199,6 @@ again: intel_pmu_lbr_read(); /* - * CondChgd bit 63 doesn't mean any overflow status. Ignore - * and clear the bit. - */ - if (__test_and_clear_bit(63, (unsigned long *)&status)) { - if (!status) - goto done; - } - - /* * PEBS overflow sets bit 62 in the global status register */ if (__test_and_clear_bit(62, (unsigned long *)&status)) { diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c index 4f7c82cdd0f..63bdb29b254 100644 --- a/arch/x86/kernel/early-quirks.c +++ b/arch/x86/kernel/early-quirks.c @@ -202,15 +202,18 @@ static void __init intel_remapping_check(int num, int slot, int func) revision = read_pci_config_byte(num, slot, func, PCI_REVISION_ID); /* - * Revision <= 13 of all triggering devices id in this quirk - * have a problem draining interrupts when irq remapping is - * enabled, and should be flagged as broken. Additionally - * revision 0x22 of device id 0x3405 has this problem. 
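The early-quirks hunk above narrows the interrupt-remapping quirk back from a revision range to exact matches: the reverted merge had flagged every triggering device with revision <= 0x13 (plus revision 0x22 of device 0x3405), while the restored code flags only revision 0x13 exactly, plus revisions 0x12 and 0x22 of device 0x3405. A minimal standalone sketch of the two predicates side by side, with hypothetical function names (this is just the decision logic from the hunk, not the kernel code):

#include <stdbool.h>
#include <stdint.h>

/* Range form carried by the reverted merge. */
static bool remapping_broken_range(uint16_t device, uint8_t revision)
{
    return revision <= 0x13 ||
           (device == 0x3405 && revision == 0x22);
}

/* Exact-match form restored by this revert. */
static bool remapping_broken_exact(uint16_t device, uint8_t revision)
{
    return revision == 0x13 ||
           (device == 0x3405 && (revision == 0x12 || revision == 0x22));
}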
+ * Revision 13 of all triggering devices id in this quirk have + * a problem draining interrupts when irq remapping is enabled, + * and should be flagged as broken. Additionally revisions 0x12 + * and 0x22 of device id 0x3405 has this problem. */ - if (revision <= 0x13) + if (revision == 0x13) set_irq_remapping_broken(); - else if (device == 0x3405 && revision == 0x22) + else if ((device == 0x3405) && + ((revision == 0x12) || + (revision == 0x22))) set_irq_remapping_broken(); + } #define QFLAG_APPLY_ONCE 0x1 diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S index 5c38e2b298c..94e52cf064b 100644 --- a/arch/x86/kernel/entry_32.S +++ b/arch/x86/kernel/entry_32.S @@ -434,9 +434,8 @@ sysenter_past_esp: jnz sysenter_audit sysenter_do_call: cmpl $(NR_syscalls), %eax - jae sysenter_badsys + jae syscall_badsys call *sys_call_table(,%eax,4) -sysenter_after_call: movl %eax,PT_EAX(%esp) LOCKDEP_SYS_EXIT DISABLE_INTERRUPTS(CLBR_ANY) @@ -517,7 +516,6 @@ ENTRY(system_call) jae syscall_badsys syscall_call: call *sys_call_table(,%eax,4) -syscall_after_call: movl %eax,PT_EAX(%esp) # store the return value syscall_exit: LOCKDEP_SYS_EXIT @@ -532,7 +530,6 @@ syscall_exit: restore_all: TRACE_IRQS_IRET restore_all_notrace: -#ifdef CONFIG_X86_ESPFIX32 movl PT_EFLAGS(%esp), %eax # mix EFLAGS, SS and CS # Warning: PT_OLDSS(%esp) contains the wrong/random values if we # are returning to the kernel. @@ -543,7 +540,6 @@ restore_all_notrace: cmpl $((SEGMENT_LDT << 8) | USER_RPL), %eax CFI_REMEMBER_STATE je ldt_ss # returning to user-space with LDT SS -#endif restore_nocheck: RESTORE_REGS 4 # skip orig_eax/error_code irq_return: @@ -556,9 +552,13 @@ ENTRY(iret_exc) .previous _ASM_EXTABLE(irq_return,iret_exc) -#ifdef CONFIG_X86_ESPFIX32 CFI_RESTORE_STATE ldt_ss: + larl PT_OLDSS(%esp), %eax + jnz restore_nocheck + testl $0x00400000, %eax # returning to 32bit stack? + jnz restore_nocheck # allright, normal return + #ifdef CONFIG_PARAVIRT /* * The kernel can't run on a non-flat stack if paravirt mode @@ -600,7 +600,6 @@ ldt_ss: lss (%esp), %esp /* switch to espfix segment */ CFI_ADJUST_CFA_OFFSET -8 jmp restore_nocheck -#endif CFI_ENDPROC ENDPROC(system_call) @@ -691,13 +690,8 @@ syscall_fault: END(syscall_fault) syscall_badsys: - movl $-ENOSYS,%eax - jmp syscall_after_call -END(syscall_badsys) - -sysenter_badsys: - movl $-ENOSYS,%eax - jmp sysenter_after_call + movl $-ENOSYS,PT_EAX(%esp) + jmp resume_userspace END(syscall_badsys) CFI_ENDPROC /* @@ -713,7 +707,6 @@ END(syscall_badsys) * the high word of the segment base from the GDT and swiches to the * normal stack and adjusts ESP with the matching offset. 
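The comment above describes how the 32-bit espfix path rebuilds a flat stack pointer: the GDT_ESPFIX_SS descriptor supplies the high bits of the segment base, and the truncated 16-bit stack offset is added to it. A minimal C sketch of that reconstruction, assuming the segment base's low 16 bits are zero (the name is hypothetical; the real fixup is the assembly that follows in the macro):

#include <stdint.h>

/* Hypothetical sketch: flat ESP = espfix segment base + 16-bit SP. */
static uint32_t flat_esp_from_espfix(uint32_t segment_base, uint16_t sp16)
{
    /* The segment base carries the bits a 16-bit stack pointer cannot. */
    return segment_base + sp16;
}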
*/ -#ifdef CONFIG_X86_ESPFIX32 /* fixup the stack */ mov GDT_ESPFIX_SS + 4, %al /* bits 16..23 */ mov GDT_ESPFIX_SS + 7, %ah /* bits 24..31 */ @@ -723,10 +716,8 @@ END(syscall_badsys) pushl_cfi %eax lss (%esp), %esp /* switch to the normal stack segment */ CFI_ADJUST_CFA_OFFSET -8 -#endif .endm .macro UNWIND_ESPFIX_STACK -#ifdef CONFIG_X86_ESPFIX32 movl %ss, %eax /* see if on espfix stack */ cmpw $__ESPFIX_SS, %ax @@ -737,7 +728,6 @@ END(syscall_badsys) /* switch to normal stack */ FIXUP_ESPFIX_STACK 27: -#endif .endm /* @@ -1345,13 +1335,11 @@ END(debug) ENTRY(nmi) RING0_INT_FRAME ASM_CLAC -#ifdef CONFIG_X86_ESPFIX32 pushl_cfi %eax movl %ss, %eax cmpw $__ESPFIX_SS, %ax popl_cfi %eax je nmi_espfix_stack -#endif cmpl $ia32_sysenter_target,(%esp) je nmi_stack_fixup pushl_cfi %eax @@ -1391,7 +1379,6 @@ nmi_debug_stack_check: FIX_STACK 24, nmi_stack_correct, 1 jmp nmi_stack_correct -#ifdef CONFIG_X86_ESPFIX32 nmi_espfix_stack: /* We have a RING0_INT_FRAME here. * @@ -1413,7 +1400,6 @@ nmi_espfix_stack: lss 12+4(%esp), %esp # back to espfix stack CFI_ADJUST_CFA_OFFSET -24 jmp irq_return -#endif CFI_ENDPROC END(nmi) diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S index 8c6b5c2284c..7ac938a4bfa 100644 --- a/arch/x86/kernel/entry_64.S +++ b/arch/x86/kernel/entry_64.S @@ -58,7 +58,6 @@ #include <asm/asm.h> #include <asm/context_tracking.h> #include <asm/smap.h> -#include <asm/pgtable_types.h> #include <linux/err.h> /* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this. */ @@ -366,7 +365,7 @@ ENDPROC(native_usergs_sysret64) /*CFI_REL_OFFSET ss,0*/ pushq_cfi %rax /* rsp */ CFI_REL_OFFSET rsp,0 - pushq_cfi $(X86_EFLAGS_IF|X86_EFLAGS_FIXED) /* eflags - interrupts on */ + pushq_cfi $(X86_EFLAGS_IF|X86_EFLAGS_BIT1) /* eflags - interrupts on */ /*CFI_REL_OFFSET rflags,0*/ pushq_cfi $__KERNEL_CS /* cs */ /*CFI_REL_OFFSET cs,0*/ @@ -1057,45 +1056,12 @@ restore_args: irq_return: INTERRUPT_RETURN + _ASM_EXTABLE(irq_return, bad_iret) +#ifdef CONFIG_PARAVIRT ENTRY(native_iret) - /* - * Are we returning to a stack segment from the LDT? Note: in - * 64-bit mode SS:RSP on the exception stack is always valid. - */ -#ifdef CONFIG_X86_ESPFIX64 - testb $4,(SS-RIP)(%rsp) - jnz native_irq_return_ldt -#endif - -native_irq_return_iret: iretq - _ASM_EXTABLE(native_irq_return_iret, bad_iret) - -#ifdef CONFIG_X86_ESPFIX64 -native_irq_return_ldt: - pushq_cfi %rax - pushq_cfi %rdi - SWAPGS - movq PER_CPU_VAR(espfix_waddr),%rdi - movq %rax,(0*8)(%rdi) /* RAX */ - movq (2*8)(%rsp),%rax /* RIP */ - movq %rax,(1*8)(%rdi) - movq (3*8)(%rsp),%rax /* CS */ - movq %rax,(2*8)(%rdi) - movq (4*8)(%rsp),%rax /* RFLAGS */ - movq %rax,(3*8)(%rdi) - movq (6*8)(%rsp),%rax /* SS */ - movq %rax,(5*8)(%rdi) - movq (5*8)(%rsp),%rax /* RSP */ - movq %rax,(4*8)(%rdi) - andl $0xffff0000,%eax - popq_cfi %rdi - orq PER_CPU_VAR(espfix_stack),%rax - SWAPGS - movq %rax,%rsp - popq_cfi %rax - jmp native_irq_return_iret + _ASM_EXTABLE(native_iret, bad_iret) #endif .section .fixup,"ax" @@ -1161,40 +1127,9 @@ ENTRY(retint_kernel) call preempt_schedule_irq jmp exit_intr #endif - CFI_ENDPROC -END(common_interrupt) - /* - * If IRET takes a fault on the espfix stack, then we - * end up promoting it to a doublefault. In that case, - * modify the stack to make it look like we just entered - * the #GP handler from user space, similar to bad_iret. - */ -#ifdef CONFIG_X86_ESPFIX64 - ALIGN -__do_double_fault: - XCPT_FRAME 1 RDI+8 - movq RSP(%rdi),%rax /* Trap on the espfix stack? 
*/ - sarq $PGDIR_SHIFT,%rax - cmpl $ESPFIX_PGD_ENTRY,%eax - jne do_double_fault /* No, just deliver the fault */ - cmpl $__KERNEL_CS,CS(%rdi) - jne do_double_fault - movq RIP(%rdi),%rax - cmpq $native_irq_return_iret,%rax - jne do_double_fault /* This shouldn't happen... */ - movq PER_CPU_VAR(kernel_stack),%rax - subq $(6*8-KERNEL_STACK_OFFSET),%rax /* Reset to original stack */ - movq %rax,RSP(%rdi) - movq $0,(%rax) /* Missing (lost) #GP error code */ - movq $general_protection,RIP(%rdi) - retq CFI_ENDPROC -END(__do_double_fault) -#else -# define __do_double_fault do_double_fault -#endif - +END(common_interrupt) /* * End of kprobes section */ @@ -1363,7 +1298,7 @@ zeroentry overflow do_overflow zeroentry bounds do_bounds zeroentry invalid_op do_invalid_op zeroentry device_not_available do_device_not_available -paranoiderrorentry double_fault __do_double_fault +paranoiderrorentry double_fault do_double_fault zeroentry coprocessor_segment_overrun do_coprocessor_segment_overrun errorentry invalid_TSS do_invalid_TSS errorentry segment_not_present do_segment_not_present @@ -1650,7 +1585,7 @@ error_sti: */ error_kernelspace: incl %ebx - leaq native_irq_return_iret(%rip),%rcx + leaq irq_return(%rip),%rcx cmpq %rcx,RIP+8(%rsp) je error_swapgs movl %ecx,%eax /* zero extend */ diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c deleted file mode 100644 index 94d857fb103..00000000000 --- a/arch/x86/kernel/espfix_64.c +++ /dev/null @@ -1,208 +0,0 @@ -/* ----------------------------------------------------------------------- * - * - * Copyright 2014 Intel Corporation; author: H. Peter Anvin - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * - * ----------------------------------------------------------------------- */ - -/* - * The IRET instruction, when returning to a 16-bit segment, only - * restores the bottom 16 bits of the user space stack pointer. This - * causes some 16-bit software to break, but it also leaks kernel state - * to user space. - * - * This works around this by creating percpu "ministacks", each of which - * is mapped 2^16 times 64K apart. When we detect that the return SS is - * on the LDT, we copy the IRET frame to the ministack and use the - * relevant alias to return to userspace. The ministacks are mapped - * readonly, so if the IRET fault we promote #GP to #DF which is an IST - * vector and thus has its own stack; we then do the fixup in the #DF - * handler. - * - * This file sets up the ministacks and the related page tables. The - * actual ministack invocation is in entry_64.S. - */ - -#include <linux/init.h> -#include <linux/init_task.h> -#include <linux/kernel.h> -#include <linux/percpu.h> -#include <linux/gfp.h> -#include <linux/random.h> -#include <asm/pgtable.h> -#include <asm/pgalloc.h> -#include <asm/setup.h> -#include <asm/espfix.h> - -/* - * Note: we only need 6*8 = 48 bytes for the espfix stack, but round - * it up to a cache line to avoid unnecessary sharing. - */ -#define ESPFIX_STACK_SIZE (8*8UL) -#define ESPFIX_STACKS_PER_PAGE (PAGE_SIZE/ESPFIX_STACK_SIZE) - -/* There is address space for how many espfix pages? 
*/ -#define ESPFIX_PAGE_SPACE (1UL << (PGDIR_SHIFT-PAGE_SHIFT-16)) - -#define ESPFIX_MAX_CPUS (ESPFIX_STACKS_PER_PAGE * ESPFIX_PAGE_SPACE) -#if CONFIG_NR_CPUS > ESPFIX_MAX_CPUS -# error "Need more than one PGD for the ESPFIX hack" -#endif - -#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO) - -/* This contains the *bottom* address of the espfix stack */ -DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack); -DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr); - -/* Initialization mutex - should this be a spinlock? */ -static DEFINE_MUTEX(espfix_init_mutex); - -/* Page allocation bitmap - each page serves ESPFIX_STACKS_PER_PAGE CPUs */ -#define ESPFIX_MAX_PAGES DIV_ROUND_UP(CONFIG_NR_CPUS, ESPFIX_STACKS_PER_PAGE) -static void *espfix_pages[ESPFIX_MAX_PAGES]; - -static __page_aligned_bss pud_t espfix_pud_page[PTRS_PER_PUD] - __aligned(PAGE_SIZE); - -static unsigned int page_random, slot_random; - -/* - * This returns the bottom address of the espfix stack for a specific CPU. - * The math allows for a non-power-of-two ESPFIX_STACK_SIZE, in which case - * we have to account for some amount of padding at the end of each page. - */ -static inline unsigned long espfix_base_addr(unsigned int cpu) -{ - unsigned long page, slot; - unsigned long addr; - - page = (cpu / ESPFIX_STACKS_PER_PAGE) ^ page_random; - slot = (cpu + slot_random) % ESPFIX_STACKS_PER_PAGE; - addr = (page << PAGE_SHIFT) + (slot * ESPFIX_STACK_SIZE); - addr = (addr & 0xffffUL) | ((addr & ~0xffffUL) << 16); - addr += ESPFIX_BASE_ADDR; - return addr; -} - -#define PTE_STRIDE (65536/PAGE_SIZE) -#define ESPFIX_PTE_CLONES (PTRS_PER_PTE/PTE_STRIDE) -#define ESPFIX_PMD_CLONES PTRS_PER_PMD -#define ESPFIX_PUD_CLONES (65536/(ESPFIX_PTE_CLONES*ESPFIX_PMD_CLONES)) - -#define PGTABLE_PROT ((_KERNPG_TABLE & ~_PAGE_RW) | _PAGE_NX) - -static void init_espfix_random(void) -{ - unsigned long rand; - - /* - * This is run before the entropy pools are initialized, - * but this is hopefully better than nothing. - */ - if (!arch_get_random_long(&rand)) { - /* The constant is an arbitrary large prime */ - rdtscll(rand); - rand *= 0xc345c6b72fd16123UL; - } - - slot_random = rand % ESPFIX_STACKS_PER_PAGE; - page_random = (rand / ESPFIX_STACKS_PER_PAGE) - & (ESPFIX_PAGE_SPACE - 1); -} - -void __init init_espfix_bsp(void) -{ - pgd_t *pgd_p; - pteval_t ptemask; - - ptemask = __supported_pte_mask; - - /* Install the espfix pud into the kernel page directory */ - pgd_p = &init_level4_pgt[pgd_index(ESPFIX_BASE_ADDR)]; - pgd_populate(&init_mm, pgd_p, (pud_t *)espfix_pud_page); - - /* Randomize the locations */ - init_espfix_random(); - - /* The rest is the same as for any other processor */ - init_espfix_ap(); -} - -void init_espfix_ap(void) -{ - unsigned int cpu, page; - unsigned long addr; - pud_t pud, *pud_p; - pmd_t pmd, *pmd_p; - pte_t pte, *pte_p; - int n; - void *stack_page; - pteval_t ptemask; - - /* We only have to do this once... */ - if (likely(this_cpu_read(espfix_stack))) - return; /* Already initialized */ - - cpu = smp_processor_id(); - addr = espfix_base_addr(cpu); - page = cpu/ESPFIX_STACKS_PER_PAGE; - - /* Did another CPU already set this up? */ - stack_page = ACCESS_ONCE(espfix_pages[page]); - if (likely(stack_page)) - goto done; - - mutex_lock(&espfix_init_mutex); - - /* Did we race on the lock? 
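The address math in espfix_base_addr() above keeps the low 16 bits of the per-CPU offset in place and shifts everything above them up by a further 16 bits before adding ESPFIX_BASE_ADDR, so the part of the address a 16-bit SS:SP can express stays put while the rest moves out of its reach. A self-contained sketch of just that arithmetic, with the randomization zeroed and the base stubbed to a made-up constant (sizes taken from the deleted file: 64-byte stacks, 4 KiB pages):

#include <stdint.h>
#include <stdio.h>

#define SKETCH_BASE             0xffffff0000000000ULL /* stand-in, not the real ESPFIX_BASE_ADDR */
#define SKETCH_STACK_SIZE       64ULL
#define SKETCH_STACKS_PER_PAGE  (4096ULL / SKETCH_STACK_SIZE)

static uint64_t sketch_espfix_base(unsigned int cpu)
{
    uint64_t page = cpu / SKETCH_STACKS_PER_PAGE;
    uint64_t slot = cpu % SKETCH_STACKS_PER_PAGE;
    uint64_t addr = (page << 12) + slot * SKETCH_STACK_SIZE;

    /* Same bit split as the deleted helper: low 16 bits stay in place,
     * everything above them is pushed up by 16 bits. */
    addr = (addr & 0xffffULL) | ((addr & ~0xffffULL) << 16);
    return SKETCH_BASE + addr;
}

int main(void)
{
    /* CPU 1 sits 64 bytes above CPU 0; CPU 1024 is the first whose
     * linear offset reaches bit 16, so it jumps 4 GiB instead. */
    printf("%#llx\n%#llx\n%#llx\n",
           (unsigned long long)sketch_espfix_base(0),
           (unsigned long long)sketch_espfix_base(1),
           (unsigned long long)sketch_espfix_base(1024));
    return 0;
}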
*/ - stack_page = ACCESS_ONCE(espfix_pages[page]); - if (stack_page) - goto unlock_done; - - ptemask = __supported_pte_mask; - - pud_p = &espfix_pud_page[pud_index(addr)]; - pud = *pud_p; - if (!pud_present(pud)) { - pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP); - pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask)); - paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); - for (n = 0; n < ESPFIX_PUD_CLONES; n++) - set_pud(&pud_p[n], pud); - } - - pmd_p = pmd_offset(&pud, addr); - pmd = *pmd_p; - if (!pmd_present(pmd)) { - pte_p = (pte_t *)__get_free_page(PGALLOC_GFP); - pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask)); - paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT); - for (n = 0; n < ESPFIX_PMD_CLONES; n++) - set_pmd(&pmd_p[n], pmd); - } - - pte_p = pte_offset_kernel(&pmd, addr); - stack_page = (void *)__get_free_page(GFP_KERNEL); - pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask)); - for (n = 0; n < ESPFIX_PTE_CLONES; n++) - set_pte(&pte_p[n*PTE_STRIDE], pte); - - /* Job is done for this CPU and any CPU which shares this page */ - ACCESS_ONCE(espfix_pages[page]) = stack_page; - -unlock_done: - mutex_unlock(&espfix_init_mutex); -done: - this_cpu_write(espfix_stack, addr); - this_cpu_write(espfix_waddr, (unsigned long)stack_page - + (addr & ~PAGE_MASK)); -} diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 1ffc32dbe45..e6253195a30 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -659,8 +659,8 @@ ftrace_modify_code(unsigned long ip, unsigned const char *old_code, ret = -EPERM; goto out; } - out: run_sync(); + out: return ret; fail_update: diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S index df63cae573e..73afd11799c 100644 --- a/arch/x86/kernel/head_32.S +++ b/arch/x86/kernel/head_32.S @@ -566,10 +566,6 @@ ENDPROC(early_idt_handlers) /* This is global to keep gas from relaxing the jumps */ ENTRY(early_idt_handler) cld - - cmpl $2,(%esp) # X86_TRAP_NMI - je is_nmi # Ignore NMI - cmpl $2,%ss:early_recursion_flag je hlt_loop incl %ss:early_recursion_flag @@ -620,9 +616,8 @@ ex_entry: pop %edx pop %ecx pop %eax - decl %ss:early_recursion_flag -is_nmi: addl $8,%esp /* drop vector number and error code */ + decl %ss:early_recursion_flag iret ENDPROC(early_idt_handler) diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index f2a9a2aa98f..a8368608ab4 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -343,9 +343,6 @@ early_idt_handlers: ENTRY(early_idt_handler) cld - cmpl $2,(%rsp) # X86_TRAP_NMI - je is_nmi # Ignore NMI - cmpl $2,early_recursion_flag(%rip) jz 1f incl early_recursion_flag(%rip) @@ -408,9 +405,8 @@ ENTRY(early_idt_handler) popq %rdx popq %rcx popq %rax - decl early_recursion_flag(%rip) -is_nmi: addq $16,%rsp # drop vector number and error code + decl early_recursion_flag(%rip) INTERRUPT_RETURN ENDPROC(early_idt_handler) diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c index b03ff184254..f7ea30dce23 100644 --- a/arch/x86/kernel/i387.c +++ b/arch/x86/kernel/i387.c @@ -86,19 +86,10 @@ EXPORT_SYMBOL(__kernel_fpu_begin); void __kernel_fpu_end(void) { - if (use_eager_fpu()) { - /* - * For eager fpu, most the time, tsk_used_math() is true. - * Restore the user math as we are done with the kernel usage. - * At few instances during thread exit, signal handling etc, - * tsk_used_math() is false. Those few places will take proper - * actions, so we don't need to restore the math here. 
- */ - if (likely(tsk_used_math(current))) - math_state_restore(); - } else { + if (use_eager_fpu()) + math_state_restore(); + else stts(); - } } EXPORT_SYMBOL(__kernel_fpu_end); diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c index c37886d759c..ebc98739892 100644 --- a/arch/x86/kernel/ldt.c +++ b/arch/x86/kernel/ldt.c @@ -229,11 +229,6 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode) } } - if (!IS_ENABLED(CONFIG_X86_16BIT) && !ldt_info.seg_32bit) { - error = -EINVAL; - goto out_unlock; - } - fill_ldt(&ldt, &ldt_info); if (oldmode) ldt.avl = 0; diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c index a1da6737ba5..3f08f34f93e 100644 --- a/arch/x86/kernel/paravirt_patch_64.c +++ b/arch/x86/kernel/paravirt_patch_64.c @@ -6,6 +6,7 @@ DEF_NATIVE(pv_irq_ops, irq_disable, "cli"); DEF_NATIVE(pv_irq_ops, irq_enable, "sti"); DEF_NATIVE(pv_irq_ops, restore_fl, "pushq %rdi; popfq"); DEF_NATIVE(pv_irq_ops, save_fl, "pushfq; popq %rax"); +DEF_NATIVE(pv_cpu_ops, iret, "iretq"); DEF_NATIVE(pv_mmu_ops, read_cr2, "movq %cr2, %rax"); DEF_NATIVE(pv_mmu_ops, read_cr3, "movq %cr3, %rax"); DEF_NATIVE(pv_mmu_ops, write_cr3, "movq %rdi, %cr3"); @@ -49,6 +50,7 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf, PATCH_SITE(pv_irq_ops, save_fl); PATCH_SITE(pv_irq_ops, irq_enable); PATCH_SITE(pv_irq_ops, irq_disable); + PATCH_SITE(pv_cpu_ops, iret); PATCH_SITE(pv_cpu_ops, irq_enable_sysexit); PATCH_SITE(pv_cpu_ops, usergs_sysret32); PATCH_SITE(pv_cpu_ops, usergs_sysret64); diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c index 0339f5c14bf..7305f7dfc7a 100644 --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -147,7 +147,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, childregs->bp = arg; childregs->orig_ax = -1; childregs->cs = __KERNEL_CS | get_kernel_rpl(); - childregs->flags = X86_EFLAGS_IF | X86_EFLAGS_FIXED; + childregs->flags = X86_EFLAGS_IF | X86_EFLAGS_BIT1; p->fpu_counter = 0; p->thread.io_bitmap_ptr = NULL; memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps)); diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index f99a242730e..355ae06dbf9 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -176,7 +176,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, childregs->bp = arg; childregs->orig_ax = -1; childregs->cs = __KERNEL_CS | get_kernel_rpl(); - childregs->flags = X86_EFLAGS_IF | X86_EFLAGS_FIXED; + childregs->flags = X86_EFLAGS_IF | X86_EFLAGS_BIT1; return 0; } *childregs = *current_pt_regs(); diff --git a/arch/x86/kernel/quirks.c b/arch/x86/kernel/quirks.c index 52dbf1e400d..04ee1e2e4c0 100644 --- a/arch/x86/kernel/quirks.c +++ b/arch/x86/kernel/quirks.c @@ -529,7 +529,7 @@ static void quirk_amd_nb_node(struct pci_dev *dev) return; pci_read_config_dword(nb_ht, 0x60, &val); - node = pcibus_to_node(dev->bus) | (val & 7); + node = val & 7; /* * Some hardware may return an invalid node ID, * so check it first: diff --git a/arch/x86/kernel/resource.c b/arch/x86/kernel/resource.c index 80eab01c1a6..2a26819bb6a 100644 --- a/arch/x86/kernel/resource.c +++ b/arch/x86/kernel/resource.c @@ -37,12 +37,10 @@ static void remove_e820_regions(struct resource *avail) void arch_remove_reservations(struct resource *avail) { - /* - * Trim out BIOS area (high 2MB) and E820 regions. 
We do not remove - * the low 1MB unconditionally, as this area is needed for some ISA - * cards requiring a memory range, e.g. the i82365 PCMCIA controller. - */ + /* Trim out BIOS areas (low 1MB and high 2MB) and E820 regions */ if (avail->flags & IORESOURCE_MEM) { + if (avail->start < BIOS_END) + avail->start = BIOS_END; resource_clip(avail, BIOS_ROM_BASE, BIOS_ROM_END); remove_e820_regions(avail); diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c index 66deef41512..087ab2af381 100644 --- a/arch/x86/kernel/signal.c +++ b/arch/x86/kernel/signal.c @@ -677,11 +677,6 @@ handle_signal(struct ksignal *ksig, struct pt_regs *regs) * handler too. */ regs->flags &= ~X86_EFLAGS_TF; - /* - * Ensure the signal handler starts with the new fpu state. - */ - if (used_math()) - drop_init_fpu(current); } signal_setup_done(failed, ksig, test_thread_flag(TIF_SINGLESTEP)); } diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c index 87084ab90d1..bfd348e9936 100644 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -265,13 +265,6 @@ notrace static void __cpuinit start_secondary(void *unused) check_tsc_sync_target(); /* - * Enable the espfix hack for this CPU - */ -#ifdef CONFIG_X86_ESPFIX64 - init_espfix_ap(); -#endif - - /* * We need to hold vector_lock so there the set of online cpus * does not change while we are assigning vectors to cpus. Holding * this lock ensures we don't half assign or remove an irq from a cpu. @@ -1284,9 +1277,6 @@ static void remove_siblinginfo(int cpu) for_each_cpu(sibling, cpu_sibling_mask(cpu)) cpumask_clear_cpu(cpu, cpu_sibling_mask(sibling)); - for_each_cpu(sibling, cpu_llc_shared_mask(cpu)) - cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling)); - cpumask_clear(cpu_llc_shared_mask(cpu)); cpumask_clear(cpu_sibling_mask(cpu)); cpumask_clear(cpu_core_mask(cpu)); c->phys_proc_id = 0; diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index 4e27ba53c40..098b3cfda72 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -968,17 +968,14 @@ void __init tsc_init(void) x86_init.timers.tsc_pre_init(); - if (!cpu_has_tsc) { - setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); + if (!cpu_has_tsc) return; - } tsc_khz = x86_platform.calibrate_tsc(); cpu_khz = tsc_khz; if (!tsc_khz) { mark_tsc_unstable("could not calculate TSC khz"); - setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); return; } diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c index c52c07efe97..9a907a67be8 100644 --- a/arch/x86/kernel/vsyscall_64.c +++ b/arch/x86/kernel/vsyscall_64.c @@ -125,10 +125,10 @@ static void warn_bad_vsyscall(const char *level, struct pt_regs *regs, if (!show_unhandled_signals) return; - printk_ratelimited("%s%s[%d] %s ip:%lx cs:%lx sp:%lx ax:%lx si:%lx di:%lx\n", - level, current->comm, task_pid_nr(current), - message, regs->ip, regs->cs, - regs->sp, regs->ax, regs->si, regs->di); + pr_notice_ratelimited("%s%s[%d] %s ip:%lx cs:%lx sp:%lx ax:%lx si:%lx di:%lx\n", + level, current->comm, task_pid_nr(current), + message, regs->ip, regs->cs, + regs->sp, regs->ax, regs->si, regs->di); } static int addr_to_vsyscall_nr(unsigned long addr) diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c index 1ee723298e9..ada87a329ed 100644 --- a/arch/x86/kernel/xsave.c +++ b/arch/x86/kernel/xsave.c @@ -268,6 +268,8 @@ int save_xstate_sig(void __user *buf, void __user *buf_fx, int size) if (use_fxsr() && save_xstate_epilog(buf_fx, ia32_fxstate)) return -1; + drop_init_fpu(tsk); /* trigger finit */ + return 0; } @@ 
-398,11 +400,8 @@ int __restore_xstate_sig(void __user *buf, void __user *buf_fx, int size) set_used_math(); } - if (use_eager_fpu()) { - preempt_disable(); + if (use_eager_fpu()) math_state_restore(); - preempt_enable(); - } return err; } else { diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 4c01f022c6a..5484d54582c 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -663,6 +663,11 @@ static void rsp_increment(struct x86_emulate_ctxt *ctxt, int inc) masked_increment(reg_rmw(ctxt, VCPU_REGS_RSP), stack_mask(ctxt), inc); } +static inline void jmp_rel(struct x86_emulate_ctxt *ctxt, int rel) +{ + register_address_increment(ctxt, &ctxt->_eip, rel); +} + static u32 desc_limit_scaled(struct desc_struct *desc) { u32 limit = get_desc_limit(desc); @@ -736,38 +741,6 @@ static int emulate_nm(struct x86_emulate_ctxt *ctxt) return emulate_exception(ctxt, NM_VECTOR, 0, false); } -static inline int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst, - int cs_l) -{ - switch (ctxt->op_bytes) { - case 2: - ctxt->_eip = (u16)dst; - break; - case 4: - ctxt->_eip = (u32)dst; - break; - case 8: - if ((cs_l && is_noncanonical_address(dst)) || - (!cs_l && (dst & ~(u32)-1))) - return emulate_gp(ctxt, 0); - ctxt->_eip = dst; - break; - default: - WARN(1, "unsupported eip assignment size\n"); - } - return X86EMUL_CONTINUE; -} - -static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst) -{ - return assign_eip_far(ctxt, dst, ctxt->mode == X86EMUL_MODE_PROT64); -} - -static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel) -{ - return assign_eip_near(ctxt, ctxt->_eip + rel); -} - static u16 get_segment_selector(struct x86_emulate_ctxt *ctxt, unsigned seg) { u16 selector; @@ -2188,15 +2161,13 @@ static int em_grp45(struct x86_emulate_ctxt *ctxt) case 2: /* call near abs */ { long int old_eip; old_eip = ctxt->_eip; - rc = assign_eip_near(ctxt, ctxt->src.val); - if (rc != X86EMUL_CONTINUE) - break; + ctxt->_eip = ctxt->src.val; ctxt->src.val = old_eip; rc = em_push(ctxt); break; } case 4: /* jmp abs */ - rc = assign_eip_near(ctxt, ctxt->src.val); + ctxt->_eip = ctxt->src.val; break; case 5: /* jmp far */ rc = em_jmp_far(ctxt); @@ -2228,21 +2199,16 @@ static int em_cmpxchg8b(struct x86_emulate_ctxt *ctxt) static int em_ret(struct x86_emulate_ctxt *ctxt) { - int rc; - unsigned long eip; - - rc = emulate_pop(ctxt, &eip, ctxt->op_bytes); - if (rc != X86EMUL_CONTINUE) - return rc; - - return assign_eip_near(ctxt, eip); + ctxt->dst.type = OP_REG; + ctxt->dst.addr.reg = &ctxt->_eip; + ctxt->dst.bytes = ctxt->op_bytes; + return em_pop(ctxt); } static int em_ret_far(struct x86_emulate_ctxt *ctxt) { int rc; unsigned long cs; - int cpl = ctxt->ops->cpl(ctxt); rc = emulate_pop(ctxt, &ctxt->_eip, ctxt->op_bytes); if (rc != X86EMUL_CONTINUE) @@ -2252,9 +2218,6 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt) rc = emulate_pop(ctxt, &cs, ctxt->op_bytes); if (rc != X86EMUL_CONTINUE) return rc; - /* Outer-privilege level return is not implemented */ - if (ctxt->mode >= X86EMUL_MODE_PROT16 && (cs & 3) > cpl) - return X86EMUL_UNHANDLEABLE; rc = load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS); return rc; } @@ -2502,7 +2465,7 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) { const struct x86_emulate_ops *ops = ctxt->ops; struct desc_struct cs, ss; - u64 msr_data, rcx, rdx; + u64 msr_data; int usermode; u16 cs_sel = 0, ss_sel = 0; @@ -2518,9 +2481,6 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) else usermode = X86EMUL_MODE_PROT32; - rcx = 
reg_read(ctxt, VCPU_REGS_RCX); - rdx = reg_read(ctxt, VCPU_REGS_RDX); - cs.dpl = 3; ss.dpl = 3; ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data); @@ -2538,9 +2498,6 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) ss_sel = cs_sel + 8; cs.d = 0; cs.l = 1; - if (is_noncanonical_address(rcx) || - is_noncanonical_address(rdx)) - return emulate_gp(ctxt, 0); break; } cs_sel |= SELECTOR_RPL_MASK; @@ -2549,8 +2506,8 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) ops->set_segment(ctxt, cs_sel, &cs, 0, VCPU_SREG_CS); ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS); - ctxt->_eip = rdx; - *reg_write(ctxt, VCPU_REGS_RSP) = rcx; + ctxt->_eip = reg_read(ctxt, VCPU_REGS_RDX); + *reg_write(ctxt, VCPU_REGS_RSP) = reg_read(ctxt, VCPU_REGS_RCX); return X86EMUL_CONTINUE; } @@ -3089,13 +3046,10 @@ static int em_aad(struct x86_emulate_ctxt *ctxt) static int em_call(struct x86_emulate_ctxt *ctxt) { - int rc; long rel = ctxt->src.val; ctxt->src.val = (unsigned long)ctxt->_eip; - rc = jmp_rel(ctxt, rel); - if (rc != X86EMUL_CONTINUE) - return rc; + jmp_rel(ctxt, rel); return em_push(ctxt); } @@ -3127,12 +3081,11 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt) static int em_ret_near_imm(struct x86_emulate_ctxt *ctxt) { int rc; - unsigned long eip; - rc = emulate_pop(ctxt, &eip, ctxt->op_bytes); - if (rc != X86EMUL_CONTINUE) - return rc; - rc = assign_eip_near(ctxt, eip); + ctxt->dst.type = OP_REG; + ctxt->dst.addr.reg = &ctxt->_eip; + ctxt->dst.bytes = ctxt->op_bytes; + rc = emulate_pop(ctxt, &ctxt->dst.val, ctxt->op_bytes); if (rc != X86EMUL_CONTINUE) return rc; rsp_increment(ctxt, ctxt->src.val); @@ -3422,24 +3375,20 @@ static int em_lmsw(struct x86_emulate_ctxt *ctxt) static int em_loop(struct x86_emulate_ctxt *ctxt) { - int rc = X86EMUL_CONTINUE; - register_address_increment(ctxt, reg_rmw(ctxt, VCPU_REGS_RCX), -1); if ((address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) != 0) && (ctxt->b == 0xe2 || test_cc(ctxt->b ^ 0x5, ctxt->eflags))) - rc = jmp_rel(ctxt, ctxt->src.val); + jmp_rel(ctxt, ctxt->src.val); - return rc; + return X86EMUL_CONTINUE; } static int em_jcxz(struct x86_emulate_ctxt *ctxt) { - int rc = X86EMUL_CONTINUE; - if (address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) == 0) - rc = jmp_rel(ctxt, ctxt->src.val); + jmp_rel(ctxt, ctxt->src.val); - return rc; + return X86EMUL_CONTINUE; } static int em_in(struct x86_emulate_ctxt *ctxt) @@ -4768,7 +4717,7 @@ special_insn: break; case 0x70 ... 0x7f: /* jcc (short) */ if (test_cc(ctxt->b, ctxt->eflags)) - rc = jmp_rel(ctxt, ctxt->src.val); + jmp_rel(ctxt, ctxt->src.val); break; case 0x8d: /* lea r16/r32, m */ ctxt->dst.val = ctxt->src.addr.mem.ea; @@ -4797,7 +4746,7 @@ special_insn: break; case 0xe9: /* jmp rel */ case 0xeb: /* jmp rel short */ - rc = jmp_rel(ctxt, ctxt->src.val); + jmp_rel(ctxt, ctxt->src.val); ctxt->dst.type = OP_NONE; /* Disable writeback. */ break; case 0xf4: /* hlt */ @@ -4909,7 +4858,7 @@ twobyte_insn: break; case 0x80 ... 0x8f: /* jnz rel, etc*/ if (test_cc(ctxt->b, ctxt->eflags)) - rc = jmp_rel(ctxt, ctxt->src.val); + jmp_rel(ctxt, ctxt->src.val); break; case 0x90 ... 
0x9f: /* setcc r/m8 */ ctxt->dst.val = test_cc(ctxt->b, ctxt->eflags); diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c index 298781d4cfb..518d86471b7 100644 --- a/arch/x86/kvm/i8254.c +++ b/arch/x86/kvm/i8254.c @@ -262,10 +262,8 @@ void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu) return; timer = &pit->pit_state.timer; - mutex_lock(&pit->pit_state.lock); if (hrtimer_cancel(timer)) hrtimer_start_expires(timer, HRTIMER_MODE_ABS); - mutex_unlock(&pit->pit_state.lock); } static void destroy_pit_timer(struct kvm_pit *pit) diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c index 3ec38cb56bd..484bc874688 100644 --- a/arch/x86/kvm/irq.c +++ b/arch/x86/kvm/irq.c @@ -108,7 +108,7 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v) vector = kvm_cpu_get_extint(v); - if (vector != -1) + if (kvm_apic_vid_enabled(v->kvm) || vector != -1) return vector; /* PIC */ return kvm_get_apic_interrupt(v); /* APIC */ diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 681e4e251f0..61d9fed5eb3 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -362,90 +362,31 @@ static inline int apic_find_highest_irr(struct kvm_lapic *apic) static inline void apic_clear_irr(int vec, struct kvm_lapic *apic) { - struct kvm_vcpu *vcpu; - - vcpu = apic->vcpu; - + apic->irr_pending = false; apic_clear_vector(vec, apic->regs + APIC_IRR); - if (unlikely(kvm_apic_vid_enabled(vcpu->kvm))) - /* try to update RVI */ - kvm_make_request(KVM_REQ_EVENT, vcpu); - else { - vec = apic_search_irr(apic); - apic->irr_pending = (vec != -1); - } + if (apic_search_irr(apic) != -1) + apic->irr_pending = true; } static inline void apic_set_isr(int vec, struct kvm_lapic *apic) { - struct kvm_vcpu *vcpu; - - if (__apic_test_and_set_vector(vec, apic->regs + APIC_ISR)) - return; - - vcpu = apic->vcpu; - - /* - * With APIC virtualization enabled, all caching is disabled - * because the processor can modify ISR under the hood. Instead - * just set SVI. - */ - if (unlikely(kvm_apic_vid_enabled(vcpu->kvm))) - kvm_x86_ops->hwapic_isr_update(vcpu->kvm, vec); - else { + if (!__apic_test_and_set_vector(vec, apic->regs + APIC_ISR)) ++apic->isr_count; - BUG_ON(apic->isr_count > MAX_APIC_VECTOR); - /* - * ISR (in service register) bit is set when injecting an interrupt. - * The highest vector is injected. Thus the latest bit set matches - * the highest bit in ISR. - */ - apic->highest_isr_cache = vec; - } -} - -static inline int apic_find_highest_isr(struct kvm_lapic *apic) -{ - int result; - + BUG_ON(apic->isr_count > MAX_APIC_VECTOR); /* - * Note that isr_count is always 1, and highest_isr_cache - * is always -1, with APIC virtualization enabled. + * ISR (in service register) bit is set when injecting an interrupt. + * The highest vector is injected. Thus the latest bit set matches + * the highest bit in ISR. */ - if (!apic->isr_count) - return -1; - if (likely(apic->highest_isr_cache != -1)) - return apic->highest_isr_cache; - - result = find_highest_vector(apic->regs + APIC_ISR); - ASSERT(result == -1 || result >= 16); - - return result; + apic->highest_isr_cache = vec; } static inline void apic_clear_isr(int vec, struct kvm_lapic *apic) { - struct kvm_vcpu *vcpu; - if (!__apic_test_and_clear_vector(vec, apic->regs + APIC_ISR)) - return; - - vcpu = apic->vcpu; - - /* - * We do get here for APIC virtualization enabled if the guest - * uses the Hyper-V APIC enlightenment. 
In this case we may need - * to trigger a new interrupt delivery by writing the SVI field; - * on the other hand isr_count and highest_isr_cache are unused - * and must be left alone. - */ - if (unlikely(kvm_apic_vid_enabled(vcpu->kvm))) - kvm_x86_ops->hwapic_isr_update(vcpu->kvm, - apic_find_highest_isr(apic)); - else { + if (__apic_test_and_clear_vector(vec, apic->regs + APIC_ISR)) --apic->isr_count; - BUG_ON(apic->isr_count < 0); - apic->highest_isr_cache = -1; - } + BUG_ON(apic->isr_count < 0); + apic->highest_isr_cache = -1; } int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu) @@ -525,6 +466,22 @@ static void pv_eoi_clr_pending(struct kvm_vcpu *vcpu) __clear_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention); } +static inline int apic_find_highest_isr(struct kvm_lapic *apic) +{ + int result; + + /* Note that isr_count is always 1 with vid enabled */ + if (!apic->isr_count) + return -1; + if (likely(apic->highest_isr_cache != -1)) + return apic->highest_isr_cache; + + result = find_highest_vector(apic->regs + APIC_ISR); + ASSERT(result == -1 || result >= 16); + + return result; +} + void kvm_apic_update_tmr(struct kvm_vcpu *vcpu, u32 *tmr) { struct kvm_lapic *apic = vcpu->arch.apic; @@ -1665,13 +1622,6 @@ int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu) if (vector == -1) return -1; - /* - * We get here even with APIC virtualization enabled, if doing - * nested virtualization and L1 runs with the "acknowledge interrupt - * on exit" mode. Then we cannot inject the interrupt via RVI, - * because the process would deliver it through the IDT. - */ - apic_set_isr(vector, apic); apic_update_ppr(apic); apic_clear_irr(vector, apic); diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index e14b1f8667b..004cc87b781 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -2585,9 +2585,6 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write, int emulate = 0; gfn_t pseudo_gfn; - if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) - return 0; - for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) { if (iterator.level == level) { mmu_set_spte(vcpu, iterator.sptep, ACC_ALL, @@ -2751,9 +2748,6 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level, bool ret = false; u64 spte = 0ull; - if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) - return false; - if (!page_fault_can_be_fast(vcpu, error_code)) return false; @@ -3072,7 +3066,7 @@ static void mmu_sync_roots(struct kvm_vcpu *vcpu) if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) return; - vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY); + vcpu_clear_mmio_info(vcpu, ~0ul); kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC); if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) { hpa_t root = vcpu->arch.mmu.root_hpa; @@ -3145,9 +3139,6 @@ static u64 walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr) struct kvm_shadow_walk_iterator iterator; u64 spte = 0ull; - if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) - return spte; - walk_shadow_page_lockless_begin(vcpu); for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) if (!is_shadow_present_pte(spte)) @@ -4338,9 +4329,6 @@ int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]) u64 spte; int nr_sptes = 0; - if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) - return nr_sptes; - walk_shadow_page_lockless_begin(vcpu); for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) { sptes[iterator.level-1] = spte; diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h index 7e6090e1323..da20860b457 100644 --- a/arch/x86/kvm/paging_tmpl.h +++ 
b/arch/x86/kvm/paging_tmpl.h @@ -423,9 +423,6 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr, if (FNAME(gpte_changed)(vcpu, gw, top_level)) goto out_gpte_changed; - if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) - goto out_gpte_changed; - for (shadow_walk_init(&it, vcpu, addr); shadow_walk_okay(&it) && it.level > gw->level; shadow_walk_next(&it)) { @@ -674,11 +671,6 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva) */ mmu_topup_memory_caches(vcpu); - if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) { - WARN_ON(1); - return; - } - spin_lock(&vcpu->kvm->mmu_lock); for_each_shadow_entry(vcpu, gva, iterator) { level = iterator.level; diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 8bf40a243d7..a14a6eaf871 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -2985,8 +2985,10 @@ static int cr8_write_interception(struct vcpu_svm *svm) u8 cr8_prev = kvm_get_cr8(&svm->vcpu); /* instruction emulation calls kvm_set_cr8() */ r = cr_interception(svm); - if (irqchip_in_kernel(svm->vcpu.kvm)) + if (irqchip_in_kernel(svm->vcpu.kvm)) { + clr_cr_intercept(svm, INTERCEPT_CR8_WRITE); return r; + } if (cr8_prev <= kvm_get_cr8(&svm->vcpu)) return r; kvm_run->exit_reason = KVM_EXIT_SET_TPR; @@ -3196,7 +3198,7 @@ static int wrmsr_interception(struct vcpu_svm *svm) msr.host_initiated = false; svm->next_rip = kvm_rip_read(&svm->vcpu) + 2; - if (kvm_set_msr(&svm->vcpu, &msr)) { + if (svm_set_msr(&svm->vcpu, &msr)) { trace_kvm_msr_write_ex(ecx, data); kvm_inject_gp(&svm->vcpu, 0); } else { @@ -3478,9 +3480,9 @@ static int handle_exit(struct kvm_vcpu *vcpu) if (exit_code >= ARRAY_SIZE(svm_exit_handlers) || !svm_exit_handlers[exit_code]) { - WARN_ONCE(1, "vmx: unexpected exit reason 0x%x\n", exit_code); - kvm_queue_exception(vcpu, UD_VECTOR); - return 1; + kvm_run->exit_reason = KVM_EXIT_UNKNOWN; + kvm_run->hw.hardware_exit_reason = exit_code; + return 0; } return svm_exit_handlers[exit_code](svm); @@ -3548,8 +3550,6 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) if (is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK)) return; - clr_cr_intercept(svm, INTERCEPT_CR8_WRITE); - if (irr == -1) return; diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 51139ff3491..5402c94ab76 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -2493,15 +2493,12 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) break; msr = find_msr_entry(vmx, msr_index); if (msr) { - u64 old_msr_data = msr->data; msr->data = data; if (msr - vmx->guest_msrs < vmx->save_nmsrs) { preempt_disable(); - ret = kvm_set_shared_msr(msr->index, msr->data, - msr->mask); + kvm_set_shared_msr(msr->index, msr->data, + msr->mask); preempt_enable(); - if (ret) - msr->data = old_msr_data; } break; } @@ -5065,7 +5062,7 @@ static int handle_wrmsr(struct kvm_vcpu *vcpu) msr.data = data; msr.index = ecx; msr.host_initiated = false; - if (kvm_set_msr(vcpu, &msr) != 0) { + if (vmx_set_msr(vcpu, &msr) != 0) { trace_kvm_msr_write_ex(ecx, data); kvm_inject_gp(vcpu, 0); return 1; @@ -6654,10 +6651,10 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu) && kvm_vmx_exit_handlers[exit_reason]) return kvm_vmx_exit_handlers[exit_reason](vcpu); else { - WARN_ONCE(1, "vmx: unexpected exit reason 0x%x\n", exit_reason); - kvm_queue_exception(vcpu, UD_VECTOR); - return 1; + vcpu->run->exit_reason = KVM_EXIT_UNKNOWN; + vcpu->run->hw.hardware_exit_reason = exit_reason; } + return 0; } static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) @@ -7136,8 +7133,8 @@ static 
void vmx_free_vcpu(struct kvm_vcpu *vcpu) struct vcpu_vmx *vmx = to_vmx(vcpu); free_vpid(vmx); - free_loaded_vmcs(vmx->loaded_vmcs); free_nested(vmx); + free_loaded_vmcs(vmx->loaded_vmcs); kfree(vmx->guest_msrs); kvm_vcpu_uninit(vcpu); kmem_cache_free(kvm_vcpu_cache, vmx); @@ -7952,7 +7949,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu, kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->host_rsp); kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip); - vmx_set_rflags(vcpu, X86_EFLAGS_FIXED); + vmx_set_rflags(vcpu, X86_EFLAGS_BIT1); /* * Note that calling vmx_set_cr0 is important, even if cr0 hasn't * actually changed, because it depends on the current state of diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 684f46dc87d..1be0a9e75d1 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -225,25 +225,20 @@ static void kvm_shared_msr_cpu_online(void) shared_msr_update(i, shared_msrs_global.msrs[i]); } -int kvm_set_shared_msr(unsigned slot, u64 value, u64 mask) +void kvm_set_shared_msr(unsigned slot, u64 value, u64 mask) { unsigned int cpu = smp_processor_id(); struct kvm_shared_msrs *smsr = per_cpu_ptr(shared_msrs, cpu); - int err; if (((value ^ smsr->values[slot].curr) & mask) == 0) - return 0; + return; smsr->values[slot].curr = value; - err = wrmsrl_safe(shared_msrs_global.msrs[slot], value); - if (err) - return 1; - + wrmsrl(shared_msrs_global.msrs[slot], value); if (!smsr->registered) { smsr->urn.on_user_return = kvm_on_user_return; user_return_notifier_register(&smsr->urn); smsr->registered = true; } - return 0; } EXPORT_SYMBOL_GPL(kvm_set_shared_msr); @@ -925,6 +920,7 @@ void kvm_enable_efer_bits(u64 mask) } EXPORT_SYMBOL_GPL(kvm_enable_efer_bits); + /* * Writes msr value into into the appropriate "register". * Returns 0 on success, non-0 otherwise. @@ -932,34 +928,8 @@ EXPORT_SYMBOL_GPL(kvm_enable_efer_bits); */ int kvm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { - switch (msr->index) { - case MSR_FS_BASE: - case MSR_GS_BASE: - case MSR_KERNEL_GS_BASE: - case MSR_CSTAR: - case MSR_LSTAR: - if (is_noncanonical_address(msr->data)) - return 1; - break; - case MSR_IA32_SYSENTER_EIP: - case MSR_IA32_SYSENTER_ESP: - /* - * IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if - * non-canonical address is written on Intel but not on - * AMD (which ignores the top 32-bits, because it does - * not implement 64-bit SYSENTER). - * - * 64-bit code should hence be able to write a non-canonical - * value on AMD. Making the address canonical ensures that - * vmentry does not fail on Intel after writing a non-canonical - * value, and that something deterministic happens if the guest - * invokes 64-bit SYSENTER. 
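The canonicalization this comment refers to is plain sign extension: with 48-bit virtual addresses, bits 48..63 must equal bit 47, and the helpers dropped elsewhere in this revert (get_canonical()/is_noncanonical_address() in kvm_host.h) implement exactly that with a shift pair. A self-contained sketch of the same arithmetic with a worked value (hypothetical standalone code, not the KVM helpers themselves):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sign-extend bit 47 into bits 48..63, mirroring the removed helper. */
static uint64_t canonical(uint64_t la)
{
    return (uint64_t)((int64_t)(la << 16) >> 16);
}

static bool noncanonical(uint64_t la)
{
    return canonical(la) != la;
}

int main(void)
{
    /* 0x0000800000000000 sets bit 47 but not bits 48..63, so it is
     * non-canonical; canonicalizing it yields 0xffff800000000000. */
    uint64_t la = 0x0000800000000000ULL;

    printf("noncanonical=%d canonical=%#llx\n",
           noncanonical(la), (unsigned long long)canonical(la));
    return 0;
}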
- */ - msr->data = get_canonical(msr->data); - } return kvm_x86_ops->set_msr(vcpu, msr); } -EXPORT_SYMBOL_GPL(kvm_set_msr); /* * Adapt set_msr() to msr_io()'s calling convention @@ -1226,37 +1196,20 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr) elapsed = ns - kvm->arch.last_tsc_nsec; if (vcpu->arch.virtual_tsc_khz) { - int faulted = 0; - /* n.b - signed multiplication and division required */ usdiff = data - kvm->arch.last_tsc_write; #ifdef CONFIG_X86_64 usdiff = (usdiff * 1000) / vcpu->arch.virtual_tsc_khz; #else /* do_div() only does unsigned */ - asm("1: idivl %[divisor]\n" - "2: xor %%edx, %%edx\n" - " movl $0, %[faulted]\n" - "3:\n" - ".section .fixup,\"ax\"\n" - "4: movl $1, %[faulted]\n" - " jmp 3b\n" - ".previous\n" - - _ASM_EXTABLE(1b, 4b) - - : "=A"(usdiff), [faulted] "=r" (faulted) - : "A"(usdiff * 1000), [divisor] "rm"(vcpu->arch.virtual_tsc_khz)); - + asm("idivl %2; xor %%edx, %%edx" + : "=A"(usdiff) + : "A"(usdiff * 1000), "rm"(vcpu->arch.virtual_tsc_khz)); #endif do_div(elapsed, 1000); usdiff -= elapsed; if (usdiff < 0) usdiff = -usdiff; - - /* idivl overflow => difference is larger than USEC_PER_SEC */ - if (faulted) - usdiff = USEC_PER_SEC; } else usdiff = USEC_PER_SEC; /* disable TSC match window below */ diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 7626d3efa06..3186542f2fa 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -78,23 +78,15 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu, vcpu->arch.mmio_gva = gva & PAGE_MASK; vcpu->arch.access = access; vcpu->arch.mmio_gfn = gfn; - vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation; -} - -static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu) -{ - return vcpu->arch.mmio_gen == kvm_memslots(vcpu->kvm)->generation; } /* - * Clear the mmio cache info for the given gva. If gva is MMIO_GVA_ANY, we - * clear all mmio cache info. + * Clear the mmio cache info for the given gva, + * specially, if gva is ~0ul, we clear all mmio cache info. 
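In the x86.c hunk above, kvm_write_tsc() also loses its exception-table fixup and goes back to a bare 32-bit idivl. The point of the dropped fixup was that IDIV raises #DE not only for a zero divisor but also when the quotient does not fit in the destination, which can happen here because the dividend is usdiff * 1000; on a fault the removed code clamped usdiff to USEC_PER_SEC. A portable sketch of the same guard (a hypothetical helper, not the kernel's asm fixup):

#include <stdbool.h>
#include <stdint.h>

/*
 * Returns false in the cases where a 64/32 -> 32 bit IDIV would fault:
 * zero divisor, or a quotient that does not fit in 32 bits.
 */
static bool sdiv64_by_32(int64_t dividend, int32_t divisor, int32_t *quot)
{
    int64_t q;

    if (divisor == 0 || (dividend == INT64_MIN && divisor == -1))
        return false;
    q = dividend / divisor;
    if (q > INT32_MAX || q < INT32_MIN)
        return false;
    *quot = (int32_t)q;
    return true;
}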
*/ -#define MMIO_GVA_ANY (~(gva_t)0) - static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva) { - if (gva != MMIO_GVA_ANY && vcpu->arch.mmio_gva != (gva & PAGE_MASK)) + if (gva != (~0ul) && vcpu->arch.mmio_gva != (gva & PAGE_MASK)) return; vcpu->arch.mmio_gva = 0; @@ -102,8 +94,7 @@ static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva) static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva) { - if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gva && - vcpu->arch.mmio_gva == (gva & PAGE_MASK)) + if (vcpu->arch.mmio_gva && vcpu->arch.mmio_gva == (gva & PAGE_MASK)) return true; return false; @@ -111,8 +102,7 @@ static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva) static inline bool vcpu_match_mmio_gpa(struct kvm_vcpu *vcpu, gpa_t gpa) { - if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gfn && - vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT) + if (vcpu->arch.mmio_gfn && vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT) return true; return false; diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index e04e6775323..0002a3a3308 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -30,13 +30,11 @@ struct pg_state { unsigned long start_address; unsigned long current_address; const struct addr_marker *marker; - unsigned long lines; }; struct addr_marker { unsigned long start_address; const char *name; - unsigned long max_lines; }; /* indices for address_markers; keep sync'd w/ address_markers below */ @@ -47,7 +45,6 @@ enum address_markers_idx { LOW_KERNEL_NR, VMALLOC_START_NR, VMEMMAP_START_NR, - ESPFIX_START_NR, HIGH_KERNEL_NR, MODULES_VADDR_NR, MODULES_END_NR, @@ -70,7 +67,6 @@ static struct addr_marker address_markers[] = { { PAGE_OFFSET, "Low Kernel Mapping" }, { VMALLOC_START, "vmalloc() Area" }, { VMEMMAP_START, "Vmemmap" }, - { ESPFIX_BASE_ADDR, "ESPfix Area", 16 }, { __START_KERNEL_map, "High Kernel Mapping" }, { MODULES_VADDR, "Modules" }, { MODULES_END, "End Modules" }, @@ -167,7 +163,7 @@ static void note_page(struct seq_file *m, struct pg_state *st, pgprot_t new_prot, int level) { pgprotval_t prot, cur; - static const char units[] = "BKMGTPE"; + static const char units[] = "KMGTPE"; /* * If we have a "break" in the series, we need to flush the state that @@ -182,7 +178,6 @@ static void note_page(struct seq_file *m, struct pg_state *st, st->current_prot = new_prot; st->level = level; st->marker = address_markers; - st->lines = 0; seq_printf(m, "---[ %s ]---\n", st->marker->name); } else if (prot != cur || level != st->level || st->current_address >= st->marker[1].start_address) { @@ -193,21 +188,17 @@ static void note_page(struct seq_file *m, struct pg_state *st, /* * Now print the actual finished series */ - if (!st->marker->max_lines || - st->lines < st->marker->max_lines) { - seq_printf(m, "0x%0*lx-0x%0*lx ", - width, st->start_address, - width, st->current_address); - - delta = (st->current_address - st->start_address); - while (!(delta & 1023) && unit[1]) { - delta >>= 10; - unit++; - } - seq_printf(m, "%9lu%c ", delta, *unit); - printk_prot(m, st->current_prot, st->level); + seq_printf(m, "0x%0*lx-0x%0*lx ", + width, st->start_address, + width, st->current_address); + + delta = (st->current_address - st->start_address) >> 10; + while (!(delta & 1023) && unit[1]) { + delta >>= 10; + unit++; } - st->lines++; + seq_printf(m, "%9lu%c ", delta, *unit); + printk_prot(m, st->current_prot, st->level); /* * We print markers for special areas of address space, 
@@ -215,15 +206,7 @@ static void note_page(struct seq_file *m, struct pg_state *st, * This helps in the interpretation. */ if (st->current_address >= st->marker[1].start_address) { - if (st->marker->max_lines && - st->lines > st->marker->max_lines) { - unsigned long nskip = - st->lines - st->marker->max_lines; - seq_printf(m, "... %lu entr%s skipped ... \n", - nskip, nskip == 1 ? "y" : "ies"); - } st->marker++; - st->lines = 0; seq_printf(m, "---[ %s ]---\n", st->marker->name); } diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 86c758de4b3..9a1e6583910 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -50,21 +50,6 @@ int ioremap_change_attr(unsigned long vaddr, unsigned long size, return err; } -static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages, - void *arg) -{ - unsigned long i; - - for (i = 0; i < nr_pages; ++i) - if (pfn_valid(start_pfn + i) && - !PageReserved(pfn_to_page(start_pfn + i))) - return 1; - - WARN_ONCE(1, "ioremap on RAM pfn 0x%lx\n", start_pfn); - - return 0; -} - /* * Remap an arbitrary physical address space into the kernel virtual * address space. Needed when the kernel wants to access high addresses @@ -108,11 +93,14 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr, /* * Don't allow anybody to remap normal RAM that we're using.. */ - pfn = phys_addr >> PAGE_SHIFT; last_pfn = last_addr >> PAGE_SHIFT; - if (walk_system_ram_range(pfn, last_pfn - pfn + 1, NULL, - __ioremap_check_ram) == 1) - return NULL; + for (pfn = phys_addr >> PAGE_SHIFT; pfn <= last_pfn; pfn++) { + int is_ram = page_is_ram(pfn); + + if (is_ram && pfn_valid(pfn) && !PageReserved(pfn_to_page(pfn))) + return NULL; + WARN_ON_ONCE(is_ram); + } /* * Mappings have to be page-aligned diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index aabdf762f59..bb32480c2d7 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -389,7 +389,7 @@ phys_addr_t slow_virt_to_phys(void *__virt_addr) psize = page_level_size(level); pmask = page_level_mask(level); offset = virt_addr & ~pmask; - phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT; + phys_addr = pte_pfn(*pte) << PAGE_SHIFT; return (phys_addr | offset); } EXPORT_SYMBOL_GPL(slow_virt_to_phys); diff --git a/arch/x86/net/bpf_jit.S b/arch/x86/net/bpf_jit.S index 01495755701..877b9a1b215 100644 --- a/arch/x86/net/bpf_jit.S +++ b/arch/x86/net/bpf_jit.S @@ -140,7 +140,7 @@ bpf_slow_path_byte_msh: push %r9; \ push SKBDATA; \ /* rsi already has offset */ \ - mov $SIZE,%edx; /* size */ \ + mov $SIZE,%ecx; /* size */ \ call bpf_internal_load_pointer_neg_helper; \ test %rax,%rax; \ pop SKBDATA; \ diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c index 2883f084020..94919e307f8 100644 --- a/arch/x86/pci/i386.c +++ b/arch/x86/pci/i386.c @@ -162,10 +162,6 @@ pcibios_align_resource(void *data, const struct resource *res, return start; if (start & 0x300) start = (start + 0x3ff) & ~0x3ff; - } else if (res->flags & IORESOURCE_MEM) { - /* The low 1MB range is reserved for ISA cards */ - if (start < BIOS_END) - start = BIOS_END; } return start; } diff --git a/arch/x86/syscalls/syscall_64.tbl b/arch/x86/syscalls/syscall_64.tbl index 63a899304d2..38ae65dfd14 100644 --- a/arch/x86/syscalls/syscall_64.tbl +++ b/arch/x86/syscalls/syscall_64.tbl @@ -212,10 +212,10 @@ 203 common sched_setaffinity sys_sched_setaffinity 204 common sched_getaffinity sys_sched_getaffinity 205 64 set_thread_area -206 64 io_setup sys_io_setup +206 common io_setup sys_io_setup 207 common io_destroy 
sys_io_destroy 208 common io_getevents sys_io_getevents -209 64 io_submit sys_io_submit +209 common io_submit sys_io_submit 210 common io_cancel sys_io_cancel 211 64 get_thread_area 212 common lookup_dcookie sys_lookup_dcookie @@ -356,5 +356,3 @@ 540 x32 process_vm_writev compat_sys_process_vm_writev 541 x32 setsockopt compat_sys_setsockopt 542 x32 getsockopt compat_sys_getsockopt -543 x32 io_setup compat_sys_io_setup -544 x32 io_submit compat_sys_io_submit diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h index 385efb23ddc..d7546c94da5 100644 --- a/arch/xtensa/include/asm/pgtable.h +++ b/arch/xtensa/include/asm/pgtable.h @@ -68,12 +68,7 @@ #define VMALLOC_START 0xC0000000 #define VMALLOC_END 0xC7FEFFFF #define TLBTEMP_BASE_1 0xC7FF0000 -#define TLBTEMP_BASE_2 (TLBTEMP_BASE_1 + DCACHE_WAY_SIZE) -#if 2 * DCACHE_WAY_SIZE > ICACHE_WAY_SIZE -#define TLBTEMP_SIZE (2 * DCACHE_WAY_SIZE) -#else -#define TLBTEMP_SIZE ICACHE_WAY_SIZE -#endif +#define TLBTEMP_BASE_2 0xC7FF8000 /* * Xtensa Linux config PTE layout (when present): diff --git a/arch/xtensa/include/asm/uaccess.h b/arch/xtensa/include/asm/uaccess.h index c7211e7e182..fd686dc45d1 100644 --- a/arch/xtensa/include/asm/uaccess.h +++ b/arch/xtensa/include/asm/uaccess.h @@ -52,12 +52,7 @@ */ .macro get_fs ad, sp GET_CURRENT(\ad,\sp) -#if THREAD_CURRENT_DS > 1020 - addi \ad, \ad, TASK_THREAD - l32i \ad, \ad, THREAD_CURRENT_DS - TASK_THREAD -#else l32i \ad, \ad, THREAD_CURRENT_DS -#endif .endm /* diff --git a/arch/xtensa/include/uapi/asm/ioctls.h b/arch/xtensa/include/uapi/asm/ioctls.h index a47909f0c34..b4cb1100c0f 100644 --- a/arch/xtensa/include/uapi/asm/ioctls.h +++ b/arch/xtensa/include/uapi/asm/ioctls.h @@ -28,17 +28,17 @@ #define TCSETSW 0x5403 #define TCSETSF 0x5404 -#define TCGETA 0x80127417 /* _IOR('t', 23, struct termio) */ -#define TCSETA 0x40127418 /* _IOW('t', 24, struct termio) */ -#define TCSETAW 0x40127419 /* _IOW('t', 25, struct termio) */ -#define TCSETAF 0x4012741C /* _IOW('t', 28, struct termio) */ +#define TCGETA _IOR('t', 23, struct termio) +#define TCSETA _IOW('t', 24, struct termio) +#define TCSETAW _IOW('t', 25, struct termio) +#define TCSETAF _IOW('t', 28, struct termio) #define TCSBRK _IO('t', 29) #define TCXONC _IO('t', 30) #define TCFLSH _IO('t', 31) -#define TIOCSWINSZ 0x40087467 /* _IOW('t', 103, struct winsize) */ -#define TIOCGWINSZ 0x80087468 /* _IOR('t', 104, struct winsize) */ +#define TIOCSWINSZ _IOW('t', 103, struct winsize) +#define TIOCGWINSZ _IOR('t', 104, struct winsize) #define TIOCSTART _IO('t', 110) /* start output, like ^Q */ #define TIOCSTOP _IO('t', 111) /* stop output, like ^S */ #define TIOCOUTQ _IOR('t', 115, int) /* output queue size */ @@ -88,6 +88,7 @@ #define TIOCSETD _IOW('T', 35, int) #define TIOCGETD _IOR('T', 36, int) #define TCSBRKP _IOW('T', 37, int) /* Needed for POSIX tcsendbreak()*/ +#define TIOCTTYGSTRUCT _IOR('T', 38, struct tty_struct) /* For debugging only*/ #define TIOCSBRK _IO('T', 39) /* BSD compatibility */ #define TIOCCBRK _IO('T', 40) /* BSD compatibility */ #define TIOCGSID _IOR('T', 41, pid_t) /* Return the session ID of FD*/ @@ -113,10 +114,8 @@ #define TIOCSERGETLSR _IOR('T', 89, unsigned int) /* Get line status reg. 
*/ /* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */ # define TIOCSER_TEMT 0x01 /* Transmitter physically empty */ -#define TIOCSERGETMULTI 0x80a8545a /* Get multiport config */ - /* _IOR('T', 90, struct serial_multiport_struct) */ -#define TIOCSERSETMULTI 0x40a8545b /* Set multiport config */ - /* _IOW('T', 91, struct serial_multiport_struct) */ +#define TIOCSERGETMULTI _IOR('T', 90, struct serial_multiport_struct) /* Get multiport config */ +#define TIOCSERSETMULTI _IOW('T', 91, struct serial_multiport_struct) /* Set multiport config */ #define TIOCMIWAIT _IO('T', 92) /* wait for a change on serial input line(s) */ #define TIOCGICOUNT 0x545D /* read serial port inline interrupt counts */ diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S index 6e53174f855..aa7f9add7d7 100644 --- a/arch/xtensa/kernel/entry.S +++ b/arch/xtensa/kernel/entry.S @@ -1121,8 +1121,9 @@ ENTRY(fast_syscall_xtensa) movi a7, 4 # sizeof(unsigned int) access_ok a3, a7, a0, a2, .Leac # a0: scratch reg, a2: sp - _bgeui a6, SYS_XTENSA_COUNT, .Lill - _bnei a6, SYS_XTENSA_ATOMIC_CMP_SWP, .Lnswp + addi a6, a6, -1 # assuming SYS_XTENSA_ATOMIC_SET = 1 + _bgeui a6, SYS_XTENSA_COUNT - 1, .Lill + _bnei a6, SYS_XTENSA_ATOMIC_CMP_SWP - 1, .Lnswp /* Fall through for ATOMIC_CMP_SWP. */ @@ -1134,26 +1135,27 @@ TRY s32i a5, a3, 0 # different, modify value l32i a7, a2, PT_AREG7 # restore a7 l32i a0, a2, PT_AREG0 # restore a0 movi a2, 1 # and return 1 + addi a6, a6, 1 # restore a6 (really necessary?) rfe 1: l32i a7, a2, PT_AREG7 # restore a7 l32i a0, a2, PT_AREG0 # restore a0 movi a2, 0 # return 0 (note that we cannot set + addi a6, a6, 1 # restore a6 (really necessary?) rfe .Lnswp: /* Atomic set, add, and exg_add. */ TRY l32i a7, a3, 0 # orig - addi a6, a6, -SYS_XTENSA_ATOMIC_SET add a0, a4, a7 # + arg moveqz a0, a4, a6 # set - addi a6, a6, SYS_XTENSA_ATOMIC_SET TRY s32i a0, a3, 0 # write new value mov a0, a2 mov a2, a7 l32i a7, a0, PT_AREG7 # restore a7 l32i a0, a0, PT_AREG0 # restore a0 + addi a6, a6, 1 # restore a6 (really necessary?) rfe CATCH @@ -1162,7 +1164,7 @@ CATCH movi a2, -EFAULT rfe -.Lill: l32i a7, a2, PT_AREG7 # restore a7 +.Lill: l32i a7, a2, PT_AREG0 # restore a7 l32i a0, a2, PT_AREG0 # restore a0 movi a2, -EINVAL rfe @@ -1701,7 +1703,7 @@ ENTRY(fast_second_level_miss) rsr a0, excvaddr bltu a0, a3, 2f - addi a1, a0, -TLBTEMP_SIZE + addi a1, a0, -(2 << (DCACHE_ALIAS_ORDER + PAGE_SHIFT)) bgeu a1, a3, 2f /* Check if we have to restore an ITLB mapping. */ @@ -1959,6 +1961,7 @@ ENTRY(_switch_to) entry a1, 16 + mov a10, a2 # preserve 'prev' (a2) mov a11, a3 # and 'next' (a3) l32i a4, a2, TASK_THREAD_INFO @@ -1966,14 +1969,8 @@ ENTRY(_switch_to) save_xtregs_user a4 a6 a8 a9 a12 a13 THREAD_XTREGS_USER -#if THREAD_RA > 1020 || THREAD_SP > 1020 - addi a10, a2, TASK_THREAD - s32i a0, a10, THREAD_RA - TASK_THREAD # save return address - s32i a1, a10, THREAD_SP - TASK_THREAD # save stack pointer -#else - s32i a0, a2, THREAD_RA # save return address - s32i a1, a2, THREAD_SP # save stack pointer -#endif + s32i a0, a10, THREAD_RA # save return address + s32i a1, a10, THREAD_SP # save stack pointer /* Disable ints while we manipulate the stack pointer. 
*/ @@ -2014,6 +2011,7 @@ ENTRY(_switch_to) load_xtregs_user a5 a6 a8 a9 a12 a13 THREAD_XTREGS_USER wsr a14, ps + mov a2, a10 # return 'prev' rsync retw diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c index e8b76b8e4b2..2d9cc6dbfd7 100644 --- a/arch/xtensa/kernel/pci-dma.c +++ b/arch/xtensa/kernel/pci-dma.c @@ -49,8 +49,9 @@ dma_alloc_coherent(struct device *dev,size_t size,dma_addr_t *handle,gfp_t flag) /* We currently don't support coherent memory outside KSEG */ - BUG_ON(ret < XCHAL_KSEG_CACHED_VADDR || - ret > XCHAL_KSEG_CACHED_VADDR + XCHAL_KSEG_SIZE - 1); + if (ret < XCHAL_KSEG_CACHED_VADDR + || ret >= XCHAL_KSEG_CACHED_VADDR + XCHAL_KSEG_SIZE) + BUG(); if (ret != 0) { @@ -67,11 +68,10 @@ EXPORT_SYMBOL(dma_alloc_coherent); void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle) { - unsigned long addr = (unsigned long)vaddr + - XCHAL_KSEG_CACHED_VADDR - XCHAL_KSEG_BYPASS_VADDR; + long addr=(long)vaddr+XCHAL_KSEG_CACHED_VADDR-XCHAL_KSEG_BYPASS_VADDR; - BUG_ON(addr < XCHAL_KSEG_CACHED_VADDR || - addr > XCHAL_KSEG_CACHED_VADDR + XCHAL_KSEG_SIZE - 1); + if (addr < 0 || addr >= XCHAL_KSEG_SIZE) + BUG(); free_pages(addr, get_order(size)); } diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 1ff8e97f853..e8918ffaf96 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -876,20 +876,6 @@ void blkcg_drain_queue(struct request_queue *q) { lockdep_assert_held(q->queue_lock); - /* - * @q could be exiting and already have destroyed all blkgs as - * indicated by NULL root_blkg. If so, don't confuse policies. - */ - if (!q->root_blkg) - return; - - /* - * @q could be exiting and already have destroyed all blkgs as - * indicated by NULL root_blkg. If so, don't confuse policies. - */ - if (!q->root_blkg) - return; - blk_throtl_drain(q); } diff --git a/block/blk-core.c b/block/blk-core.c index 5a750b18172..2c66daba44d 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -2299,7 +2299,7 @@ bool blk_update_request(struct request *req, int error, unsigned int nr_bytes) if (!req->bio) return false; - trace_block_rq_complete(req->q, req, nr_bytes); + trace_block_rq_complete(req->q, req); /* * For fs requests, rq is just carrier of independent bio's diff --git a/block/blk-settings.c b/block/blk-settings.c index ec00a0f7521..53309333c2f 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -553,7 +553,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, bottom = max(b->physical_block_size, b->io_min) + alignment; /* Verify that top and bottom intervals line up */ - if (max(top, bottom) % min(top, bottom)) { + if (max(top, bottom) & (min(top, bottom) - 1)) { t->misaligned = 1; ret = -1; } @@ -594,7 +594,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, /* Find lowest common alignment_offset */ t->alignment_offset = lcm(t->alignment_offset, alignment) - % max(t->physical_block_size, t->io_min); + & (max(t->physical_block_size, t->io_min) - 1); /* Verify that new alignment_offset is on a logical block boundary */ if (t->alignment_offset & (t->logical_block_size - 1)) { diff --git a/block/blk-tag.c b/block/blk-tag.c index 0c51b4b34f4..cc345e1d8d4 100644 --- a/block/blk-tag.c +++ b/block/blk-tag.c @@ -27,15 +27,18 @@ struct request *blk_queue_find_tag(struct request_queue *q, int tag) EXPORT_SYMBOL(blk_queue_find_tag); /** - * blk_free_tags - release a given set of tag maintenance info + * __blk_free_tags - release a given set of tag maintenance info * @bqt: the tag map to free * - 
* Drop the reference count on @bqt and frees it when the last reference - * is dropped. + * Tries to free the specified @bqt. Returns true if it was + * actually freed and false if there are still references using it */ -void blk_free_tags(struct blk_queue_tag *bqt) +static int __blk_free_tags(struct blk_queue_tag *bqt) { - if (atomic_dec_and_test(&bqt->refcnt)) { + int retval; + + retval = atomic_dec_and_test(&bqt->refcnt); + if (retval) { BUG_ON(find_first_bit(bqt->tag_map, bqt->max_depth) < bqt->max_depth); @@ -47,8 +50,9 @@ void blk_free_tags(struct blk_queue_tag *bqt) kfree(bqt); } + + return retval; } -EXPORT_SYMBOL(blk_free_tags); /** * __blk_queue_free_tags - release tag maintenance info @@ -65,13 +69,28 @@ void __blk_queue_free_tags(struct request_queue *q) if (!bqt) return; - blk_free_tags(bqt); + __blk_free_tags(bqt); q->queue_tags = NULL; queue_flag_clear_unlocked(QUEUE_FLAG_QUEUED, q); } /** + * blk_free_tags - release a given set of tag maintenance info + * @bqt: the tag map to free + * + * For externally managed @bqt frees the map. Callers of this + * function must guarantee to have released all the queues that + * might have been using this tag map. + */ +void blk_free_tags(struct blk_queue_tag *bqt) +{ + if (unlikely(!__blk_free_tags(bqt))) + BUG(); +} +EXPORT_SYMBOL(blk_free_tags); + +/** * blk_queue_free_tags - release tag maintenance info * @q: the request queue for the device * diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c index c981097dd63..c410752c5c6 100644 --- a/block/cfq-iosched.c +++ b/block/cfq-iosched.c @@ -1275,16 +1275,12 @@ __cfq_group_service_tree_add(struct cfq_rb_root *st, struct cfq_group *cfqg) static void cfq_update_group_weight(struct cfq_group *cfqg) { + BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node)); + if (cfqg->new_weight) { cfqg->weight = cfqg->new_weight; cfqg->new_weight = 0; } -} - -static void -cfq_update_group_leaf_weight(struct cfq_group *cfqg) -{ - BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node)); if (cfqg->new_leaf_weight) { cfqg->leaf_weight = cfqg->new_leaf_weight; @@ -1303,7 +1299,7 @@ cfq_group_service_tree_add(struct cfq_rb_root *st, struct cfq_group *cfqg) /* add to the service tree */ BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node)); - cfq_update_group_leaf_weight(cfqg); + cfq_update_group_weight(cfqg); __cfq_group_service_tree_add(st, cfqg); /* @@ -1327,7 +1323,6 @@ cfq_group_service_tree_add(struct cfq_rb_root *st, struct cfq_group *cfqg) */ while ((parent = cfqg_parent(pos))) { if (propagate) { - cfq_update_group_weight(pos); propagate = !parent->nr_active++; parent->children_weight += pos->weight; } diff --git a/block/compat_ioctl.c b/block/compat_ioctl.c index 21ad6869a5c..7c668c8a6f9 100644 --- a/block/compat_ioctl.c +++ b/block/compat_ioctl.c @@ -689,7 +689,6 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg) case BLKROSET: case BLKDISCARD: case BLKSECDISCARD: - case BLKZEROOUT: /* * the ones below are implemented in blkdev_locked_ioctl, * but we call blkdev_ioctl, which gets the lock for us diff --git a/block/genhd.c b/block/genhd.c index fdbfcb3c52a..6f612a74781 100644 --- a/block/genhd.c +++ b/block/genhd.c @@ -28,10 +28,10 @@ struct kobject *block_depr; /* for extended dynamic devt allocation, currently only one major is used */ #define NR_EXT_DEVT (1 << MINORBITS) -/* For extended devt allocation. ext_devt_lock prevents look up +/* For extended devt allocation. ext_devt_mutex prevents look up * results from going away underneath its user. 
*/ -static DEFINE_SPINLOCK(ext_devt_lock); +static DEFINE_MUTEX(ext_devt_mutex); static DEFINE_IDR(ext_devt_idr); static struct device_type disk_type; @@ -420,13 +420,9 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt) } /* allocate ext devt */ - idr_preload(GFP_KERNEL); - - spin_lock(&ext_devt_lock); - idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT); - spin_unlock(&ext_devt_lock); - - idr_preload_end(); + mutex_lock(&ext_devt_mutex); + idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_KERNEL); + mutex_unlock(&ext_devt_mutex); if (idx < 0) return idx == -ENOSPC ? -EBUSY : idx; @@ -445,13 +441,15 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt) */ void blk_free_devt(dev_t devt) { + might_sleep(); + if (devt == MKDEV(0, 0)) return; if (MAJOR(devt) == BLOCK_EXT_MAJOR) { - spin_lock(&ext_devt_lock); + mutex_lock(&ext_devt_mutex); idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt))); - spin_unlock(&ext_devt_lock); + mutex_unlock(&ext_devt_mutex); } } @@ -667,6 +665,7 @@ void del_gendisk(struct gendisk *disk) sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk))); pm_runtime_set_memalloc_noio(disk_to_dev(disk), false); device_del(disk_to_dev(disk)); + blk_free_devt(disk_to_dev(disk)->devt); } EXPORT_SYMBOL(del_gendisk); @@ -691,13 +690,13 @@ struct gendisk *get_gendisk(dev_t devt, int *partno) } else { struct hd_struct *part; - spin_lock(&ext_devt_lock); + mutex_lock(&ext_devt_mutex); part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt))); if (part && get_disk(part_to_disk(part))) { *partno = part->partno; disk = part_to_disk(part); } - spin_unlock(&ext_devt_lock); + mutex_unlock(&ext_devt_mutex); } return disk; @@ -1099,7 +1098,6 @@ static void disk_release(struct device *dev) { struct gendisk *disk = dev_to_disk(dev); - blk_free_devt(dev->devt); disk_release_events(disk); kfree(disk->random); disk_replace_part_tbl(disk, NULL); diff --git a/block/partition-generic.c b/block/partition-generic.c index 47284e71265..c7942acf137 100644 --- a/block/partition-generic.c +++ b/block/partition-generic.c @@ -211,7 +211,6 @@ static const struct attribute_group *part_attr_groups[] = { static void part_release(struct device *dev) { struct hd_struct *p = dev_to_part(dev); - blk_free_devt(dev->devt); free_part_stats(p); free_part_info(p); kfree(p); @@ -265,6 +264,7 @@ void delete_partition(struct gendisk *disk, int partno) rcu_assign_pointer(ptbl->last_lookup, NULL); kobject_put(part->holder_dir); device_del(part_to_dev(part)); + blk_free_devt(part_devt(part)); hd_struct_put(part); } diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c index 1b4988b4bc1..a5ffcc988f0 100644 --- a/block/scsi_ioctl.c +++ b/block/scsi_ioctl.c @@ -506,7 +506,7 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode, if (bytes && blk_rq_map_kern(q, rq, buffer, bytes, __GFP_WAIT)) { err = DRIVER_ERROR << 24; - goto error; + goto out; } memset(sense, 0, sizeof(sense)); @@ -516,6 +516,7 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode, blk_execute_rq(q, disk, rq, 0); +out: err = rq->errors & 0xff; /* only 8 bit SCSI status */ if (err) { if (rq->sense_len && rq->sense) { diff --git a/crypto/af_alg.c b/crypto/af_alg.c index bf948e13498..ac33d5f3077 100644 --- a/crypto/af_alg.c +++ b/crypto/af_alg.c @@ -21,7 +21,6 @@ #include <linux/module.h> #include <linux/net.h> #include <linux/rwsem.h> -#include <linux/security.h> struct alg_type_list { const struct af_alg_type *type; @@ -244,7 +243,6 @@ int af_alg_accept(struct sock *sk, 
struct socket *newsock) sock_init_data(newsock, sk2); sock_graft(sk2, newsock); - security_sk_clone(sk, sk2); err = type->accept(ask->private, sk2); if (err) { diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c index 83187f497c7..a19c027b29b 100644 --- a/crypto/algif_skcipher.c +++ b/crypto/algif_skcipher.c @@ -49,7 +49,7 @@ struct skcipher_ctx { struct ablkcipher_request req; }; -#define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \ +#define MAX_SGL_ENTS ((PAGE_SIZE - sizeof(struct skcipher_sg_list)) / \ sizeof(struct scatterlist) - 1) static inline int skcipher_sndbuf(struct sock *sk) diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c index 43665d0d090..1512e41cd93 100644 --- a/crypto/crypto_user.c +++ b/crypto/crypto_user.c @@ -466,7 +466,7 @@ static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) type -= CRYPTO_MSG_BASE; link = &crypto_dispatch[type]; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; if ((type == (CRYPTO_MSG_GETALG - CRYPTO_MSG_BASE) && diff --git a/crypto/crypto_wq.c b/crypto/crypto_wq.c index 2f1b8d12952..adad92a44ba 100644 --- a/crypto/crypto_wq.c +++ b/crypto/crypto_wq.c @@ -33,7 +33,7 @@ static void __exit crypto_wq_exit(void) destroy_workqueue(kcrypto_wq); } -subsys_initcall(crypto_wq_init); +module_init(crypto_wq_init); module_exit(crypto_wq_exit); MODULE_LICENSE("GPL"); diff --git a/drivers/acpi/acpica/aclocal.h b/drivers/acpi/acpica/aclocal.h index 95896886fc5..d5bfbd331bf 100644 --- a/drivers/acpi/acpica/aclocal.h +++ b/drivers/acpi/acpica/aclocal.h @@ -254,7 +254,6 @@ struct acpi_create_field_info { u32 field_bit_position; u32 field_bit_length; u16 resource_length; - u16 pin_number_index; u8 field_flags; u8 attribute; u8 field_type; diff --git a/drivers/acpi/acpica/acobject.h b/drivers/acpi/acpica/acobject.h index a47cc78ffd4..cc7ab6dd724 100644 --- a/drivers/acpi/acpica/acobject.h +++ b/drivers/acpi/acpica/acobject.h @@ -263,7 +263,6 @@ struct acpi_object_region_field { ACPI_OBJECT_COMMON_HEADER ACPI_COMMON_FIELD_INFO u16 resource_length; union acpi_operand_object *region_obj; /* Containing op_region object */ u8 *resource_buffer; /* resource_template for serial regions/fields */ - u16 pin_number_index; /* Index relative to previous Connection/Template */ }; struct acpi_object_bank_field { diff --git a/drivers/acpi/acpica/dsfield.c b/drivers/acpi/acpica/dsfield.c index e651d4ec7c4..feadeed1012 100644 --- a/drivers/acpi/acpica/dsfield.c +++ b/drivers/acpi/acpica/dsfield.c @@ -360,7 +360,6 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info, */ info->resource_buffer = NULL; info->connection_node = NULL; - info->pin_number_index = 0; /* * A Connection() is either an actual resource descriptor (buffer) @@ -438,7 +437,6 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info, } info->field_bit_position += info->field_bit_length; - info->pin_number_index++; /* Index relative to previous Connection() */ break; default: diff --git a/drivers/acpi/acpica/evregion.c b/drivers/acpi/acpica/evregion.c index 8fab9262d98..6555e350fc1 100644 --- a/drivers/acpi/acpica/evregion.c +++ b/drivers/acpi/acpica/evregion.c @@ -141,7 +141,6 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj, union acpi_operand_object *region_obj2; void *region_context = NULL; struct acpi_connection_info *context; - acpi_physical_address address; ACPI_FUNCTION_TRACE(ev_address_space_dispatch); @@ -236,23 +235,25 @@ acpi_ev_address_space_dispatch(union acpi_operand_object 
*region_obj, /* We have everything we need, we can invoke the address space handler */ handler = handler_desc->address_space.handler; - address = (region_obj->region.address + region_offset); + + ACPI_DEBUG_PRINT((ACPI_DB_OPREGION, + "Handler %p (@%p) Address %8.8X%8.8X [%s]\n", + ®ion_obj->region.handler->address_space, handler, + ACPI_FORMAT_NATIVE_UINT(region_obj->region.address + + region_offset), + acpi_ut_get_region_name(region_obj->region. + space_id))); /* * Special handling for generic_serial_bus and general_purpose_io: * There are three extra parameters that must be passed to the * handler via the context: - * 1) Connection buffer, a resource template from Connection() op - * 2) Length of the above buffer - * 3) Actual access length from the access_as() op - * - * In addition, for general_purpose_io, the Address and bit_width fields - * are defined as follows: - * 1) Address is the pin number index of the field (bit offset from - * the previous Connection) - * 2) bit_width is the actual bit length of the field (number of pins) + * 1) Connection buffer, a resource template from Connection() op. + * 2) Length of the above buffer. + * 3) Actual access length from the access_as() op. */ - if ((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) && + if (((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) || + (region_obj->region.space_id == ACPI_ADR_SPACE_GPIO)) && context && field_obj) { /* Get the Connection (resource_template) buffer */ @@ -261,24 +262,6 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj, context->length = field_obj->field.resource_length; context->access_length = field_obj->field.access_length; } - if ((region_obj->region.space_id == ACPI_ADR_SPACE_GPIO) && - context && field_obj) { - - /* Get the Connection (resource_template) buffer */ - - context->connection = field_obj->field.resource_buffer; - context->length = field_obj->field.resource_length; - context->access_length = field_obj->field.access_length; - address = field_obj->field.pin_number_index; - bit_width = field_obj->field.bit_length; - } - - ACPI_DEBUG_PRINT((ACPI_DB_OPREGION, - "Handler %p (@%p) Address %8.8X%8.8X [%s]\n", - ®ion_obj->region.handler->address_space, handler, - ACPI_FORMAT_NATIVE_UINT(address), - acpi_ut_get_region_name(region_obj->region. - space_id))); if (!(handler_desc->address_space.handler_flags & ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)) { @@ -292,7 +275,9 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj, /* Call the handler */ - status = handler(function, address, bit_width, value, context, + status = handler(function, + (region_obj->region.address + region_offset), + bit_width, value, context, region_obj2->extra.region_context); if (ACPI_FAILURE(status)) { diff --git a/drivers/acpi/acpica/exfield.c b/drivers/acpi/acpica/exfield.c index 0108d59665a..7d4bae71e8c 100644 --- a/drivers/acpi/acpica/exfield.c +++ b/drivers/acpi/acpica/exfield.c @@ -178,37 +178,6 @@ acpi_ex_read_data_from_field(struct acpi_walk_state *walk_state, buffer = &buffer_desc->integer.value; } - if ((obj_desc->common.type == ACPI_TYPE_LOCAL_REGION_FIELD) && - (obj_desc->field.region_obj->region.space_id == - ACPI_ADR_SPACE_GPIO)) { - /* - * For GPIO (general_purpose_io), the Address will be the bit offset - * from the previous Connection() operator, making it effectively a - * pin number index. The bit_length is the length of the field, which - * is thus the number of pins. 
- */ - ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, - "GPIO FieldRead [FROM]: Pin %u Bits %u\n", - obj_desc->field.pin_number_index, - obj_desc->field.bit_length)); - - /* Lock entire transaction if requested */ - - acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); - - /* Perform the write */ - - status = acpi_ex_access_region(obj_desc, 0, - (u64 *)buffer, ACPI_READ); - acpi_ex_release_global_lock(obj_desc->common_field.field_flags); - if (ACPI_FAILURE(status)) { - acpi_ut_remove_reference(buffer_desc); - } else { - *ret_buffer_desc = buffer_desc; - } - return_ACPI_STATUS(status); - } - ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, "FieldRead [TO]: Obj %p, Type %X, Buf %p, ByteLen %X\n", obj_desc, obj_desc->common.type, buffer, @@ -356,42 +325,6 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc, *result_desc = buffer_desc; return_ACPI_STATUS(status); - } else if ((obj_desc->common.type == ACPI_TYPE_LOCAL_REGION_FIELD) && - (obj_desc->field.region_obj->region.space_id == - ACPI_ADR_SPACE_GPIO)) { - /* - * For GPIO (general_purpose_io), we will bypass the entire field - * mechanism and handoff the bit address and bit width directly to - * the handler. The Address will be the bit offset - * from the previous Connection() operator, making it effectively a - * pin number index. The bit_length is the length of the field, which - * is thus the number of pins. - */ - if (source_desc->common.type != ACPI_TYPE_INTEGER) { - return_ACPI_STATUS(AE_AML_OPERAND_TYPE); - } - - ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, - "GPIO FieldWrite [FROM]: (%s:%X), Val %.8X [TO]: Pin %u Bits %u\n", - acpi_ut_get_type_name(source_desc->common. - type), - source_desc->common.type, - (u32)source_desc->integer.value, - obj_desc->field.pin_number_index, - obj_desc->field.bit_length)); - - buffer = &source_desc->integer.value; - - /* Lock entire transaction if requested */ - - acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); - - /* Perform the write */ - - status = acpi_ex_access_region(obj_desc, 0, - (u64 *)buffer, ACPI_WRITE); - acpi_ex_release_global_lock(obj_desc->common_field.field_flags); - return_ACPI_STATUS(status); } /* Get a pointer to the data to be written */ diff --git a/drivers/acpi/acpica/exprep.c b/drivers/acpi/acpica/exprep.c index df212fe4cf6..6b728aef2dc 100644 --- a/drivers/acpi/acpica/exprep.c +++ b/drivers/acpi/acpica/exprep.c @@ -479,8 +479,6 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info) obj_desc->field.resource_length = info->resource_length; } - obj_desc->field.pin_number_index = info->pin_number_index; - /* Allow full data read from EC address space */ if ((obj_desc->field.region_obj->region.space_id == diff --git a/drivers/acpi/acpica/utcopy.c b/drivers/acpi/acpica/utcopy.c index a63a4cdd2ce..e4c9291fc0a 100644 --- a/drivers/acpi/acpica/utcopy.c +++ b/drivers/acpi/acpica/utcopy.c @@ -998,11 +998,5 @@ acpi_ut_copy_iobject_to_iobject(union acpi_operand_object *source_desc, status = acpi_ut_copy_simple_object(source_desc, *dest_desc); } - /* Delete the allocated object if copy failed */ - - if (ACPI_FAILURE(status)) { - acpi_ut_remove_reference(*dest_desc); - } - return_ACPI_STATUS(status); } diff --git a/drivers/acpi/acpica/utstring.c b/drivers/acpi/acpica/utstring.c index ca6d2acafa6..b3e36a81aa4 100644 --- a/drivers/acpi/acpica/utstring.c +++ b/drivers/acpi/acpica/utstring.c @@ -349,7 +349,7 @@ void acpi_ut_print_string(char *string, u8 max_length) } acpi_os_printf("\""); - for (i = 0; (i < max_length) && string[i]; i++) { + for (i = 0; string[i] 
&& (i < max_length); i++) { /* Escape sequences */ diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c index 7ae5ebd1e70..99427d7307a 100644 --- a/drivers/acpi/battery.c +++ b/drivers/acpi/battery.c @@ -34,7 +34,6 @@ #include <linux/dmi.h> #include <linux/slab.h> #include <linux/suspend.h> -#include <linux/delay.h> #include <asm/unaligned.h> #ifdef CONFIG_ACPI_PROCFS_POWER @@ -1082,28 +1081,6 @@ static struct dmi_system_id bat_dmi_table[] = { {}, }; -/* - * Some machines'(E,G Lenovo Z480) ECs are not stable - * during boot up and this causes battery driver fails to be - * probed due to failure of getting battery information - * from EC sometimes. After several retries, the operation - * may work. So add retry code here and 20ms sleep between - * every retries. - */ -static int acpi_battery_update_retry(struct acpi_battery *battery) -{ - int retry, ret; - - for (retry = 5; retry; retry--) { - ret = acpi_battery_update(battery); - if (!ret) - break; - - msleep(20); - } - return ret; -} - static int acpi_battery_add(struct acpi_device *device) { int result = 0; @@ -1123,11 +1100,9 @@ static int acpi_battery_add(struct acpi_device *device) if (ACPI_SUCCESS(acpi_get_handle(battery->device->handle, "_BIX", &handle))) set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags); - - result = acpi_battery_update_retry(battery); + result = acpi_battery_update(battery); if (result) goto fail; - #ifdef CONFIG_ACPI_PROCFS_POWER result = acpi_battery_add_fs(device); #endif diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c index 76da257cfc2..cb9629638de 100644 --- a/drivers/acpi/blacklist.c +++ b/drivers/acpi/blacklist.c @@ -327,19 +327,6 @@ static struct dmi_system_id acpi_osi_dmi_table[] __initdata = { DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T500"), }, }, - /* - * Without this this EEEpc exports a non working WMI interface, with - * this it exports a working "good old" eeepc_laptop interface, fixing - * both brightness control, and rfkill not working. 
- */ - { - .callback = dmi_enable_osi_linux, - .ident = "Asus EEE PC 1015PX", - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer INC."), - DMI_MATCH(DMI_PRODUCT_NAME, "1015PX"), - }, - }, {} }; diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c index b62207a8743..ccba6e46cfb 100644 --- a/drivers/acpi/bus.c +++ b/drivers/acpi/bus.c @@ -57,12 +57,6 @@ EXPORT_SYMBOL(acpi_root_dir); #ifdef CONFIG_X86 -#ifdef CONFIG_ACPI_CUSTOM_DSDT -static inline int set_copy_dsdt(const struct dmi_system_id *id) -{ - return 0; -} -#else static int set_copy_dsdt(const struct dmi_system_id *id) { printk(KERN_NOTICE "%s detected - " @@ -70,7 +64,6 @@ static int set_copy_dsdt(const struct dmi_system_id *id) acpi_gbl_copy_dsdt_locally = 1; return 0; } -#endif static struct dmi_system_id dsdt_dmi_table[] __initdata = { /* diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index a88894190e4..4056d317517 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -1101,9 +1101,9 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr) if (pr->id == 0 && cpuidle_get_driver() == &acpi_idle_driver) { + cpuidle_pause_and_lock(); /* Protect against cpu-hotplug */ get_online_cpus(); - cpuidle_pause_and_lock(); /* Disable all cpuidle devices */ for_each_online_cpu(cpu) { @@ -1130,8 +1130,8 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr) cpuidle_enable_device(dev); } } - cpuidle_resume_and_unlock(); put_online_cpus(); + cpuidle_resume_and_unlock(); } return 0; diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c index b9cfaf1d94d..3322b47ab7c 100644 --- a/drivers/acpi/resource.c +++ b/drivers/acpi/resource.c @@ -77,24 +77,18 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res) switch (ares->type) { case ACPI_RESOURCE_TYPE_MEMORY24: memory24 = &ares->data.memory24; - if (!memory24->minimum && !memory24->address_length) - return false; acpi_dev_get_memresource(res, memory24->minimum, memory24->address_length, memory24->write_protect); break; case ACPI_RESOURCE_TYPE_MEMORY32: memory32 = &ares->data.memory32; - if (!memory32->minimum && !memory32->address_length) - return false; acpi_dev_get_memresource(res, memory32->minimum, memory32->address_length, memory32->write_protect); break; case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: fixed_memory32 = &ares->data.fixed_memory32; - if (!fixed_memory32->address && !fixed_memory32->address_length) - return false; acpi_dev_get_memresource(res, fixed_memory32->address, fixed_memory32->address_length, fixed_memory32->write_protect); @@ -150,16 +144,12 @@ bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res) switch (ares->type) { case ACPI_RESOURCE_TYPE_IO: io = &ares->data.io; - if (!io->minimum && !io->address_length) - return false; acpi_dev_get_ioresource(res, io->minimum, io->address_length, io->io_decode); break; case ACPI_RESOURCE_TYPE_FIXED_IO: fixed_io = &ares->data.fixed_io; - if (!fixed_io->address && !fixed_io->address_length) - return false; acpi_dev_get_ioresource(res, fixed_io->address, fixed_io->address_length, ACPI_DECODE_10); diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c index 091682fb161..cca761e80d8 100644 --- a/drivers/acpi/scan.c +++ b/drivers/acpi/scan.c @@ -769,17 +769,12 @@ static void acpi_device_notify(acpi_handle handle, u32 event, void *data) device->driver->ops.notify(device, event); } -static void acpi_device_notify_fixed(void *data) +static acpi_status acpi_device_notify_fixed(void *data) { struct acpi_device 
*device = data; /* Fixed hardware devices have no handles */ acpi_device_notify(NULL, ACPI_FIXED_HARDWARE_EVENT, device); -} - -static acpi_status acpi_device_fixed_event(void *data) -{ - acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_device_notify_fixed, data); return AE_OK; } @@ -790,12 +785,12 @@ static int acpi_device_install_notify_handler(struct acpi_device *device) if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) status = acpi_install_fixed_event_handler(ACPI_EVENT_POWER_BUTTON, - acpi_device_fixed_event, + acpi_device_notify_fixed, device); else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) status = acpi_install_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON, - acpi_device_fixed_event, + acpi_device_notify_fixed, device); else status = acpi_install_notify_handler(device->handle, @@ -812,10 +807,10 @@ static void acpi_device_remove_notify_handler(struct acpi_device *device) { if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) acpi_remove_fixed_event_handler(ACPI_EVENT_POWER_BUTTON, - acpi_device_fixed_event); + acpi_device_notify_fixed); else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) acpi_remove_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON, - acpi_device_fixed_event); + acpi_device_notify_fixed); else acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY, acpi_device_notify); diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c index 035920f2ab4..9c1a435d10e 100644 --- a/drivers/acpi/sleep.c +++ b/drivers/acpi/sleep.c @@ -78,17 +78,6 @@ static int acpi_sleep_prepare(u32 acpi_state) return 0; } -static bool acpi_sleep_state_supported(u8 sleep_state) -{ - acpi_status status; - u8 type_a, type_b; - - status = acpi_get_sleep_type_data(sleep_state, &type_a, &type_b); - return ACPI_SUCCESS(status) && (!acpi_gbl_reduced_hardware - || (acpi_gbl_FADT.sleep_control.address - && acpi_gbl_FADT.sleep_status.address)); -} - #ifdef CONFIG_ACPI_SLEEP static u32 acpi_target_sleep_state = ACPI_STATE_S0; @@ -611,9 +600,15 @@ static void acpi_sleep_suspend_setup(void) { int i; - for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) - if (acpi_sleep_state_supported(i)) + for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) { + acpi_status status; + u8 type_a, type_b; + + status = acpi_get_sleep_type_data(i, &type_a, &type_b); + if (ACPI_SUCCESS(status)) { sleep_states[i] = 1; + } + } suspend_set_ops(old_suspend_ordering ? &acpi_suspend_ops_old : &acpi_suspend_ops); @@ -744,7 +739,11 @@ static const struct platform_hibernation_ops acpi_hibernation_ops_old = { static void acpi_sleep_hibernate_setup(void) { - if (!acpi_sleep_state_supported(ACPI_STATE_S4)) + acpi_status status; + u8 type_a, type_b; + + status = acpi_get_sleep_type_data(ACPI_STATE_S4, &type_a, &type_b); + if (ACPI_FAILURE(status)) return; hibernation_set_ops(old_suspend_ordering ? 
@@ -793,6 +792,8 @@ static void acpi_power_off(void) int __init acpi_sleep_init(void) { + acpi_status status; + u8 type_a, type_b; char supported[ACPI_S_STATE_COUNT * 3 + 1]; char *pos = supported; int i; @@ -807,7 +808,8 @@ int __init acpi_sleep_init(void) acpi_sleep_suspend_setup(); acpi_sleep_hibernate_setup(); - if (acpi_sleep_state_supported(ACPI_STATE_S5)) { + status = acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b); + if (ACPI_SUCCESS(status)) { sleep_states[ACPI_STATE_S5] = 1; pm_power_off_prepare = acpi_power_off_prepare; pm_power_off = acpi_power_off; diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index c3f09505f79..4942058402a 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -304,14 +304,6 @@ static const struct pci_device_id ahci_pci_tbl[] = { { PCI_VDEVICE(INTEL, 0x9c85), board_ahci }, /* Wildcat Point-LP RAID */ { PCI_VDEVICE(INTEL, 0x9c87), board_ahci }, /* Wildcat Point-LP RAID */ { PCI_VDEVICE(INTEL, 0x9c8f), board_ahci }, /* Wildcat Point-LP RAID */ - { PCI_VDEVICE(INTEL, 0x8c82), board_ahci }, /* 9 Series AHCI */ - { PCI_VDEVICE(INTEL, 0x8c83), board_ahci }, /* 9 Series AHCI */ - { PCI_VDEVICE(INTEL, 0x8c84), board_ahci }, /* 9 Series RAID */ - { PCI_VDEVICE(INTEL, 0x8c85), board_ahci }, /* 9 Series RAID */ - { PCI_VDEVICE(INTEL, 0x8c86), board_ahci }, /* 9 Series RAID */ - { PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */ - { PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */ - { PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */ /* JMicron 360/1/3/5/6, match class to avoid IDE function */ { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, @@ -449,23 +441,16 @@ static const struct pci_device_id ahci_pci_tbl[] = { { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x917a), .driver_data = board_ahci_yes_fbs }, /* 88se9172 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9172), - .driver_data = board_ahci_yes_fbs }, /* 88se9182 */ - { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9182), .driver_data = board_ahci_yes_fbs }, /* 88se9172 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9192), .driver_data = board_ahci_yes_fbs }, /* 88se9172 on some Gigabyte */ - { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x91a0), - .driver_data = board_ahci_yes_fbs }, { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x91a3), .driver_data = board_ahci_yes_fbs }, { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9230), .driver_data = board_ahci_yes_fbs }, - { PCI_DEVICE(PCI_VENDOR_ID_TTI, 0x0642), - .driver_data = board_ahci_yes_fbs }, /* Promise */ { PCI_VDEVICE(PROMISE, 0x3f20), board_ahci }, /* PDC42819 */ - { PCI_VDEVICE(PROMISE, 0x3781), board_ahci }, /* FastTrak TX8660 ahci-mode */ /* Asmedia */ { PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci }, /* ASM1060 */ diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c index 82aa7b550ea..b92913a528b 100644 --- a/drivers/ata/ata_piix.c +++ b/drivers/ata/ata_piix.c @@ -340,14 +340,6 @@ static const struct pci_device_id piix_pci_tbl[] = { { 0x8086, 0x0F21, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_byt }, /* SATA Controller IDE (Coleto Creek) */ { 0x8086, 0x23a6, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, - /* SATA Controller IDE (9 Series) */ - { 0x8086, 0x8c88, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb }, - /* SATA Controller IDE (9 Series) */ - { 0x8086, 0x8c89, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb }, - /* SATA Controller IDE (9 Series) */ - { 0x8086, 0x8c80, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb }, - /* SATA Controller IDE (9 Series) */ - { 0x8086, 0x8c81, PCI_ANY_ID, PCI_ANY_ID, 
0, 0, ich8_sata_snb }, { } /* terminate list */ }; diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index ca7c23d58a0..15518fda2d2 100644 --- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -4152,7 +4152,6 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = { /* Seagate Momentus SpinPoint M8 seem to have FPMDA_AA issues */ { "ST1000LM024 HN-M101MBB", "2AR10001", ATA_HORKAGE_BROKEN_FPDMA_AA }, - { "ST1000LM024 HN-M101MBB", "2BA30001", ATA_HORKAGE_BROKEN_FPDMA_AA }, /* Blacklist entries taken from Silicon Image 3124/3132 Windows driver .inf file - also several Linux problem reports */ @@ -4758,10 +4757,6 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words) * ata_qc_new - Request an available ATA command, for queueing * @ap: target port * - * Some ATA host controllers may implement a queue depth which is less - * than ATA_MAX_QUEUE. So we shouldn't allocate a tag which is beyond - * the hardware limitation. - * * LOCKING: * None. */ @@ -4769,27 +4764,21 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words) static struct ata_queued_cmd *ata_qc_new(struct ata_port *ap) { struct ata_queued_cmd *qc = NULL; - unsigned int max_queue = ap->host->n_tags; - unsigned int i, tag; + unsigned int i; /* no command while frozen */ if (unlikely(ap->pflags & ATA_PFLAG_FROZEN)) return NULL; - for (i = 0, tag = ap->last_tag + 1; i < max_queue; i++, tag++) { - tag = tag < max_queue ? tag : 0; - - /* the last tag is reserved for internal command. */ - if (tag == ATA_TAG_INTERNAL) - continue; - - if (!test_and_set_bit(tag, &ap->qc_allocated)) { - qc = __ata_qc_from_tag(ap, tag); - qc->tag = tag; - ap->last_tag = tag; + /* the last tag is reserved for internal command. */ + for (i = 0; i < ATA_MAX_QUEUE - 1; i++) + if (!test_and_set_bit(i, &ap->qc_allocated)) { + qc = __ata_qc_from_tag(ap, i); break; } - } + + if (qc) + qc->tag = i; return qc; } @@ -6078,7 +6067,6 @@ void ata_host_init(struct ata_host *host, struct device *dev, { spin_lock_init(&host->lock); mutex_init(&host->eh_mutex); - host->n_tags = ATA_MAX_QUEUE - 1; host->dev = dev; host->ops = ops; } @@ -6160,8 +6148,6 @@ int ata_host_register(struct ata_host *host, struct scsi_host_template *sht) { int i, rc; - host->n_tags = clamp(sht->can_queue, 1, ATA_MAX_QUEUE - 1); - /* host must have been started */ if (!(host->flags & ATA_HOST_STARTED)) { dev_err(host->dev, "BUG: trying to register unstarted host\n"); @@ -6308,8 +6294,6 @@ int ata_host_activate(struct ata_host *host, int irq, static void ata_port_detach(struct ata_port *ap) { unsigned long flags; - struct ata_link *link; - struct ata_device *dev; if (!ap->ops->error_handler) goto skip_eh; @@ -6329,13 +6313,6 @@ static void ata_port_detach(struct ata_port *ap) cancel_delayed_work_sync(&ap->hotplug_task); skip_eh: - /* clean up zpodd on port removal */ - ata_for_each_link(link, ap, HOST_FIRST) { - ata_for_each_dev(dev, link, ALL) { - if (zpodd_dev_enabled(dev)) - zpodd_exit(dev); - } - } if (ap->pmp_link) { int i; for (i = 0; i < SATA_PMP_MAX_PORTS; i++) diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c index 37acda6fa7e..b603720b877 100644 --- a/drivers/ata/libata-sff.c +++ b/drivers/ata/libata-sff.c @@ -2008,15 +2008,13 @@ static int ata_bus_softreset(struct ata_port *ap, unsigned int devmask, DPRINTK("ata%u: bus reset via SRST\n", ap->print_id); - if (ap->ioaddr.ctl_addr) { - /* software reset. 
causes dev0 to be selected */ - iowrite8(ap->ctl, ioaddr->ctl_addr); - udelay(20); /* FIXME: flush */ - iowrite8(ap->ctl | ATA_SRST, ioaddr->ctl_addr); - udelay(20); /* FIXME: flush */ - iowrite8(ap->ctl, ioaddr->ctl_addr); - ap->last_ctl = ap->ctl; - } + /* software reset. causes dev0 to be selected */ + iowrite8(ap->ctl, ioaddr->ctl_addr); + udelay(20); /* FIXME: flush */ + iowrite8(ap->ctl | ATA_SRST, ioaddr->ctl_addr); + udelay(20); /* FIXME: flush */ + iowrite8(ap->ctl, ioaddr->ctl_addr); + ap->last_ctl = ap->ctl; /* wait the port to become ready */ return ata_sff_wait_after_reset(&ap->link, devmask, deadline); @@ -2217,6 +2215,10 @@ void ata_sff_error_handler(struct ata_port *ap) spin_unlock_irqrestore(ap->lock, flags); + /* ignore ata_sff_softreset if ctl isn't accessible */ + if (softreset == ata_sff_softreset && !ap->ioaddr.ctl_addr) + softreset = NULL; + /* ignore built-in hardresets if SCR access is not available */ if ((hardreset == sata_std_hardreset || hardreset == sata_sff_hardreset) && !sata_scr_valid(&ap->link)) diff --git a/drivers/ata/pata_at91.c b/drivers/ata/pata_at91.c index fa288597f01..033f3f4c20a 100644 --- a/drivers/ata/pata_at91.c +++ b/drivers/ata/pata_at91.c @@ -408,13 +408,12 @@ static int pata_at91_probe(struct platform_device *pdev) host->private_data = info; - ret = ata_host_activate(host, gpio_is_valid(irq) ? gpio_to_irq(irq) : 0, - gpio_is_valid(irq) ? ata_sff_interrupt : NULL, - irq_flags, &pata_at91_sht); - if (ret) - goto err_put; + return ata_host_activate(host, gpio_is_valid(irq) ? gpio_to_irq(irq) : 0, + gpio_is_valid(irq) ? ata_sff_interrupt : NULL, + irq_flags, &pata_at91_sht); - return 0; + if (!ret) + return 0; err_put: clk_put(info->mck); diff --git a/drivers/ata/pata_scc.c b/drivers/ata/pata_scc.c index f7badaa39eb..f35f15f4d83 100644 --- a/drivers/ata/pata_scc.c +++ b/drivers/ata/pata_scc.c @@ -586,7 +586,7 @@ static int scc_wait_after_reset(struct ata_link *link, unsigned int devmask, * Note: Original code is ata_bus_softreset(). 
*/ -static int scc_bus_softreset(struct ata_port *ap, unsigned int devmask, +static unsigned int scc_bus_softreset(struct ata_port *ap, unsigned int devmask, unsigned long deadline) { struct ata_ioports *ioaddr = &ap->ioaddr; @@ -600,7 +600,9 @@ static int scc_bus_softreset(struct ata_port *ap, unsigned int devmask, udelay(20); out_be32(ioaddr->ctl_addr, ap->ctl); - return scc_wait_after_reset(&ap->link, devmask, deadline); + scc_wait_after_reset(&ap->link, devmask, deadline); + + return 0; } /** @@ -617,8 +619,7 @@ static int scc_softreset(struct ata_link *link, unsigned int *classes, { struct ata_port *ap = link->ap; unsigned int slave_possible = ap->flags & ATA_FLAG_SLAVE_POSS; - unsigned int devmask = 0; - int rc; + unsigned int devmask = 0, err_mask; u8 err; DPRINTK("ENTER\n"); @@ -634,9 +635,9 @@ static int scc_softreset(struct ata_link *link, unsigned int *classes, /* issue bus reset */ DPRINTK("about to softreset, devmask=%x\n", devmask); - rc = scc_bus_softreset(ap, devmask, deadline); - if (rc) { - ata_port_err(ap, "SRST failed (err_mask=0x%x)\n", rc); + err_mask = scc_bus_softreset(ap, devmask, deadline); + if (err_mask) { + ata_port_err(ap, "SRST failed (err_mask=0x%x)\n", err_mask); return -EIO; } diff --git a/drivers/ata/pata_serverworks.c b/drivers/ata/pata_serverworks.c index 34c91ac3a81..f3febbce6c4 100644 --- a/drivers/ata/pata_serverworks.c +++ b/drivers/ata/pata_serverworks.c @@ -252,18 +252,12 @@ static void serverworks_set_dmamode(struct ata_port *ap, struct ata_device *adev pci_write_config_byte(pdev, 0x54, ultra_cfg); } -static struct scsi_host_template serverworks_osb4_sht = { - ATA_BMDMA_SHT(DRV_NAME), - .sg_tablesize = LIBATA_DUMB_MAX_PRD, -}; - -static struct scsi_host_template serverworks_csb_sht = { +static struct scsi_host_template serverworks_sht = { ATA_BMDMA_SHT(DRV_NAME), }; static struct ata_port_operations serverworks_osb4_port_ops = { .inherits = &ata_bmdma_port_ops, - .qc_prep = ata_bmdma_dumb_qc_prep, .cable_detect = serverworks_cable_detect, .mode_filter = serverworks_osb4_filter, .set_piomode = serverworks_set_piomode, @@ -272,7 +266,6 @@ static struct ata_port_operations serverworks_osb4_port_ops = { static struct ata_port_operations serverworks_csb_port_ops = { .inherits = &serverworks_osb4_port_ops, - .qc_prep = ata_bmdma_qc_prep, .mode_filter = serverworks_csb_filter, }; @@ -412,7 +405,6 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id } }; const struct ata_port_info *ppi[] = { &info[id->driver_data], NULL }; - struct scsi_host_template *sht = &serverworks_csb_sht; int rc; rc = pcim_enable_device(pdev); @@ -426,7 +418,6 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id /* Select non UDMA capable OSB4 if we can't do fixups */ if (rc < 0) ppi[0] = &info[1]; - sht = &serverworks_osb4_sht; } /* setup CSB5/CSB6 : South Bridge and IDE option RAID */ else if ((pdev->device == PCI_DEVICE_ID_SERVERWORKS_CSB5IDE) || @@ -443,7 +434,7 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id ppi[1] = &ata_dummy_port_info; } - return ata_pci_bmdma_init_one(pdev, ppi, sht, NULL, 0); + return ata_pci_bmdma_init_one(pdev, ppi, &serverworks_sht, NULL, 0); } #ifdef CONFIG_PM diff --git a/drivers/base/core.c b/drivers/base/core.c index 12e6e743f24..7cd7aec89cf 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -765,12 +765,12 @@ class_dir_create_and_add(struct class *class, struct kobject *parent_kobj) return &dir->kobj; } -static DEFINE_MUTEX(gdp_mutex); 
static struct kobject *get_device_parent(struct device *dev, struct device *parent) { if (dev->class) { + static DEFINE_MUTEX(gdp_mutex); struct kobject *kobj = NULL; struct kobject *parent_kobj; struct kobject *k; @@ -834,9 +834,7 @@ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir) glue_dir->kset != &dev->class->p->glue_dirs) return; - mutex_lock(&gdp_mutex); kobject_put(glue_dir); - mutex_unlock(&gdp_mutex); } static void cleanup_device_parent(struct device *dev) diff --git a/drivers/base/dd.c b/drivers/base/dd.c index 8a8d611f202..06051767393 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -52,7 +52,6 @@ static DEFINE_MUTEX(deferred_probe_mutex); static LIST_HEAD(deferred_probe_pending_list); static LIST_HEAD(deferred_probe_active_list); static struct workqueue_struct *deferred_wq; -static atomic_t deferred_trigger_count = ATOMIC_INIT(0); /** * deferred_probe_work_func() - Retry probing devices in the active list. @@ -136,17 +135,6 @@ static bool driver_deferred_probe_enable = false; * This functions moves all devices from the pending list to the active * list and schedules the deferred probe workqueue to process them. It * should be called anytime a driver is successfully bound to a device. - * - * Note, there is a race condition in multi-threaded probe. In the case where - * more than one device is probing at the same time, it is possible for one - * probe to complete successfully while another is about to defer. If the second - * depends on the first, then it will get put on the pending list after the - * trigger event has already occured and will be stuck there. - * - * The atomic 'deferred_trigger_count' is used to determine if a successful - * trigger has occurred in the midst of probing a driver. If the trigger count - * changes in the midst of a probe, then deferred processing should be triggered - * again. */ static void driver_deferred_probe_trigger(void) { @@ -159,7 +147,6 @@ static void driver_deferred_probe_trigger(void) * into the active list so they can be retried by the workqueue */ mutex_lock(&deferred_probe_mutex); - atomic_inc(&deferred_trigger_count); list_splice_tail_init(&deferred_probe_pending_list, &deferred_probe_active_list); mutex_unlock(&deferred_probe_mutex); @@ -278,7 +265,6 @@ static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue); static int really_probe(struct device *dev, struct device_driver *drv) { int ret = 0; - int local_trigger_count = atomic_read(&deferred_trigger_count); atomic_inc(&probe_count); pr_debug("bus: '%s': %s: probing driver %s with device %s\n", @@ -324,9 +310,6 @@ probe_failed: /* Driver requested deferred probing */ dev_info(dev, "Driver %s requests probe deferral\n", drv->name); driver_deferred_probe_add(dev); - /* Did a trigger occur while probing? 
Need to re-trigger if yes */ - if (local_trigger_count != atomic_read(&deferred_trigger_count)) - driver_deferred_probe_trigger(); } else if (ret != -ENODEV && ret != -ENXIO) { /* driver matched but the probe failed */ printk(KERN_WARNING diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c index 6216866217c..9fb40bf0f8e 100644 --- a/drivers/base/firmware_class.c +++ b/drivers/base/firmware_class.c @@ -1022,9 +1022,6 @@ _request_firmware(const struct firmware **firmware_p, const char *name, if (!firmware_p) return -EINVAL; - if (!name || name[0] == '\0') - return -EINVAL; - ret = _request_firmware_prepare(&fw, name, device); if (ret <= 0) /* error or already assigned */ goto out; diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c index b41994fd846..975719bc345 100644 --- a/drivers/base/regmap/regmap-debugfs.c +++ b/drivers/base/regmap/regmap-debugfs.c @@ -460,20 +460,16 @@ void regmap_debugfs_init(struct regmap *map, const char *name) { struct rb_node *next; struct regmap_range_node *range_node; - const char *devname = "dummy"; INIT_LIST_HEAD(&map->debugfs_off_cache); mutex_init(&map->cache_lock); - if (map->dev) - devname = dev_name(map->dev); - if (name) { map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s", - devname, name); + dev_name(map->dev), name); name = map->debugfs_name; } else { - name = devname; + name = dev_name(map->dev); } map->debugfs = debugfs_create_dir(name, regmap_debugfs_root); diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c index 101720f23b8..0b43b260390 100644 --- a/drivers/base/regmap/regmap.c +++ b/drivers/base/regmap/regmap.c @@ -114,7 +114,7 @@ bool regmap_readable(struct regmap *map, unsigned int reg) bool regmap_volatile(struct regmap *map, unsigned int reg) { - if (!map->format.format_write && !regmap_readable(map, reg)) + if (!regmap_readable(map, reg)) return false; if (map->volatile_reg) @@ -1179,7 +1179,7 @@ int _regmap_write(struct regmap *map, unsigned int reg, } #ifdef LOG_DEVICE - if (map->dev && strcmp(dev_name(map->dev), LOG_DEVICE) == 0) + if (strcmp(dev_name(map->dev), LOG_DEVICE) == 0) dev_info(map->dev, "%x <= %x\n", reg, val); #endif @@ -1439,7 +1439,7 @@ static int _regmap_read(struct regmap *map, unsigned int reg, ret = map->reg_read(context, reg, val); if (ret == 0) { #ifdef LOG_DEVICE - if (map->dev && strcmp(dev_name(map->dev), LOG_DEVICE) == 0) + if (strcmp(dev_name(map->dev), LOG_DEVICE) == 0) dev_info(map->dev, "%x => %x\n", reg, *val); #endif diff --git a/drivers/base/topology.c b/drivers/base/topology.c index bcd19886fa1..ae989c57cd5 100644 --- a/drivers/base/topology.c +++ b/drivers/base/topology.c @@ -40,7 +40,8 @@ static ssize_t show_##name(struct device *dev, \ struct device_attribute *attr, char *buf) \ { \ - return sprintf(buf, "%d\n", topology_##name(dev->id)); \ + unsigned int cpu = dev->id; \ + return sprintf(buf, "%d\n", topology_##name(cpu)); \ } #if defined(topology_thread_cpumask) || defined(topology_core_cpumask) || \ diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c index 31262732db2..fc803ecbbce 100644 --- a/drivers/block/aoe/aoecmd.c +++ b/drivers/block/aoe/aoecmd.c @@ -899,7 +899,7 @@ bio_pageinc(struct bio *bio) * but this has never been seen here. 
*/ if (unlikely(PageCompound(page))) - if (compound_head(page) != page) { + if (compound_trans_head(page) != page) { pr_crit("page tail used for block I/O\n"); BUG(); } diff --git a/drivers/block/drbd/drbd_interval.c b/drivers/block/drbd/drbd_interval.c index 04a14e0f887..89c497c630b 100644 --- a/drivers/block/drbd/drbd_interval.c +++ b/drivers/block/drbd/drbd_interval.c @@ -79,7 +79,6 @@ bool drbd_insert_interval(struct rb_root *root, struct drbd_interval *this) { struct rb_node **new = &root->rb_node, *parent = NULL; - sector_t this_end = this->sector + (this->size >> 9); BUG_ON(!IS_ALIGNED(this->size, 512)); @@ -88,8 +87,6 @@ drbd_insert_interval(struct rb_root *root, struct drbd_interval *this) rb_entry(*new, struct drbd_interval, rb); parent = *new; - if (here->end < this_end) - here->end = this_end; if (this->sector < here->sector) new = &(*new)->rb_left; else if (this->sector > here->sector) @@ -102,7 +99,6 @@ drbd_insert_interval(struct rb_root *root, struct drbd_interval *this) return false; } - this->end = this_end; rb_link_node(&this->rb, parent, new); rb_insert_augmented(&this->rb, root, &augment_callbacks); return true; diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c index 9c37f3d896a..9e3f441e7e8 100644 --- a/drivers/block/drbd/drbd_nl.c +++ b/drivers/block/drbd/drbd_nl.c @@ -514,12 +514,6 @@ void conn_try_outdate_peer_async(struct drbd_tconn *tconn) struct task_struct *opa; kref_get(&tconn->kref); - /* We may just have force_sig()'ed this thread - * to get it out of some blocking network function. - * Clear signals; otherwise kthread_run(), which internally uses - * wait_on_completion_killable(), will mistake our pending signal - * for a new fatal signal and fail. */ - flush_signals(current); opa = kthread_run(_try_outdate_peer_async, tconn, "drbd_async_h"); if (IS_ERR(opa)) { conn_err(tconn, "out of mem, failed to invoke fence-peer helper\n"); diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c index eb3575b3fbf..04ceb7e2fad 100644 --- a/drivers/block/floppy.c +++ b/drivers/block/floppy.c @@ -3053,10 +3053,7 @@ static int raw_cmd_copyout(int cmd, void __user *param, int ret; while (ptr) { - struct floppy_raw_cmd cmd = *ptr; - cmd.next = NULL; - cmd.kernel_data = NULL; - ret = copy_to_user(param, &cmd, sizeof(cmd)); + ret = copy_to_user(param, ptr, sizeof(*ptr)); if (ret) return -EFAULT; param += sizeof(struct floppy_raw_cmd); @@ -3110,11 +3107,10 @@ loop: return -ENOMEM; *rcmd = ptr; ret = copy_from_user(ptr, param, sizeof(*ptr)); - ptr->next = NULL; - ptr->buffer_length = 0; - ptr->kernel_data = NULL; if (ret) return -EFAULT; + ptr->next = NULL; + ptr->buffer_length = 0; param += sizeof(struct floppy_raw_cmd); if (ptr->cmd_count > 33) /* the command may now also take up the space @@ -3130,6 +3126,7 @@ loop: for (i = 0; i < 16; i++) ptr->reply[i] = 0; ptr->resultcode = 0; + ptr->kernel_data = NULL; if (ptr->flags & (FD_RAW_READ | FD_RAW_WRITE)) { if (ptr->length <= 0) diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c index 0d94e09221d..20dd52a2f92 100644 --- a/drivers/block/mtip32xx/mtip32xx.c +++ b/drivers/block/mtip32xx/mtip32xx.c @@ -1493,37 +1493,6 @@ static inline void ata_swap_string(u16 *buf, unsigned int len) be16_to_cpus(&buf[i]); } -static void mtip_set_timeout(struct driver_data *dd, - struct host_to_dev_fis *fis, - unsigned int *timeout, u8 erasemode) -{ - switch (fis->command) { - case ATA_CMD_DOWNLOAD_MICRO: - *timeout = 120000; /* 2 minutes */ - break; - case ATA_CMD_SEC_ERASE_UNIT: - case 
0xFC: - if (erasemode) - *timeout = ((*(dd->port->identify + 90) * 2) * 60000); - else - *timeout = ((*(dd->port->identify + 89) * 2) * 60000); - break; - case ATA_CMD_STANDBYNOW1: - *timeout = 120000; /* 2 minutes */ - break; - case 0xF7: - case 0xFA: - *timeout = 60000; /* 60 seconds */ - break; - case ATA_CMD_SMART: - *timeout = 15000; /* 15 seconds */ - break; - default: - *timeout = MTIP_IOCTL_COMMAND_TIMEOUT_MS; - break; - } -} - /* * Request the device identity information. * @@ -1633,7 +1602,6 @@ static int mtip_standby_immediate(struct mtip_port *port) int rv; struct host_to_dev_fis fis; unsigned long start; - unsigned int timeout; /* Build the FIS. */ memset(&fis, 0, sizeof(struct host_to_dev_fis)); @@ -1641,8 +1609,6 @@ static int mtip_standby_immediate(struct mtip_port *port) fis.opts = 1 << 7; fis.command = ATA_CMD_STANDBYNOW1; - mtip_set_timeout(port->dd, &fis, &timeout, 0); - start = jiffies; rv = mtip_exec_internal_command(port, &fis, @@ -1651,7 +1617,7 @@ static int mtip_standby_immediate(struct mtip_port *port) 0, 0, GFP_ATOMIC, - timeout); + 15000); dbg_printk(MTIP_DRV_NAME "Time taken to complete standby cmd: %d ms\n", jiffies_to_msecs(jiffies - start)); if (rv) @@ -2190,6 +2156,36 @@ static unsigned int implicit_sector(unsigned char command, } return rv; } +static void mtip_set_timeout(struct driver_data *dd, + struct host_to_dev_fis *fis, + unsigned int *timeout, u8 erasemode) +{ + switch (fis->command) { + case ATA_CMD_DOWNLOAD_MICRO: + *timeout = 120000; /* 2 minutes */ + break; + case ATA_CMD_SEC_ERASE_UNIT: + case 0xFC: + if (erasemode) + *timeout = ((*(dd->port->identify + 90) * 2) * 60000); + else + *timeout = ((*(dd->port->identify + 89) * 2) * 60000); + break; + case ATA_CMD_STANDBYNOW1: + *timeout = 120000; /* 2 minutes */ + break; + case 0xF7: + case 0xFA: + *timeout = 60000; /* 60 seconds */ + break; + case ATA_CMD_SMART: + *timeout = 15000; /* 15 seconds */ + break; + default: + *timeout = MTIP_IOCTL_COMMAND_TIMEOUT_MS; + break; + } +} /* * Executes a taskfile @@ -4044,7 +4040,6 @@ skip_create_disk: blk_queue_max_hw_sectors(dd->queue, 0xffff); blk_queue_max_segment_size(dd->queue, 0x400000); blk_queue_io_min(dd->queue, 4096); - blk_queue_bounce_limit(dd->queue, dd->pdev->dma_mask); /* * write back cache is not supported in the device. 
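The mtip32xx hunks above move mtip_set_timeout() back below implicit_sector() and drop its use in mtip_standby_immediate(), which returns to a fixed 15-second wait. The general shape is a per-opcode timeout lookup; a sketch of that pattern (the opcode values and defaults below are made up for illustration):

    #include <stdio.h>

    enum { CMD_STANDBY = 0xE0, CMD_SMART = 0xB0, CMD_DOWNLOAD_FW = 0x92 };

    #define DEFAULT_TIMEOUT_MS 5000

    /* Pick a timeout that matches how long the command may legitimately run. */
    static unsigned int cmd_timeout_ms(unsigned char opcode)
    {
            switch (opcode) {
            case CMD_DOWNLOAD_FW:
            case CMD_STANDBY:
                    return 120000;      /* firmware download / spin down: 2 min */
            case CMD_SMART:
                    return 15000;       /* SMART queries: 15 s */
            default:
                    return DEFAULT_TIMEOUT_MS;
            }
    }

    int main(void)
    {
            printf("STANDBY: %u ms\n", cmd_timeout_ms(CMD_STANDBY));
            printf("unknown: %u ms\n", cmd_timeout_ms(0x25));
            return 0;
    }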
FUA depends on @@ -4288,57 +4283,6 @@ static DEFINE_HANDLER(5); static DEFINE_HANDLER(6); static DEFINE_HANDLER(7); -static void mtip_disable_link_opts(struct driver_data *dd, struct pci_dev *pdev) -{ - int pos; - unsigned short pcie_dev_ctrl; - - pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); - if (pos) { - pci_read_config_word(pdev, - pos + PCI_EXP_DEVCTL, - &pcie_dev_ctrl); - if (pcie_dev_ctrl & (1 << 11) || - pcie_dev_ctrl & (1 << 4)) { - dev_info(&dd->pdev->dev, - "Disabling ERO/No-Snoop on bridge device %04x:%04x\n", - pdev->vendor, pdev->device); - pcie_dev_ctrl &= ~(PCI_EXP_DEVCTL_NOSNOOP_EN | - PCI_EXP_DEVCTL_RELAX_EN); - pci_write_config_word(pdev, - pos + PCI_EXP_DEVCTL, - pcie_dev_ctrl); - } - } -} - -static void mtip_fix_ero_nosnoop(struct driver_data *dd, struct pci_dev *pdev) -{ - /* - * This workaround is specific to AMD/ATI chipset with a PCI upstream - * device with device id 0x5aXX - */ - if (pdev->bus && pdev->bus->self) { - if (pdev->bus->self->vendor == PCI_VENDOR_ID_ATI && - ((pdev->bus->self->device & 0xff00) == 0x5a00)) { - mtip_disable_link_opts(dd, pdev->bus->self); - } else { - /* Check further up the topology */ - struct pci_dev *parent_dev = pdev->bus->self; - if (parent_dev->bus && - parent_dev->bus->parent && - parent_dev->bus->parent->self && - parent_dev->bus->parent->self->vendor == - PCI_VENDOR_ID_ATI && - (parent_dev->bus->parent->self->device & - 0xff00) == 0x5a00) { - mtip_disable_link_opts(dd, - parent_dev->bus->parent->self); - } - } - } -} - /* * Called for each supported PCI device detected. * @@ -4490,8 +4434,6 @@ static int mtip_pci_probe(struct pci_dev *pdev, goto block_initialize_err; } - mtip_fix_ero_nosnoop(dd, pdev); - /* Initialize the block layer. */ rv = mtip_block_initialize(dd); if (rv < 0) { @@ -4784,13 +4726,13 @@ static int __init mtip_init(void) */ static void __exit mtip_exit(void) { + debugfs_remove_recursive(dfs_parent); + /* Release the allocated major block device number. */ unregister_blkdev(mtip_major, MTIP_DRV_NAME); /* Unregister the PCI driver. 
*/ pci_unregister_driver(&mtip_pci_driver); - - debugfs_remove_recursive(dfs_parent); } MODULE_AUTHOR("Micron Technology, Inc"); diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index 9951e66b850..c421fa52851 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -1385,14 +1385,6 @@ static bool obj_request_exists_test(struct rbd_obj_request *obj_request) return test_bit(OBJ_REQ_EXISTS, &obj_request->flags) != 0; } -static bool obj_request_overlaps_parent(struct rbd_obj_request *obj_request) -{ - struct rbd_device *rbd_dev = obj_request->img_request->rbd_dev; - - return obj_request->img_offset < - round_up(rbd_dev->parent_overlap, rbd_obj_bytes(&rbd_dev->header)); -} - static void rbd_obj_request_get(struct rbd_obj_request *obj_request) { dout("%s: obj %p (was %d)\n", __func__, obj_request, @@ -1409,13 +1401,6 @@ static void rbd_obj_request_put(struct rbd_obj_request *obj_request) kref_put(&obj_request->kref, rbd_obj_request_destroy); } -static void rbd_img_request_get(struct rbd_img_request *img_request) -{ - dout("%s: img %p (was %d)\n", __func__, img_request, - atomic_read(&img_request->kref.refcount)); - kref_get(&img_request->kref); -} - static bool img_request_child_test(struct rbd_img_request *img_request); static void rbd_parent_request_destroy(struct kref *kref); static void rbd_img_request_destroy(struct kref *kref); @@ -2169,7 +2154,6 @@ static void rbd_img_obj_callback(struct rbd_obj_request *obj_request) img_request->next_completion = which; out: spin_unlock_irq(&img_request->completion_lock); - rbd_img_request_put(img_request); if (!more) rbd_img_request_complete(img_request); @@ -2266,7 +2250,6 @@ static int rbd_img_request_fill(struct rbd_img_request *img_request, goto out_partial; obj_request->osd_req = osd_req; obj_request->callback = rbd_img_obj_callback; - rbd_img_request_get(img_request); osd_req_op_extent_init(osd_req, 0, opcode, offset, length, 0, 0); @@ -2295,7 +2278,7 @@ out_partial: rbd_obj_request_put(obj_request); out_unwind: for_each_obj_request_safe(img_request, obj_request, next_obj_request) - rbd_img_obj_request_del(img_request, obj_request); + rbd_obj_request_put(obj_request); return -ENOMEM; } @@ -2690,7 +2673,7 @@ static int rbd_img_obj_request_submit(struct rbd_obj_request *obj_request) */ if (!img_request_write_test(img_request) || !img_request_layered_test(img_request) || - !obj_request_overlaps_parent(obj_request) || + rbd_dev->parent_overlap <= obj_request->img_offset || ((known = obj_request_known_test(obj_request)) && obj_request_exists_test(obj_request))) { @@ -3227,7 +3210,7 @@ static int rbd_obj_read_sync(struct rbd_device *rbd_dev, page_count = (u32) calc_pages_for(offset, length); pages = ceph_alloc_page_vector(page_count, GFP_KERNEL); if (IS_ERR(pages)) - return PTR_ERR(pages); + ret = PTR_ERR(pages); ret = -ENOMEM; obj_request = rbd_obj_request_create(object_name, offset, length, diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index ddd9a098bc6..1735b0d17e2 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -104,7 +104,7 @@ struct blkfront_info struct work_struct work; struct gnttab_free_callback callback; struct blk_shadow shadow[BLK_RING_SIZE]; - struct list_head grants; + struct list_head persistent_gnts; unsigned int persistent_gnts_c; unsigned long shadow_free; unsigned int feature_flush; @@ -175,17 +175,15 @@ static int fill_grant_buffer(struct blkfront_info *info, int num) if (!gnt_list_entry) goto out_of_memory; - if (info->feature_persistent) { - granted_page = 
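The rbd hunk above replaces obj_request_overlaps_parent(), which rounded the parent overlap up to a whole object before comparing it with the request offset, with a plain parent_overlap <= img_offset test. The rounding matters when the overlap ends part-way through an object; a sketch of the comparison, assuming a power-of-two object size as the kernel round_up() macro does:

    #include <stdio.h>

    /* Round x up to the next multiple of the power-of-two 'align'. */
    #define ROUND_UP(x, align) \
            (((x) + (align) - 1) & ~((unsigned long long)(align) - 1))

    static int overlaps_parent(unsigned long long img_offset,
                               unsigned long long parent_overlap,
                               unsigned long long obj_size)
    {
            /* Treat a partially covered object as still overlapping the parent. */
            return img_offset < ROUND_UP(parent_overlap, obj_size);
    }

    int main(void)
    {
            /* 4 MiB objects, overlap ends 1 MiB into the second object. */
            unsigned long long obj = 4ULL << 20;
            unsigned long long overlap = (4ULL << 20) + (1ULL << 20);

            printf("%d\n", overlaps_parent(6ULL << 20, overlap, obj)); /* 1: same object  */
            printf("%d\n", overlaps_parent(8ULL << 20, overlap, obj)); /* 0: past overlap */
            return 0;
    }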
alloc_page(GFP_NOIO); - if (!granted_page) { - kfree(gnt_list_entry); - goto out_of_memory; - } - gnt_list_entry->pfn = page_to_pfn(granted_page); + granted_page = alloc_page(GFP_NOIO); + if (!granted_page) { + kfree(gnt_list_entry); + goto out_of_memory; } + gnt_list_entry->pfn = page_to_pfn(granted_page); gnt_list_entry->gref = GRANT_INVALID_REF; - list_add(&gnt_list_entry->node, &info->grants); + list_add(&gnt_list_entry->node, &info->persistent_gnts); i++; } @@ -193,10 +191,9 @@ static int fill_grant_buffer(struct blkfront_info *info, int num) out_of_memory: list_for_each_entry_safe(gnt_list_entry, n, - &info->grants, node) { + &info->persistent_gnts, node) { list_del(&gnt_list_entry->node); - if (info->feature_persistent) - __free_page(pfn_to_page(gnt_list_entry->pfn)); + __free_page(pfn_to_page(gnt_list_entry->pfn)); kfree(gnt_list_entry); i--; } @@ -205,14 +202,14 @@ out_of_memory: } static struct grant *get_grant(grant_ref_t *gref_head, - unsigned long pfn, struct blkfront_info *info) { struct grant *gnt_list_entry; unsigned long buffer_mfn; - BUG_ON(list_empty(&info->grants)); - gnt_list_entry = list_first_entry(&info->grants, struct grant, node); + BUG_ON(list_empty(&info->persistent_gnts)); + gnt_list_entry = list_first_entry(&info->persistent_gnts, struct grant, + node); list_del(&gnt_list_entry->node); if (gnt_list_entry->gref != GRANT_INVALID_REF) { @@ -223,10 +220,6 @@ static struct grant *get_grant(grant_ref_t *gref_head, /* Assign a gref to this page */ gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head); BUG_ON(gnt_list_entry->gref == -ENOSPC); - if (!info->feature_persistent) { - BUG_ON(!pfn); - gnt_list_entry->pfn = pfn; - } buffer_mfn = pfn_to_mfn(gnt_list_entry->pfn); gnttab_grant_foreign_access_ref(gnt_list_entry->gref, info->xbdev->otherend_id, @@ -437,12 +430,12 @@ static int blkif_queue_request(struct request *req) fsect = sg->offset >> 9; lsect = fsect + (sg->length >> 9) - 1; - gnt_list_entry = get_grant(&gref_head, page_to_pfn(sg_page(sg)), info); + gnt_list_entry = get_grant(&gref_head, info); ref = gnt_list_entry->gref; info->shadow[id].grants_used[i] = gnt_list_entry; - if (rq_data_dir(req) && info->feature_persistent) { + if (rq_data_dir(req)) { char *bvec_data; void *shared_data; @@ -835,17 +828,16 @@ static void blkif_free(struct blkfront_info *info, int suspend) blk_stop_queue(info->rq); /* Remove all persistent grants */ - if (!list_empty(&info->grants)) { + if (!list_empty(&info->persistent_gnts)) { list_for_each_entry_safe(persistent_gnt, n, - &info->grants, node) { + &info->persistent_gnts, node) { list_del(&persistent_gnt->node); if (persistent_gnt->gref != GRANT_INVALID_REF) { gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL); info->persistent_gnts_c--; } - if (info->feature_persistent) - __free_page(pfn_to_page(persistent_gnt->pfn)); + __free_page(pfn_to_page(persistent_gnt->pfn)); kfree(persistent_gnt); } } @@ -882,7 +874,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info, nseg = s->req.u.rw.nr_segments; - if (bret->operation == BLKIF_OP_READ && info->feature_persistent) { + if (bret->operation == BLKIF_OP_READ) { /* * Copy the data received from the backend into the bvec. 
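The xen-blkfront hunks above rename info->grants back to info->persistent_gnts and drop the feature_persistent checks, so every grant is again backed by a preallocated page on a single free list that get_grant() pops and the completion path pushes back. A minimal sketch of that kind of preallocated pool (singly linked, no locking, purely illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    struct grant {
            int           id;
            struct grant *next;
    };

    static struct grant *free_list;

    /* Preallocate 'n' entries so the hot path never has to allocate. */
    static int fill_pool(int n)
    {
            while (n--) {
                    struct grant *g = malloc(sizeof(*g));

                    if (!g)
                            return -1;
                    g->id = n;
                    g->next = free_list;
                    free_list = g;
            }
            return 0;
    }

    static struct grant *get_grant(void)
    {
            struct grant *g = free_list;

            if (g)
                    free_list = g->next;
            return g;
    }

    static void put_grant(struct grant *g)
    {
            g->next = free_list;   /* recycle at the head so it is reused first */
            free_list = g;
    }

    int main(void)
    {
            if (fill_pool(4))
                    return 1;
            struct grant *g = get_grant();
            printf("got grant %d\n", g->id);
            put_grant(g);
            return 0;
    }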
* Since bv_offset can be different than 0, and bv_len different @@ -902,30 +894,9 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info, } } /* Add the persistent grant into the list of free grants */ - for (i = 0; i < nseg; i++) { - if (gnttab_query_foreign_access(s->grants_used[i]->gref)) { - /* - * If the grant is still mapped by the backend (the - * backend has chosen to make this grant persistent) - * we add it at the head of the list, so it will be - * reused first. - */ - if (!info->feature_persistent) - pr_alert_ratelimited("backed has not unmapped grant: %u\n", - s->grants_used[i]->gref); - list_add(&s->grants_used[i]->node, &info->grants); - info->persistent_gnts_c++; - } else { - /* - * If the grant is not mapped by the backend we end the - * foreign access and add it to the tail of the list, - * so it will not be picked again unless we run out of - * persistent grants. - */ - gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL); - s->grants_used[i]->gref = GRANT_INVALID_REF; - list_add_tail(&s->grants_used[i]->node, &info->grants); - } + for (i = 0; i < s->req.u.rw.nr_segments; i++) { + list_add(&s->grants_used[i]->node, &info->persistent_gnts); + info->persistent_gnts_c++; } } @@ -1063,6 +1034,12 @@ static int setup_blkring(struct xenbus_device *dev, for (i = 0; i < BLK_RING_SIZE; i++) sg_init_table(info->shadow[i].sg, BLKIF_MAX_SEGMENTS_PER_REQUEST); + /* Allocate memory for grants */ + err = fill_grant_buffer(info, BLK_RING_SIZE * + BLKIF_MAX_SEGMENTS_PER_REQUEST); + if (err) + goto fail; + err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring)); if (err < 0) { free_page((unsigned long)sring); @@ -1221,7 +1198,7 @@ static int blkfront_probe(struct xenbus_device *dev, spin_lock_init(&info->io_lock); info->xbdev = dev; info->vdevice = vdevice; - INIT_LIST_HEAD(&info->grants); + INIT_LIST_HEAD(&info->persistent_gnts); info->persistent_gnts_c = 0; info->connected = BLKIF_STATE_DISCONNECTED; INIT_WORK(&info->work, blkif_restart_queue); @@ -1250,8 +1227,7 @@ static int blkif_recover(struct blkfront_info *info) int i; struct blkif_request *req; struct blk_shadow *copy; - unsigned int persistent; - int j, rc; + int j; /* Stage 1: Make a safe copy of the shadow state. */ copy = kmemdup(info->shadow, sizeof(info->shadow), @@ -1266,24 +1242,6 @@ static int blkif_recover(struct blkfront_info *info) info->shadow_free = info->ring.req_prod_pvt; info->shadow[BLK_RING_SIZE-1].req.u.rw.id = 0x0fffffff; - /* Check if the backend supports persistent grants */ - rc = xenbus_gather(XBT_NIL, info->xbdev->otherend, - "feature-persistent", "%u", &persistent, - NULL); - if (rc) - info->feature_persistent = 0; - else - info->feature_persistent = persistent; - - /* Allocate memory for grants */ - rc = fill_grant_buffer(info, BLK_RING_SIZE * - BLKIF_MAX_SEGMENTS_PER_REQUEST); - if (rc) { - xenbus_dev_fatal(info->xbdev, rc, "setting grant buffer failed"); - kfree(copy); - return rc; - } - /* Stage 3: Find pending requests and requeue them. */ for (i = 0; i < BLK_RING_SIZE; i++) { /* Not in use? */ @@ -1348,12 +1306,8 @@ static int blkfront_resume(struct xenbus_device *dev) blkif_free(info, info->connected == BLKIF_STATE_CONNECTED); err = talk_to_blkback(dev, info); - - /* - * We have to wait for the backend to switch to - * connected state, since we want to read which - * features it supports. 
- */ + if (info->connected == BLKIF_STATE_SUSPENDED && !err) + err = blkif_recover(info); return err; } @@ -1457,16 +1411,9 @@ static void blkfront_connect(struct blkfront_info *info) sectors); set_capacity(info->gd, sectors); revalidate_disk(info->gd); - return; + /* fall through */ case BLKIF_STATE_SUSPENDED: - /* - * If we are recovering from suspension, we need to wait - * for the backend to announce it's features before - * reconnecting, we need to know if the backend supports - * persistent grants. - */ - blkif_recover(info); return; default: @@ -1534,14 +1481,6 @@ static void blkfront_connect(struct blkfront_info *info) else info->feature_persistent = persistent; - /* Allocate memory for grants */ - err = fill_grant_buffer(info, BLK_RING_SIZE * - BLKIF_MAX_SEGMENTS_PER_REQUEST); - if (err) { - xenbus_dev_fatal(info->xbdev, err, "setting grant buffer failed"); - return; - } - err = xlvbd_alloc_gendisk(sectors, info, binfo, sector_size); if (err) { xenbus_dev_fatal(info->xbdev, err, "xlvbd_add at %s", diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c index 2acabdaecec..0a327f4154a 100644 --- a/drivers/bluetooth/ath3k.c +++ b/drivers/bluetooth/ath3k.c @@ -82,7 +82,6 @@ static struct usb_device_id ath3k_table[] = { { USB_DEVICE(0x04CA, 0x3004) }, { USB_DEVICE(0x04CA, 0x3005) }, { USB_DEVICE(0x04CA, 0x3006) }, - { USB_DEVICE(0x04CA, 0x3007) }, { USB_DEVICE(0x04CA, 0x3008) }, { USB_DEVICE(0x13d3, 0x3362) }, { USB_DEVICE(0x0CF3, 0xE004) }, @@ -125,7 +124,6 @@ static struct usb_device_id ath3k_blist_tbl[] = { { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 }, - { USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 }, diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 61a8ec4e5f4..58491f1b279 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -146,7 +146,6 @@ static struct usb_device_id blacklist_table[] = { { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 }, - { USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 }, @@ -302,9 +301,6 @@ static void btusb_intr_complete(struct urb *urb) BT_ERR("%s corrupted event packet", hdev->name); hdev->stat.err_rx++; } - } else if (urb->status == -ENOENT) { - /* Avoid suspend failed when usb_kill_urb */ - return; } if (!test_bit(BTUSB_INTR_RUNNING, &data->flags)) @@ -393,9 +389,6 @@ static void btusb_bulk_complete(struct urb *urb) BT_ERR("%s corrupted ACL packet", hdev->name); hdev->stat.err_rx++; } - } else if (urb->status == -ENOENT) { - /* Avoid suspend failed when usb_kill_urb */ - return; } if (!test_bit(BTUSB_BULK_RUNNING, &data->flags)) @@ -490,9 +483,6 @@ static void btusb_isoc_complete(struct urb *urb) hdev->stat.err_rx++; } } - } else if (urb->status == -ENOENT) { - /* Avoid suspend failed when usb_kill_urb */ - return; } if (!test_bit(BTUSB_ISOC_RUNNING, &data->flags)) diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c index 
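The ath3k and btusb hunks above drop the 04ca:3007 entries from the device tables; matching in these drivers works by scanning an array of {vendor, product, driver_info} tuples. A sketch of that style of lookup (the IDs and flag value below are arbitrary examples, not taken from the tables):

    #include <stdio.h>
    #include <stdint.h>

    #define QUIRK_ATH3012 0x1

    struct usb_id {
            uint16_t      vendor, product;
            unsigned long driver_info;
    };

    static const struct usb_id blist[] = {
            { 0x04ca, 0x3005, QUIRK_ATH3012 },
            { 0x04ca, 0x3006, QUIRK_ATH3012 },
            { 0x04ca, 0x3008, QUIRK_ATH3012 },
    };

    /* Return the quirk flags for a device, or 0 if it is not listed. */
    static unsigned long lookup_quirks(uint16_t vendor, uint16_t product)
    {
            for (size_t i = 0; i < sizeof(blist) / sizeof(blist[0]); i++)
                    if (blist[i].vendor == vendor && blist[i].product == product)
                            return blist[i].driver_info;
            return 0;
    }

    int main(void)
    {
            printf("%lx\n", lookup_quirks(0x04ca, 0x3006)); /* 1 */
            printf("%lx\n", lookup_quirks(0x04ca, 0x3007)); /* 0: not in the table */
            return 0;
    }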
db35c542eb2..b6154d5a07a 100644 --- a/drivers/bluetooth/hci_h5.c +++ b/drivers/bluetooth/hci_h5.c @@ -237,7 +237,7 @@ static void h5_pkt_cull(struct h5 *h5) break; to_remove--; - seq = (seq - 1) & 0x07; + seq = (seq - 1) % 8; } if (seq != h5->rx_ack) @@ -406,7 +406,6 @@ static int h5_rx_3wire_hdr(struct hci_uart *hu, unsigned char c) H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) { BT_ERR("Non-link packet received in non-active state"); h5_reset_rx(h5); - return 0; } h5->rx_func = h5_rx_payload; diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c index 1c70ff05ac0..f7f7106c475 100644 --- a/drivers/bluetooth/hci_ldisc.c +++ b/drivers/bluetooth/hci_ldisc.c @@ -118,6 +118,10 @@ static inline struct sk_buff *hci_uart_dequeue(struct hci_uart *hu) int hci_uart_tx_wakeup(struct hci_uart *hu) { + struct tty_struct *tty = hu->tty; + struct hci_dev *hdev = hu->hdev; + struct sk_buff *skb; + if (test_and_set_bit(HCI_UART_SENDING, &hu->tx_state)) { set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state); return 0; @@ -125,22 +129,6 @@ int hci_uart_tx_wakeup(struct hci_uart *hu) BT_DBG(""); - schedule_work(&hu->write_work); - - return 0; -} - -static void hci_uart_write_work(struct work_struct *work) -{ - struct hci_uart *hu = container_of(work, struct hci_uart, write_work); - struct tty_struct *tty = hu->tty; - struct hci_dev *hdev = hu->hdev; - struct sk_buff *skb; - - /* REVISIT: should we cope with bad skbs or ->write() returning - * and error value ? - */ - restart: clear_bit(HCI_UART_TX_WAKEUP, &hu->tx_state); @@ -165,6 +153,7 @@ restart: goto restart; clear_bit(HCI_UART_SENDING, &hu->tx_state); + return 0; } static void hci_uart_init_work(struct work_struct *work) @@ -300,7 +289,6 @@ static int hci_uart_tty_open(struct tty_struct *tty) tty->receive_room = 65536; INIT_WORK(&hu->init_ready, hci_uart_init_work); - INIT_WORK(&hu->write_work, hci_uart_write_work); spin_lock_init(&hu->rx_lock); @@ -338,8 +326,6 @@ static void hci_uart_tty_close(struct tty_struct *tty) if (hdev) hci_uart_close(hdev); - cancel_work_sync(&hu->write_work); - if (test_and_clear_bit(HCI_UART_PROTO_SET, &hu->flags)) { hu->proto->close(hu); if (hdev) { diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h index 12df101ca94..fffa61ff5cb 100644 --- a/drivers/bluetooth/hci_uart.h +++ b/drivers/bluetooth/hci_uart.h @@ -68,7 +68,6 @@ struct hci_uart { unsigned long hdev_flags; struct work_struct init_ready; - struct work_struct write_work; struct hci_uart_proto *proto; void *priv; diff --git a/drivers/bus/mvebu-mbus.c b/drivers/bus/mvebu-mbus.c index 5dcc8305abd..8740f46b4d0 100644 --- a/drivers/bus/mvebu-mbus.c +++ b/drivers/bus/mvebu-mbus.c @@ -250,6 +250,12 @@ static int mvebu_mbus_window_conflicts(struct mvebu_mbus_state *mbus, */ if ((u64)base < wend && end > wbase) return 0; + + /* + * Check if target/attribute conflicts + */ + if (target == wtarget && attr == wattr) + return 0; } return 1; diff --git a/drivers/char/applicom.c b/drivers/char/applicom.c index 14790304b84..974321a2508 100644 --- a/drivers/char/applicom.c +++ b/drivers/char/applicom.c @@ -345,6 +345,7 @@ out: free_irq(apbs[i].irq, &dummy); iounmap(apbs[i].RamIO); } + pci_disable_device(dev); return ret; } diff --git a/drivers/char/ipmi/ipmi_bt_sm.c b/drivers/char/ipmi/ipmi_bt_sm.c index 8156cafad11..a22a7a50274 100644 --- a/drivers/char/ipmi/ipmi_bt_sm.c +++ b/drivers/char/ipmi/ipmi_bt_sm.c @@ -352,7 +352,7 @@ static inline void write_all_bytes(struct si_sm_data *bt) static inline int read_all_bytes(struct si_sm_data *bt) { - unsigned 
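The hci_h5 hunk above switches the sliding-window bookkeeping from (seq - 1) & 0x07 back to (seq - 1) % 8. If seq is an unsigned 8-bit counter, the two differ exactly when it wraps at zero: integer promotion makes the subtraction yield the signed value -1, and -1 % 8 is -1 in C, while -1 & 0x07 is 7. A short demonstration:

    #include <stdio.h>

    int main(void)
    {
            unsigned char seq = 0;

            /* seq promotes to int, so the subtraction yields -1 before the op. */
            int with_mask = (seq - 1) & 0x07;  /* -1 & 7  ==  7 */
            int with_mod  = (seq - 1) % 8;     /* -1 % 8  == -1 */

            /* Stored back into an unsigned char, these become 7 and 255. */
            printf("mask: %d, mod: %d\n", with_mask, with_mod);
            return 0;
    }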
int i; + unsigned char i; /* * length is "framing info", minimum = 4: NetFn, Seq, Cmd, cCode. diff --git a/drivers/char/ipmi/ipmi_kcs_sm.c b/drivers/char/ipmi/ipmi_kcs_sm.c index e1ddcf93851..e53fc24c6af 100644 --- a/drivers/char/ipmi/ipmi_kcs_sm.c +++ b/drivers/char/ipmi/ipmi_kcs_sm.c @@ -251,9 +251,8 @@ static inline int check_obf(struct si_sm_data *kcs, unsigned char status, if (!GET_STATUS_OBF(status)) { kcs->obf_timeout -= time; if (kcs->obf_timeout < 0) { - kcs->obf_timeout = OBF_RETRY_TIMEOUT; - start_error_recovery(kcs, "OBF not ready in time"); - return 1; + start_error_recovery(kcs, "OBF not ready in time"); + return 1; } return 0; } diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c index 40b3f756f90..af4b23ffc5a 100644 --- a/drivers/char/ipmi/ipmi_si_intf.c +++ b/drivers/char/ipmi/ipmi_si_intf.c @@ -244,9 +244,6 @@ struct smi_info { /* The timer for this si. */ struct timer_list si_timer; - /* This flag is set, if the timer is running (timer_pending() isn't enough) */ - bool timer_running; - /* The time (in jiffies) the last timeout occurred at. */ unsigned long last_timeout_jiffies; @@ -430,13 +427,6 @@ static void start_clear_flags(struct smi_info *smi_info) smi_info->si_state = SI_CLEARING_FLAGS; } -static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val) -{ - smi_info->last_timeout_jiffies = jiffies; - mod_timer(&smi_info->si_timer, new_val); - smi_info->timer_running = true; -} - /* * When we have a situtaion where we run out of memory and cannot * allocate messages, we just leave them in the BMC and run the system @@ -449,7 +439,8 @@ static inline void disable_si_irq(struct smi_info *smi_info) start_disable_irq(smi_info); smi_info->interrupt_disabled = 1; if (!atomic_read(&smi_info->stop_operation)) - smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES); + mod_timer(&smi_info->si_timer, + jiffies + SI_TIMEOUT_JIFFIES); } } @@ -909,7 +900,15 @@ static void sender(void *send_info, list_add_tail(&msg->link, &smi_info->xmit_msgs); if (smi_info->si_state == SI_NORMAL && smi_info->curr_msg == NULL) { - smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES); + /* + * last_timeout_jiffies is updated here to avoid + * smi_timeout() handler passing very large time_diff + * value to smi_event_handler() that causes + * the send command to abort. + */ + smi_info->last_timeout_jiffies = jiffies; + + mod_timer(&smi_info->si_timer, jiffies + SI_TIMEOUT_JIFFIES); if (smi_info->thread) wake_up_process(smi_info->thread); @@ -998,17 +997,6 @@ static int ipmi_thread(void *data) spin_lock_irqsave(&(smi_info->si_lock), flags); smi_result = smi_event_handler(smi_info, 0); - - /* - * If the driver is doing something, there is a possible - * race with the timer. If the timer handler see idle, - * and the thread here sees something else, the timer - * handler won't restart the timer even though it is - * required. So start it here if necessary. - */ - if (smi_result != SI_SM_IDLE && !smi_info->timer_running) - smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES); - spin_unlock_irqrestore(&(smi_info->si_lock), flags); busy_wait = ipmi_thread_busy_wait(smi_result, smi_info, &busy_until); @@ -1078,6 +1066,10 @@ static void smi_timeout(unsigned long data) * SI_USEC_PER_JIFFY); smi_result = smi_event_handler(smi_info, time_diff); + spin_unlock_irqrestore(&(smi_info->si_lock), flags); + + smi_info->last_timeout_jiffies = jiffies_now; + if ((smi_info->irq) && (!smi_info->interrupt_disabled)) { /* Running with interrupts, only do long timeouts. 
*/ timeout = jiffies + SI_TIMEOUT_JIFFIES; @@ -1099,10 +1091,7 @@ static void smi_timeout(unsigned long data) do_mod_timer: if (smi_result != SI_SM_IDLE) - smi_mod_timer(smi_info, timeout); - else - smi_info->timer_running = false; - spin_unlock_irqrestore(&(smi_info->si_lock), flags); + mod_timer(&(smi_info->si_timer), timeout); } static irqreturn_t si_irq_handler(int irq, void *data) @@ -1150,7 +1139,8 @@ static int smi_start_processing(void *send_info, /* Set up the timer that drives the interface. */ setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi); - smi_mod_timer(new_smi, jiffies + SI_TIMEOUT_JIFFIES); + new_smi->last_timeout_jiffies = jiffies; + mod_timer(&new_smi->si_timer, jiffies + SI_TIMEOUT_JIFFIES); /* * Check if the user forcefully enabled the daemon. diff --git a/drivers/char/random.c b/drivers/char/random.c index aee3464a5bd..81eefa1c0d3 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -933,8 +933,8 @@ static void extract_buf(struct entropy_store *r, __u8 *out) * pool while mixing, and hash one final time. */ sha_transform(hash.w, extract, workspace); - memzero_explicit(extract, sizeof(extract)); - memzero_explicit(workspace, sizeof(workspace)); + memset(extract, 0, sizeof(extract)); + memset(workspace, 0, sizeof(workspace)); /* * In case the hash function has some recognizable output @@ -957,7 +957,7 @@ static void extract_buf(struct entropy_store *r, __u8 *out) } memcpy(out, &hash, EXTRACT_SIZE); - memzero_explicit(&hash, sizeof(hash)); + memset(&hash, 0, sizeof(hash)); } static ssize_t extract_entropy(struct entropy_store *r, void *buf, @@ -1005,7 +1005,7 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf, } /* Wipe data just returned from memory */ - memzero_explicit(tmp, sizeof(tmp)); + memset(tmp, 0, sizeof(tmp)); return ret; } @@ -1043,7 +1043,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf, } /* Wipe data just returned from memory */ - memzero_explicit(tmp, sizeof(tmp)); + memset(tmp, 0, sizeof(tmp)); return ret; } diff --git a/drivers/char/tpm/tpm.c b/drivers/char/tpm/tpm.c index f659a571ad2..7c3b3dcbfbc 100644 --- a/drivers/char/tpm/tpm.c +++ b/drivers/char/tpm/tpm.c @@ -533,10 +533,11 @@ static int tpm_startup(struct tpm_chip *chip, __be16 startup_type) int tpm_get_timeouts(struct tpm_chip *chip) { struct tpm_cmd_t tpm_cmd; - unsigned long new_timeout[4]; - unsigned long old_timeout[4]; + struct timeout_t *timeout_cap; struct duration_t *duration_cap; ssize_t rc; + u32 timeout; + unsigned int scale = 1; tpm_cmd.header.in = tpm_getcap_header; tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP; @@ -570,46 +571,25 @@ int tpm_get_timeouts(struct tpm_chip *chip) != sizeof(tpm_cmd.header.out) + sizeof(u32) + 4 * sizeof(u32)) return -EINVAL; - old_timeout[0] = be32_to_cpu(tpm_cmd.params.getcap_out.cap.timeout.a); - old_timeout[1] = be32_to_cpu(tpm_cmd.params.getcap_out.cap.timeout.b); - old_timeout[2] = be32_to_cpu(tpm_cmd.params.getcap_out.cap.timeout.c); - old_timeout[3] = be32_to_cpu(tpm_cmd.params.getcap_out.cap.timeout.d); - memcpy(new_timeout, old_timeout, sizeof(new_timeout)); - - /* - * Provide ability for vendor overrides of timeout values in case - * of misreporting. 
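The random.c hunk above replaces memzero_explicit() with plain memset(). The point of the explicit variant is that a dead-store-eliminating compiler may drop a memset() of a buffer that is never read again; the kernel helper pairs the clear with a compiler barrier. A hedged userspace stand-in uses a volatile function pointer to keep the call:

    #include <stdio.h>
    #include <string.h>

    /* The volatile pointer forces an indirect call the optimizer cannot elide. */
    static void *(*const volatile memset_v)(void *, int, size_t) = memset;

    static void secure_wipe(void *buf, size_t len)
    {
            memset_v(buf, 0, len);
    }

    int main(void)
    {
            char key[32] = "not really a secret";

            secure_wipe(key, sizeof(key));   /* survives dead-store elimination */
            printf("first byte after wipe: %d\n", key[0]);
            return 0;
    }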
- */ - if (chip->vendor.update_timeouts != NULL) - chip->vendor.timeout_adjusted = - chip->vendor.update_timeouts(chip, new_timeout); - - if (!chip->vendor.timeout_adjusted) { - /* Don't overwrite default if value is 0 */ - if (new_timeout[0] != 0 && new_timeout[0] < 1000) { - int i; - - /* timeouts in msec rather usec */ - for (i = 0; i != ARRAY_SIZE(new_timeout); i++) - new_timeout[i] *= 1000; - chip->vendor.timeout_adjusted = true; - } + timeout_cap = &tpm_cmd.params.getcap_out.cap.timeout; + /* Don't overwrite default if value is 0 */ + timeout = be32_to_cpu(timeout_cap->a); + if (timeout && timeout < 1000) { + /* timeouts in msec rather usec */ + scale = 1000; + chip->vendor.timeout_adjusted = true; } - - /* Report adjusted timeouts */ - if (chip->vendor.timeout_adjusted) { - dev_info(chip->dev, - HW_ERR "Adjusting reported timeouts: A %lu->%luus B %lu->%luus C %lu->%luus D %lu->%luus\n", - old_timeout[0], new_timeout[0], - old_timeout[1], new_timeout[1], - old_timeout[2], new_timeout[2], - old_timeout[3], new_timeout[3]); - } - - chip->vendor.timeout_a = usecs_to_jiffies(new_timeout[0]); - chip->vendor.timeout_b = usecs_to_jiffies(new_timeout[1]); - chip->vendor.timeout_c = usecs_to_jiffies(new_timeout[2]); - chip->vendor.timeout_d = usecs_to_jiffies(new_timeout[3]); + if (timeout) + chip->vendor.timeout_a = usecs_to_jiffies(timeout * scale); + timeout = be32_to_cpu(timeout_cap->b); + if (timeout) + chip->vendor.timeout_b = usecs_to_jiffies(timeout * scale); + timeout = be32_to_cpu(timeout_cap->c); + if (timeout) + chip->vendor.timeout_c = usecs_to_jiffies(timeout * scale); + timeout = be32_to_cpu(timeout_cap->d); + if (timeout) + chip->vendor.timeout_d = usecs_to_jiffies(timeout * scale); duration: tpm_cmd.header.in = tpm_getcap_header; @@ -1443,13 +1423,13 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max) int err, total = 0, retries = 5; u8 *dest = out; - if (!out || !num_bytes || max > TPM_MAX_RNG_DATA) - return -EINVAL; - chip = tpm_chip_find_get(chip_num); if (chip == NULL) return -ENODEV; + if (!out || !num_bytes || max > TPM_MAX_RNG_DATA) + return -EINVAL; + do { tpm_cmd.header.in = tpm_getrandom_header; tpm_cmd.params.getrandom_in.num_bytes = cpu_to_be32(num_bytes); @@ -1468,7 +1448,6 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max) num_bytes -= recd; } while (retries-- && total < max); - tpm_chip_put(chip); return total ? 
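The tpm.c hunk above collapses the vendor-override path back to a single heuristic: a non-zero reported timeout below 1000 is assumed to be misreported in milliseconds rather than microseconds and is scaled by 1000. A small sketch of that normalisation (the helper name is made up; the threshold and scale are the ones visible in the hunk):

    #include <stdio.h>

    /* Normalise a TPM-style timeout to microseconds.
     * Values below 1000 are assumed to be misreported in milliseconds. */
    static unsigned long timeout_to_usecs(unsigned long reported)
    {
            if (reported && reported < 1000)
                    return reported * 1000;
            return reported;
    }

    int main(void)
    {
            printf("%lu\n", timeout_to_usecs(750));     /* 750000: treated as ms */
            printf("%lu\n", timeout_to_usecs(750000));  /* 750000: already in us */
            printf("%lu\n", timeout_to_usecs(0));       /* 0: keep the default   */
            return 0;
    }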
total : -EIO; } EXPORT_SYMBOL_GPL(tpm_get_random); diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h index deffda7678a..0770d1d7936 100644 --- a/drivers/char/tpm/tpm.h +++ b/drivers/char/tpm/tpm.h @@ -95,9 +95,6 @@ struct tpm_vendor_specific { int (*send) (struct tpm_chip *, u8 *, size_t); void (*cancel) (struct tpm_chip *); u8 (*status) (struct tpm_chip *); - bool (*update_timeouts)(struct tpm_chip *chip, - unsigned long *timeout_cap); - void (*release) (struct device *); struct miscdevice miscdev; struct attribute_group *attr_group; diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c index 72f21377fa0..8a41b6be23a 100644 --- a/drivers/char/tpm/tpm_tis.c +++ b/drivers/char/tpm/tpm_tis.c @@ -373,36 +373,6 @@ out_err: return rc; } -struct tis_vendor_timeout_override { - u32 did_vid; - unsigned long timeout_us[4]; -}; - -static const struct tis_vendor_timeout_override vendor_timeout_overrides[] = { - /* Atmel 3204 */ - { 0x32041114, { (TIS_SHORT_TIMEOUT*1000), (TIS_LONG_TIMEOUT*1000), - (TIS_SHORT_TIMEOUT*1000), (TIS_SHORT_TIMEOUT*1000) } }, -}; - -static bool tpm_tis_update_timeouts(struct tpm_chip *chip, - unsigned long *timeout_cap) -{ - int i; - u32 did_vid; - - did_vid = ioread32(chip->vendor.iobase + TPM_DID_VID(0)); - - for (i = 0; i != ARRAY_SIZE(vendor_timeout_overrides); i++) { - if (vendor_timeout_overrides[i].did_vid != did_vid) - continue; - memcpy(timeout_cap, vendor_timeout_overrides[i].timeout_us, - sizeof(vendor_timeout_overrides[i].timeout_us)); - return true; - } - - return false; -} - /* * Early probing for iTPM with STS_DATA_EXPECT flaw. * Try sending command without itpm flag set and if that @@ -505,7 +475,6 @@ static struct tpm_vendor_specific tpm_tis = { .recv = tpm_tis_recv, .send = tpm_tis_send, .cancel = tpm_tis_ready, - .update_timeouts = tpm_tis_update_timeouts, .req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID, .req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID, .req_canceled = tpm_tis_req_canceled, diff --git a/drivers/clk/spear/spear3xx_clock.c b/drivers/clk/spear/spear3xx_clock.c index 1fe25902174..080c3c5e33f 100644 --- a/drivers/clk/spear/spear3xx_clock.c +++ b/drivers/clk/spear/spear3xx_clock.c @@ -211,7 +211,7 @@ static inline void spear310_clk_init(void) { } /* array of all spear 320 clock lookups */ #ifdef CONFIG_MACH_SPEAR320 -#define SPEAR320_CONTROL_REG (soc_config_base + 0x0010) +#define SPEAR320_CONTROL_REG (soc_config_base + 0x0000) #define SPEAR320_EXT_CTRL_REG (soc_config_base + 0x0018) #define SPEAR320_UARTX_PCLK_MASK 0x1 diff --git a/drivers/clk/versatile/clk-vexpress-osc.c b/drivers/clk/versatile/clk-vexpress-osc.c index 8b8798bb93f..256c8be74df 100644 --- a/drivers/clk/versatile/clk-vexpress-osc.c +++ b/drivers/clk/versatile/clk-vexpress-osc.c @@ -102,7 +102,7 @@ void __init vexpress_osc_of_setup(struct device_node *node) osc = kzalloc(sizeof(*osc), GFP_KERNEL); if (!osc) - return; + goto error; osc->func = vexpress_config_func_get_by_node(node); if (!osc->func) { diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c index b7960185919..662fcc06582 100644 --- a/drivers/clocksource/exynos_mct.c +++ b/drivers/clocksource/exynos_mct.c @@ -429,6 +429,8 @@ static int __cpuinit exynos4_local_timer_setup(struct clock_event_device *evt) evt->set_mode = exynos4_tick_set_mode; evt->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT; evt->rating = 450; + clockevents_config_and_register(evt, clk_rate / (TICK_BASE_CNT + 1), + 0xf, 0x7fffffff); exynos4_mct_write(TICK_BASE_CNT, 
mevt->base + MCT_L_TCNTB_OFFSET); @@ -446,8 +448,6 @@ static int __cpuinit exynos4_local_timer_setup(struct clock_event_device *evt) } else { enable_percpu_irq(mct_irqs[MCT_L0_IRQ], 0); } - clockevents_config_and_register(evt, clk_rate / (TICK_BASE_CNT + 1), - 0xf, 0x7fffffff); return 0; } diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c index 3165811e240..18c5b9b1664 100644 --- a/drivers/connector/cn_proc.c +++ b/drivers/connector/cn_proc.c @@ -369,7 +369,7 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg, return; /* Can only change if privileged. */ - if (!__netlink_ns_capable(nsp, &init_user_ns, CAP_NET_ADMIN)) { + if (!capable(CAP_NET_ADMIN)) { err = EPERM; goto out; } diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile index b2a56e5bfcd..27399ff1d60 100644 --- a/drivers/cpufreq/Makefile +++ b/drivers/cpufreq/Makefile @@ -50,7 +50,7 @@ obj-$(CONFIG_ARM_BIG_LITTLE_CPUFREQ) += arm_big_little.o # LITTLE drivers, so that it is probed last. obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o -obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o +obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c index ece1df8eac8..a7272676b23 100644 --- a/drivers/cpufreq/cpufreq_governor.c +++ b/drivers/cpufreq/cpufreq_governor.c @@ -53,7 +53,7 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) policy = cdbs->cur_policy; - /* Get Absolute Load */ + /* Get Absolute Load (in terms of freq for ondemand gov) */ for_each_cpu(j, policy->cpus) { struct cpu_dbs_common_info *j_cdbs; u64 cur_wall_time, cur_idle_time; @@ -104,6 +104,14 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) load = 100 * (wall_time - idle_time) / wall_time; + if (dbs_data->cdata->governor == GOV_ONDEMAND) { + int freq_avg = __cpufreq_driver_getavg(policy, j); + if (freq_avg <= 0) + freq_avg = policy->cur; + + load *= freq_avg; + } + if (load > max_load) max_load = load; } @@ -125,9 +133,6 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy, { int i; - if (!policy->governor_enabled) - return; - if (!all_cpus) { __gov_queue_work(smp_processor_id(), dbs_data, delay); } else { diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h index c8028ce7554..c501ca83d75 100644 --- a/drivers/cpufreq/cpufreq_governor.h +++ b/drivers/cpufreq/cpufreq_governor.h @@ -169,6 +169,7 @@ struct od_dbs_tuners { unsigned int sampling_rate; unsigned int sampling_down_factor; unsigned int up_threshold; + unsigned int adj_up_threshold; unsigned int powersave_bias; unsigned int io_is_busy; }; diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c index 25438bbf96b..c087347d668 100644 --- a/drivers/cpufreq/cpufreq_ondemand.c +++ b/drivers/cpufreq/cpufreq_ondemand.c @@ -29,9 +29,11 @@ #include "cpufreq_governor.h" /* On-demand governor macros */ +#define DEF_FREQUENCY_DOWN_DIFFERENTIAL (10) #define DEF_FREQUENCY_UP_THRESHOLD (80) #define DEF_SAMPLING_DOWN_FACTOR (1) #define MAX_SAMPLING_DOWN_FACTOR (100000) +#define MICRO_FREQUENCY_DOWN_DIFFERENTIAL (3) #define MICRO_FREQUENCY_UP_THRESHOLD (95) #define MICRO_FREQUENCY_MIN_SAMPLE_RATE (10000) #define MIN_FREQUENCY_UP_THRESHOLD (11) @@ -159,10 +161,14 @@ static void dbs_freq_increase(struct cpufreq_policy *p, unsigned int freq) /* * Every 
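The cpufreq_governor.c hunk above reintroduces the frequency-weighted load used by the older ondemand code, but the underlying measurement is the same either way: load is the busy fraction of the sampling window, computed from the wall-time and idle-time deltas. A sketch of that calculation:

    #include <stdio.h>

    /* Busy percentage over one sampling window; 0 if the window is unusable. */
    static unsigned int window_load(unsigned long long wall_delta,
                                    unsigned long long idle_delta)
    {
            if (!wall_delta || wall_delta < idle_delta)
                    return 0;
            return (unsigned int)(100 * (wall_delta - idle_delta) / wall_delta);
    }

    int main(void)
    {
            /* 10 ms window, 2 ms of it idle -> 80% load */
            printf("%u%%\n", window_load(10000, 2000));
            return 0;
    }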
sampling_rate, we check, if current idle time is less than 20% - * (default), then we try to increase frequency. Else, we adjust the frequency - * proportional to load. + * (default), then we try to increase frequency. Every sampling_rate, we look + * for the lowest frequency which can sustain the load while keeping idle time + * over 30%. If such a frequency exist, we try to decrease to this frequency. + * + * Any frequency increase takes it to the maximum frequency. Frequency reduction + * happens at minimum steps of 5% (default) of current frequency */ -static void od_check_cpu(int cpu, unsigned int load) +static void od_check_cpu(int cpu, unsigned int load_freq) { struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, cpu); struct cpufreq_policy *policy = dbs_info->cdbs.cur_policy; @@ -172,17 +178,29 @@ static void od_check_cpu(int cpu, unsigned int load) dbs_info->freq_lo = 0; /* Check for frequency increase */ - if (load > od_tuners->up_threshold) { + if (load_freq > od_tuners->up_threshold * policy->cur) { /* If switching to max speed, apply sampling_down_factor */ if (policy->cur < policy->max) dbs_info->rate_mult = od_tuners->sampling_down_factor; dbs_freq_increase(policy, policy->max); return; - } else { - /* Calculate the next frequency proportional to load */ + } + + /* Check for frequency decrease */ + /* if we cannot reduce the frequency anymore, break out early */ + if (policy->cur == policy->min) + return; + + /* + * The optimal frequency is the frequency that is the lowest that can + * support the current CPU usage without triggering the up policy. To be + * safe, we focus 10 points under the threshold. + */ + if (load_freq < od_tuners->adj_up_threshold + * policy->cur) { unsigned int freq_next; - freq_next = load * policy->cpuinfo.max_freq / 100; + freq_next = load_freq / od_tuners->adj_up_threshold; /* No longer fully busy, reset rate_mult */ dbs_info->rate_mult = 1; @@ -356,6 +374,9 @@ static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf, input < MIN_FREQUENCY_UP_THRESHOLD) { return -EINVAL; } + /* Calculate the new adj_up_threshold */ + od_tuners->adj_up_threshold += input; + od_tuners->adj_up_threshold -= od_tuners->up_threshold; od_tuners->up_threshold = input; return count; @@ -504,6 +525,8 @@ static int od_init(struct dbs_data *dbs_data) if (idle_time != -1ULL) { /* Idle micro accounting is supported. Use finer thresholds */ tuners->up_threshold = MICRO_FREQUENCY_UP_THRESHOLD; + tuners->adj_up_threshold = MICRO_FREQUENCY_UP_THRESHOLD - + MICRO_FREQUENCY_DOWN_DIFFERENTIAL; /* * In nohz/micro accounting case we set the minimum frequency * not depending on HZ, but fixed (very low). 
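The ondemand hunk above swaps the "frequency proportional to load" policy back out in favour of the older adj_up_threshold scheme. The proportional selection being removed is visible in the hunk itself: jump to the maximum above up_threshold, otherwise pick load * max_freq / 100. A sketch of that selection (threshold and frequencies are illustrative):

    #include <stdio.h>

    #define UP_THRESHOLD 80u

    /* "Frequency proportional to load": go to max above the threshold,
     * otherwise scale the target linearly with the measured load. */
    static unsigned int next_freq(unsigned int load, unsigned int max_khz)
    {
            if (load > UP_THRESHOLD)
                    return max_khz;
            return load * max_khz / 100;
    }

    int main(void)
    {
            printf("load 95%% -> %u kHz\n", next_freq(95, 2400000));
            printf("load 40%% -> %u kHz\n", next_freq(40, 2400000));
            return 0;
    }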
The deferred @@ -512,6 +535,8 @@ static int od_init(struct dbs_data *dbs_data) dbs_data->min_sampling_rate = MICRO_FREQUENCY_MIN_SAMPLE_RATE; } else { tuners->up_threshold = DEF_FREQUENCY_UP_THRESHOLD; + tuners->adj_up_threshold = DEF_FREQUENCY_UP_THRESHOLD - + DEF_FREQUENCY_DOWN_DIFFERENTIAL; /* For correct statistics, we need 10 ticks for each measure */ dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO * diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c index 3d62cc03e81..fe588802490 100644 --- a/drivers/cpufreq/cpufreq_stats.c +++ b/drivers/cpufreq/cpufreq_stats.c @@ -86,7 +86,7 @@ static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf) for (i = 0; i < stat->state_num; i++) { len += sprintf(buf + len, "%u %llu\n", stat->freq_table[i], (unsigned long long) - jiffies_64_to_clock_t(stat->time_in_state[i])); + cputime64_to_clock_t(stat->time_in_state[i])); } return len; } diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index decf84e7194..34d19b1984a 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -599,7 +599,6 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy) if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) { limits.min_perf_pct = 100; limits.min_perf = int_tofp(1); - limits.max_policy_pct = 100; limits.max_perf_pct = 100; limits.max_perf = int_tofp(1); limits.no_turbo = 0; diff --git a/drivers/cpufreq/powernow-k6.c b/drivers/cpufreq/powernow-k6.c index e7e87e6108b..af23e0b9ec9 100644 --- a/drivers/cpufreq/powernow-k6.c +++ b/drivers/cpufreq/powernow-k6.c @@ -26,108 +26,41 @@ static unsigned int busfreq; /* FSB, in 10 kHz */ static unsigned int max_multiplier; -static unsigned int param_busfreq = 0; -static unsigned int param_max_multiplier = 0; - -module_param_named(max_multiplier, param_max_multiplier, uint, S_IRUGO); -MODULE_PARM_DESC(max_multiplier, "Maximum multiplier (allowed values: 20 30 35 40 45 50 55 60)"); - -module_param_named(bus_frequency, param_busfreq, uint, S_IRUGO); -MODULE_PARM_DESC(bus_frequency, "Bus frequency in kHz"); /* Clock ratio multiplied by 10 - see table 27 in AMD#23446 */ static struct cpufreq_frequency_table clock_ratio[] = { - {60, /* 110 -> 6.0x */ 0}, - {55, /* 011 -> 5.5x */ 0}, - {50, /* 001 -> 5.0x */ 0}, {45, /* 000 -> 4.5x */ 0}, + {50, /* 001 -> 5.0x */ 0}, {40, /* 010 -> 4.0x */ 0}, - {35, /* 111 -> 3.5x */ 0}, - {30, /* 101 -> 3.0x */ 0}, + {55, /* 011 -> 5.5x */ 0}, {20, /* 100 -> 2.0x */ 0}, + {30, /* 101 -> 3.0x */ 0}, + {60, /* 110 -> 6.0x */ 0}, + {35, /* 111 -> 3.5x */ 0}, {0, CPUFREQ_TABLE_END} }; -static const u8 index_to_register[8] = { 6, 3, 1, 0, 2, 7, 5, 4 }; -static const u8 register_to_index[8] = { 3, 2, 4, 1, 7, 6, 0, 5 }; - -static const struct { - unsigned freq; - unsigned mult; -} usual_frequency_table[] = { - { 400000, 40 }, // 100 * 4 - { 450000, 45 }, // 100 * 4.5 - { 475000, 50 }, // 95 * 5 - { 500000, 50 }, // 100 * 5 - { 506250, 45 }, // 112.5 * 4.5 - { 533500, 55 }, // 97 * 5.5 - { 550000, 55 }, // 100 * 5.5 - { 562500, 50 }, // 112.5 * 5 - { 570000, 60 }, // 95 * 6 - { 600000, 60 }, // 100 * 6 - { 618750, 55 }, // 112.5 * 5.5 - { 660000, 55 }, // 120 * 5.5 - { 675000, 60 }, // 112.5 * 6 - { 720000, 60 }, // 120 * 6 -}; - -#define FREQ_RANGE 3000 /** * powernow_k6_get_cpu_multiplier - returns the current FSB multiplier * - * Returns the current setting of the frequency multiplier. Core clock + * Returns the current setting of the frequency multiplier. 
Core clock * speed is frequency of the Front-Side Bus multiplied with this value. */ static int powernow_k6_get_cpu_multiplier(void) { - unsigned long invalue = 0; + u64 invalue = 0; u32 msrval; - local_irq_disable(); - msrval = POWERNOW_IOPORT + 0x1; wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */ invalue = inl(POWERNOW_IOPORT + 0x8); msrval = POWERNOW_IOPORT + 0x0; wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */ - local_irq_enable(); - - return clock_ratio[register_to_index[(invalue >> 5)&7]].index; + return clock_ratio[(invalue >> 5)&7].index; } -static void powernow_k6_set_cpu_multiplier(unsigned int best_i) -{ - unsigned long outvalue, invalue; - unsigned long msrval; - unsigned long cr0; - - /* we now need to transform best_i to the BVC format, see AMD#23446 */ - - /* - * The processor doesn't respond to inquiry cycles while changing the - * frequency, so we must disable cache. - */ - local_irq_disable(); - cr0 = read_cr0(); - write_cr0(cr0 | X86_CR0_CD); - wbinvd(); - - outvalue = (1<<12) | (1<<10) | (1<<9) | (index_to_register[best_i]<<5); - - msrval = POWERNOW_IOPORT + 0x1; - wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */ - invalue = inl(POWERNOW_IOPORT + 0x8); - invalue = invalue & 0x1f; - outvalue = outvalue | invalue; - outl(outvalue, (POWERNOW_IOPORT + 0x8)); - msrval = POWERNOW_IOPORT + 0x0; - wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */ - - write_cr0(cr0); - local_irq_enable(); -} /** * powernow_k6_set_state - set the PowerNow! multiplier @@ -137,6 +70,8 @@ static void powernow_k6_set_cpu_multiplier(unsigned int best_i) */ static void powernow_k6_set_state(unsigned int best_i) { + unsigned long outvalue = 0, invalue = 0; + unsigned long msrval; struct cpufreq_freqs freqs; if (clock_ratio[best_i].index > max_multiplier) { @@ -150,7 +85,18 @@ static void powernow_k6_set_state(unsigned int best_i) cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); - powernow_k6_set_cpu_multiplier(best_i); + /* we now need to transform best_i to the BVC format, see AMD#23446 */ + + outvalue = (1<<12) | (1<<10) | (1<<9) | (best_i<<5); + + msrval = POWERNOW_IOPORT + 0x1; + wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */ + invalue = inl(POWERNOW_IOPORT + 0x8); + invalue = invalue & 0xf; + outvalue = outvalue | invalue; + outl(outvalue , (POWERNOW_IOPORT + 0x8)); + msrval = POWERNOW_IOPORT + 0x0; + wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */ cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); @@ -195,57 +141,18 @@ static int powernow_k6_target(struct cpufreq_policy *policy, return 0; } + static int powernow_k6_cpu_init(struct cpufreq_policy *policy) { unsigned int i, f; int result; - unsigned khz; if (policy->cpu != 0) return -ENODEV; - max_multiplier = 0; - khz = cpu_khz; - for (i = 0; i < ARRAY_SIZE(usual_frequency_table); i++) { - if (khz >= usual_frequency_table[i].freq - FREQ_RANGE && - khz <= usual_frequency_table[i].freq + FREQ_RANGE) { - khz = usual_frequency_table[i].freq; - max_multiplier = usual_frequency_table[i].mult; - break; - } - } - if (param_max_multiplier) { - for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) { - if (clock_ratio[i].index == param_max_multiplier) { - max_multiplier = param_max_multiplier; - goto have_max_multiplier; - } - } - printk(KERN_ERR "powernow-k6: invalid max_multiplier parameter, valid parameters 20, 30, 35, 40, 45, 50, 55, 60\n"); - return -EINVAL; - } - - if (!max_multiplier) { - printk(KERN_WARNING "powernow-k6: unknown frequency %u, cannot determine current 
multiplier\n", khz); - printk(KERN_WARNING "powernow-k6: use module parameters max_multiplier and bus_frequency\n"); - return -EOPNOTSUPP; - } - -have_max_multiplier: - param_max_multiplier = max_multiplier; - - if (param_busfreq) { - if (param_busfreq >= 50000 && param_busfreq <= 150000) { - busfreq = param_busfreq / 10; - goto have_busfreq; - } - printk(KERN_ERR "powernow-k6: invalid bus_frequency parameter, allowed range 50000 - 150000 kHz\n"); - return -EINVAL; - } - - busfreq = khz / max_multiplier; -have_busfreq: - param_busfreq = busfreq * 10; + /* get frequencies */ + max_multiplier = powernow_k6_get_cpu_multiplier(); + busfreq = cpu_khz / max_multiplier; /* table init */ for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) { @@ -257,7 +164,7 @@ have_busfreq: } /* cpuinfo and default policy values */ - policy->cpuinfo.transition_latency = 500000; + policy->cpuinfo.transition_latency = 200000; policy->cur = busfreq * max_multiplier; result = cpufreq_frequency_table_cpuinfo(policy, clock_ratio); diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c index 0eabd81e1a9..9f25f529602 100644 --- a/drivers/crypto/caam/error.c +++ b/drivers/crypto/caam/error.c @@ -16,13 +16,9 @@ char *tmp; \ \ tmp = kmalloc(sizeof(format) + max_alloc, GFP_ATOMIC); \ - if (likely(tmp)) { \ - sprintf(tmp, format, param); \ - strcat(str, tmp); \ - kfree(tmp); \ - } else { \ - strcat(str, "kmalloc failure in SPRINTFCAT"); \ - } \ + sprintf(tmp, format, param); \ + strcat(str, tmp); \ + kfree(tmp); \ } static void report_jump_idx(u32 status, char *outstr) diff --git a/drivers/crypto/ux500/cryp/cryp_core.c b/drivers/crypto/ux500/cryp/cryp_core.c index 3833bd71cc5..32f480622b9 100644 --- a/drivers/crypto/ux500/cryp/cryp_core.c +++ b/drivers/crypto/ux500/cryp/cryp_core.c @@ -190,7 +190,7 @@ static void add_session_id(struct cryp_ctx *ctx) static irqreturn_t cryp_interrupt_handler(int irq, void *param) { struct cryp_ctx *ctx; - int count; + int i; struct cryp_device_data *device_data; if (param == NULL) { @@ -215,11 +215,12 @@ static irqreturn_t cryp_interrupt_handler(int irq, void *param) if (cryp_pending_irq_src(device_data, CRYP_IRQ_SRC_OUTPUT_FIFO)) { if (ctx->outlen / ctx->blocksize > 0) { - count = ctx->blocksize / 4; - - readsl(&device_data->base->dout, ctx->outdata, count); - ctx->outdata += count; - ctx->outlen -= count; + for (i = 0; i < ctx->blocksize / 4; i++) { + *(ctx->outdata) = readl_relaxed( + &device_data->base->dout); + ctx->outdata += 4; + ctx->outlen -= 4; + } if (ctx->outlen == 0) { cryp_disable_irq_src(device_data, @@ -229,12 +230,12 @@ static irqreturn_t cryp_interrupt_handler(int irq, void *param) } else if (cryp_pending_irq_src(device_data, CRYP_IRQ_SRC_INPUT_FIFO)) { if (ctx->datalen / ctx->blocksize > 0) { - count = ctx->blocksize / 4; - - writesl(&device_data->base->din, ctx->indata, count); - - ctx->indata += count; - ctx->datalen -= count; + for (i = 0 ; i < ctx->blocksize / 4; i++) { + writel_relaxed(ctx->indata, + &device_data->base->din); + ctx->indata += 4; + ctx->datalen -= 4; + } if (ctx->datalen == 0) cryp_disable_irq_src(device_data, diff --git a/drivers/edac/cpc925_edac.c b/drivers/edac/cpc925_edac.c index 1e08ce765f0..7f3c57113ba 100644 --- a/drivers/edac/cpc925_edac.c +++ b/drivers/edac/cpc925_edac.c @@ -562,7 +562,7 @@ static void cpc925_mc_check(struct mem_ctl_info *mci) if (apiexcp & UECC_EXCP_DETECTED) { cpc925_mc_printk(mci, KERN_INFO, "DRAM UECC Fault\n"); - edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, + 
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, pfn, offset, 0, csrow, -1, -1, mci->ctl_name, ""); diff --git a/drivers/edac/e7xxx_edac.c b/drivers/edac/e7xxx_edac.c index 2697deae3ab..1c4056a5038 100644 --- a/drivers/edac/e7xxx_edac.c +++ b/drivers/edac/e7xxx_edac.c @@ -226,7 +226,7 @@ static void process_ce(struct mem_ctl_info *mci, struct e7xxx_error_info *info) static void process_ce_no_info(struct mem_ctl_info *mci) { edac_dbg(3, "\n"); - edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, 0, 0, 0, -1, -1, -1, + edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0, -1, -1, -1, "e7xxx CE log register overflow", ""); } diff --git a/drivers/edac/i3200_edac.c b/drivers/edac/i3200_edac.c index 71b26513b93..aa44c1718f5 100644 --- a/drivers/edac/i3200_edac.c +++ b/drivers/edac/i3200_edac.c @@ -242,11 +242,11 @@ static void i3200_process_error_info(struct mem_ctl_info *mci, -1, -1, "i3000 UE", ""); } else if (log & I3200_ECCERRLOG_CE) { - edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, + edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, eccerrlog_syndrome(log), eccerrlog_row(channel, log), -1, -1, - "i3000 CE", ""); + "i3000 UE", ""); } } } diff --git a/drivers/edac/i82860_edac.c b/drivers/edac/i82860_edac.c index b93b0d006eb..3e3e431c830 100644 --- a/drivers/edac/i82860_edac.c +++ b/drivers/edac/i82860_edac.c @@ -124,7 +124,7 @@ static int i82860_process_error_info(struct mem_ctl_info *mci, dimm->location[0], dimm->location[1], -1, "i82860 UE", ""); else - edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, + edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, info->eap, 0, info->derrsyn, dimm->location[0], dimm->location[1], -1, "i82860 CE", ""); diff --git a/drivers/extcon/extcon-max77693.c b/drivers/extcon/extcon-max77693.c index 9966fc0a527..b56bdaa27d4 100644 --- a/drivers/extcon/extcon-max77693.c +++ b/drivers/extcon/extcon-max77693.c @@ -1180,7 +1180,7 @@ static int max77693_muic_probe(struct platform_device *pdev) /* Initialize MUIC register by using platform data or default data */ - if (pdata && pdata->muic_data) { + if (pdata->muic_data) { init_data = pdata->muic_data->init_data; num_init_data = pdata->muic_data->num_init_data; } else { @@ -1213,7 +1213,7 @@ static int max77693_muic_probe(struct platform_device *pdev) = init_data[i].data; } - if (pdata && pdata->muic_data) { + if (pdata->muic_data) { struct max77693_muic_platform_data *muic_pdata = pdata->muic_data; /* diff --git a/drivers/extcon/extcon-max8997.c b/drivers/extcon/extcon-max8997.c index 09f4a9374cf..67d6738d85a 100644 --- a/drivers/extcon/extcon-max8997.c +++ b/drivers/extcon/extcon-max8997.c @@ -712,7 +712,7 @@ static int max8997_muic_probe(struct platform_device *pdev) goto err_irq; } - if (pdata && pdata->muic_pdata) { + if (pdata->muic_pdata) { struct max8997_muic_platform_data *muic_pdata = pdata->muic_pdata; diff --git a/drivers/firewire/core-device.c b/drivers/firewire/core-device.c index 392ad513dc0..664a6ff0a82 100644 --- a/drivers/firewire/core-device.c +++ b/drivers/firewire/core-device.c @@ -895,7 +895,7 @@ static int lookup_existing_device(struct device *dev, void *data) old->config_rom_retries = 0; fw_notice(card, "rediscovered device %s\n", dev_name(dev)); - old->workfn = fw_device_update; + PREPARE_DELAYED_WORK(&old->work, fw_device_update); fw_schedule_device_work(old, 0); if (current_node == card->root_node) @@ -1054,7 +1054,7 @@ static void fw_device_init(struct work_struct *work) if (atomic_cmpxchg(&device->state, FW_DEVICE_INITIALIZING, FW_DEVICE_RUNNING) == 
FW_DEVICE_GONE) { - device->workfn = fw_device_shutdown; + PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown); fw_schedule_device_work(device, SHUTDOWN_DELAY); } else { fw_notice(card, "created device %s: GUID %08x%08x, S%d00\n", @@ -1175,20 +1175,13 @@ static void fw_device_refresh(struct work_struct *work) dev_name(&device->device), fw_rcode_string(ret)); gone: atomic_set(&device->state, FW_DEVICE_GONE); - device->workfn = fw_device_shutdown; + PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown); fw_schedule_device_work(device, SHUTDOWN_DELAY); out: if (node_id == card->root_node->node_id) fw_schedule_bm_work(card, 0); } -static void fw_device_workfn(struct work_struct *work) -{ - struct fw_device *device = container_of(to_delayed_work(work), - struct fw_device, work); - device->workfn(work); -} - void fw_node_event(struct fw_card *card, struct fw_node *node, int event) { struct fw_device *device; @@ -1238,8 +1231,7 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event) * power-up after getting plugged in. We schedule the * first config rom scan half a second after bus reset. */ - device->workfn = fw_device_init; - INIT_DELAYED_WORK(&device->work, fw_device_workfn); + INIT_DELAYED_WORK(&device->work, fw_device_init); fw_schedule_device_work(device, INITIAL_DELAY); break; @@ -1255,7 +1247,7 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event) if (atomic_cmpxchg(&device->state, FW_DEVICE_RUNNING, FW_DEVICE_INITIALIZING) == FW_DEVICE_RUNNING) { - device->workfn = fw_device_refresh; + PREPARE_DELAYED_WORK(&device->work, fw_device_refresh); fw_schedule_device_work(device, device->is_local ? 0 : INITIAL_DELAY); } @@ -1270,7 +1262,7 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event) smp_wmb(); /* update node_id before generation */ device->generation = card->generation; if (atomic_read(&device->state) == FW_DEVICE_RUNNING) { - device->workfn = fw_device_update; + PREPARE_DELAYED_WORK(&device->work, fw_device_update); fw_schedule_device_work(device, 0); } break; @@ -1295,7 +1287,7 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event) device = node->data; if (atomic_xchg(&device->state, FW_DEVICE_GONE) == FW_DEVICE_RUNNING) { - device->workfn = fw_device_shutdown; + PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown); fw_schedule_device_work(device, list_empty(&card->link) ? 
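The firewire core-device hunks above drop the workfn trampoline (fw_device_workfn() dispatching through device->workfn) and go back to PREPARE_DELAYED_WORK(). The trampoline pattern keeps one fixed work handler and retargets behaviour by swapping a function pointer inside the object; a minimal sketch of that dispatch, with made-up names and no work queue:

    #include <stdio.h>

    struct job;
    typedef void (*job_fn)(struct job *);

    struct job {
            job_fn fn;   /* current handler; swapped as the state machine moves */
    };

    /* Single handler given to the queue; it dispatches to whatever fn is now. */
    static void job_trampoline(struct job *j)
    {
            j->fn(j);
    }

    static void do_login(struct job *j)     { (void)j; puts("login"); }
    static void do_reconnect(struct job *j) { (void)j; puts("reconnect"); }

    int main(void)
    {
            struct job j = { .fn = do_login };

            job_trampoline(&j);   /* login */
            j.fn = do_reconnect;  /* retarget without touching the queue entry */
            job_trampoline(&j);   /* reconnect */
            return 0;
    }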
0 : SHUTDOWN_DELAY); } diff --git a/drivers/firewire/net.c b/drivers/firewire/net.c index 7bdb6fe6323..815b0fcbe91 100644 --- a/drivers/firewire/net.c +++ b/drivers/firewire/net.c @@ -929,6 +929,8 @@ static void fwnet_write_complete(struct fw_card *card, int rcode, if (rcode == RCODE_COMPLETE) { fwnet_transmit_packet_done(ptask); } else { + fwnet_transmit_packet_failed(ptask); + if (printk_timed_ratelimit(&j, 1000) || rcode != last_rcode) { dev_err(&ptask->dev->netdev->dev, "fwnet_write_complete failed: %x (skipped %d)\n", @@ -936,10 +938,8 @@ static void fwnet_write_complete(struct fw_card *card, int rcode, errors_skipped = 0; last_rcode = rcode; - } else { + } else errors_skipped++; - } - fwnet_transmit_packet_failed(ptask); } } diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c index 0f3e3047e29..afb701ec90c 100644 --- a/drivers/firewire/ohci.c +++ b/drivers/firewire/ohci.c @@ -271,7 +271,6 @@ static inline struct fw_ohci *fw_ohci(struct fw_card *card) static char ohci_driver_name[] = KBUILD_MODNAME; -#define PCI_VENDOR_ID_PINNACLE_SYSTEMS 0x11bd #define PCI_DEVICE_ID_AGERE_FW643 0x5901 #define PCI_DEVICE_ID_CREATIVE_SB1394 0x4001 #define PCI_DEVICE_ID_JMICRON_JMB38X_FW 0x2380 @@ -279,15 +278,17 @@ static char ohci_driver_name[] = KBUILD_MODNAME; #define PCI_DEVICE_ID_TI_TSB12LV26 0x8020 #define PCI_DEVICE_ID_TI_TSB82AA2 0x8025 #define PCI_DEVICE_ID_VIA_VT630X 0x3044 +#define PCI_VENDOR_ID_PINNACLE_SYSTEMS 0x11bd #define PCI_REV_ID_VIA_VT6306 0x46 -#define QUIRK_CYCLE_TIMER 0x1 -#define QUIRK_RESET_PACKET 0x2 -#define QUIRK_BE_HEADERS 0x4 -#define QUIRK_NO_1394A 0x8 -#define QUIRK_NO_MSI 0x10 -#define QUIRK_TI_SLLZ059 0x20 -#define QUIRK_IR_WAKE 0x40 +#define QUIRK_CYCLE_TIMER 1 +#define QUIRK_RESET_PACKET 2 +#define QUIRK_BE_HEADERS 4 +#define QUIRK_NO_1394A 8 +#define QUIRK_NO_MSI 16 +#define QUIRK_TI_SLLZ059 32 +#define QUIRK_IR_WAKE 64 +#define QUIRK_PHY_LCTRL_TIMEOUT 128 /* In case of multiple matches in ohci_quirks[], only the first one is used. */ static const struct { @@ -300,7 +301,10 @@ static const struct { QUIRK_BE_HEADERS}, {PCI_VENDOR_ID_ATT, PCI_DEVICE_ID_AGERE_FW643, 6, - QUIRK_NO_MSI}, + QUIRK_PHY_LCTRL_TIMEOUT | QUIRK_NO_MSI}, + + {PCI_VENDOR_ID_ATT, PCI_ANY_ID, PCI_ANY_ID, + QUIRK_PHY_LCTRL_TIMEOUT}, {PCI_VENDOR_ID_CREATIVE, PCI_DEVICE_ID_CREATIVE_SB1394, PCI_ANY_ID, QUIRK_RESET_PACKET}, @@ -347,6 +351,7 @@ MODULE_PARM_DESC(quirks, "Chip quirks (default = 0" ", disable MSI = " __stringify(QUIRK_NO_MSI) ", TI SLLZ059 erratum = " __stringify(QUIRK_TI_SLLZ059) ", IR wake unreliable = " __stringify(QUIRK_IR_WAKE) + ", phy LCtrl timeout = " __stringify(QUIRK_PHY_LCTRL_TIMEOUT) ")"); #define OHCI_PARAM_DEBUG_AT_AR 1 @@ -2288,6 +2293,9 @@ static int ohci_enable(struct fw_card *card, * TI TSB82AA2 + TSB81BA3(A) cards signal LPS enabled early but * cannot actually use the phy at that time. These need tens of * millisecods pause between LPS write and first phy access too. + * + * But do not wait for 50msec on Agere/LSI cards. Their phy + * arbitration state machine may time out during such a long wait. 
*/ reg_write(ohci, OHCI1394_HCControlSet, @@ -2295,8 +2303,11 @@ static int ohci_enable(struct fw_card *card, OHCI1394_HCControl_postedWriteEnable); flush_writes(ohci); - for (lps = 0, i = 0; !lps && i < 3; i++) { + if (!(ohci->quirks & QUIRK_PHY_LCTRL_TIMEOUT)) msleep(50); + + for (lps = 0, i = 0; !lps && i < 150; i++) { + msleep(1); lps = reg_read(ohci, OHCI1394_HCControlSet) & OHCI1394_HCControl_LPS; } diff --git a/drivers/firewire/sbp2.c b/drivers/firewire/sbp2.c index 1b1c37dd830..47674b91384 100644 --- a/drivers/firewire/sbp2.c +++ b/drivers/firewire/sbp2.c @@ -146,7 +146,6 @@ struct sbp2_logical_unit { */ int generation; int retries; - work_func_t workfn; struct delayed_work work; bool has_sdev; bool blocked; @@ -865,7 +864,7 @@ static void sbp2_login(struct work_struct *work) /* set appropriate retry limit(s) in BUSY_TIMEOUT register */ sbp2_set_busy_timeout(lu); - lu->workfn = sbp2_reconnect; + PREPARE_DELAYED_WORK(&lu->work, sbp2_reconnect); sbp2_agent_reset(lu); /* This was a re-login. */ @@ -919,7 +918,7 @@ static void sbp2_login(struct work_struct *work) * If a bus reset happened, sbp2_update will have requeued * lu->work already. Reset the work from reconnect to login. */ - lu->workfn = sbp2_login; + PREPARE_DELAYED_WORK(&lu->work, sbp2_login); } static void sbp2_reconnect(struct work_struct *work) @@ -953,7 +952,7 @@ static void sbp2_reconnect(struct work_struct *work) lu->retries++ >= 5) { dev_err(tgt_dev(tgt), "failed to reconnect\n"); lu->retries = 0; - lu->workfn = sbp2_login; + PREPARE_DELAYED_WORK(&lu->work, sbp2_login); } sbp2_queue_work(lu, DIV_ROUND_UP(HZ, 5)); @@ -973,13 +972,6 @@ static void sbp2_reconnect(struct work_struct *work) sbp2_conditionally_unblock(lu); } -static void sbp2_lu_workfn(struct work_struct *work) -{ - struct sbp2_logical_unit *lu = container_of(to_delayed_work(work), - struct sbp2_logical_unit, work); - lu->workfn(work); -} - static int sbp2_add_logical_unit(struct sbp2_target *tgt, int lun_entry) { struct sbp2_logical_unit *lu; @@ -1006,8 +998,7 @@ static int sbp2_add_logical_unit(struct sbp2_target *tgt, int lun_entry) lu->blocked = false; ++tgt->dont_block; INIT_LIST_HEAD(&lu->orb_list); - lu->workfn = sbp2_login; - INIT_DELAYED_WORK(&lu->work, sbp2_lu_workfn); + INIT_DELAYED_WORK(&lu->work, sbp2_login); list_add_tail(&lu->link, &tgt->lu_list); return 0; diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c index 7dbc319e1cf..391c67b182d 100644 --- a/drivers/firmware/efi/vars.c +++ b/drivers/firmware/efi/vars.c @@ -481,7 +481,7 @@ EXPORT_SYMBOL_GPL(efivar_entry_remove); */ static void efivar_entry_list_del_unlock(struct efivar_entry *entry) { - lockdep_assert_held(&__efivars->lock); + WARN_ON(!spin_is_locked(&__efivars->lock)); list_del(&entry->list); spin_unlock_irq(&__efivars->lock); @@ -507,7 +507,7 @@ int __efivar_entry_delete(struct efivar_entry *entry) const struct efivar_operations *ops = __efivars->ops; efi_status_t status; - lockdep_assert_held(&__efivars->lock); + WARN_ON(!spin_is_locked(&__efivars->lock)); status = ops->set_variable(entry->var.VariableName, &entry->var.VendorGuid, @@ -667,7 +667,7 @@ struct efivar_entry *efivar_entry_find(efi_char16_t *name, efi_guid_t guid, int strsize1, strsize2; bool found = false; - lockdep_assert_held(&__efivars->lock); + WARN_ON(!spin_is_locked(&__efivars->lock)); list_for_each_entry_safe(entry, n, head, list) { strsize1 = ucs2_strsize(name, 1024); @@ -731,7 +731,7 @@ int __efivar_entry_get(struct efivar_entry *entry, u32 *attributes, const struct efivar_operations *ops = 
__efivars->ops; efi_status_t status; - lockdep_assert_held(&__efivars->lock); + WARN_ON(!spin_is_locked(&__efivars->lock)); status = ops->get_variable(entry->var.VariableName, &entry->var.VendorGuid, diff --git a/drivers/gpio/gpio-mxs.c b/drivers/gpio/gpio-mxs.c index d599fc42ae8..f8e6af20dfb 100644 --- a/drivers/gpio/gpio-mxs.c +++ b/drivers/gpio/gpio-mxs.c @@ -214,8 +214,7 @@ static void __init mxs_gpio_init_gc(struct mxs_gpio_port *port, int irq_base) ct->regs.ack = PINCTRL_IRQSTAT(port) + MXS_CLR; ct->regs.mask = PINCTRL_IRQEN(port); - irq_setup_generic_chip(gc, IRQ_MSK(32), IRQ_GC_INIT_NESTED_LOCK, - IRQ_NOREQUEST, 0); + irq_setup_generic_chip(gc, IRQ_MSK(32), 0, IRQ_NOREQUEST, 0); } static int mxs_gpio_to_irq(struct gpio_chip *gc, unsigned offset) diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c index 96f874a508e..f60fd7bd118 100644 --- a/drivers/gpu/drm/ast/ast_main.c +++ b/drivers/gpu/drm/ast/ast_main.c @@ -100,7 +100,7 @@ static int ast_detect_chip(struct drm_device *dev) } ast->vga2_clone = false; } else { - ast->chip = AST2000; + ast->chip = 2000; DRM_INFO("AST 2000 detected\n"); } } diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c index e8f6418b6de..7fc9f7272b5 100644 --- a/drivers/gpu/drm/ast/ast_mode.c +++ b/drivers/gpu/drm/ast/ast_mode.c @@ -1012,8 +1012,8 @@ static u32 copy_cursor_image(u8 *src, u8 *dst, int width, int height) srcdata32[1].ul = *((u32 *)(srcxor + 4)) & 0xf0f0f0f0; data32.b[0] = srcdata32[0].b[1] | (srcdata32[0].b[0] >> 4); data32.b[1] = srcdata32[0].b[3] | (srcdata32[0].b[2] >> 4); - data32.b[2] = srcdata32[1].b[1] | (srcdata32[1].b[0] >> 4); - data32.b[3] = srcdata32[1].b[3] | (srcdata32[1].b[2] >> 4); + data32.b[2] = srcdata32[0].b[1] | (srcdata32[1].b[0] >> 4); + data32.b[3] = srcdata32[0].b[3] | (srcdata32[1].b[2] >> 4); writel(data32.ul, dstxor); csum += data32.ul; diff --git a/drivers/gpu/drm/cirrus/cirrus_drv.c b/drivers/gpu/drm/cirrus/cirrus_drv.c index 64bfc235021..8ecb601152e 100644 --- a/drivers/gpu/drm/cirrus/cirrus_drv.c +++ b/drivers/gpu/drm/cirrus/cirrus_drv.c @@ -11,7 +11,6 @@ #include <linux/module.h> #include <linux/console.h> #include <drm/drmP.h> -#include <drm/drm_crtc_helper.h> #include "cirrus_drv.h" @@ -76,41 +75,6 @@ static void cirrus_pci_remove(struct pci_dev *pdev) drm_put_dev(dev); } -static int cirrus_pm_suspend(struct device *dev) -{ - struct pci_dev *pdev = to_pci_dev(dev); - struct drm_device *drm_dev = pci_get_drvdata(pdev); - struct cirrus_device *cdev = drm_dev->dev_private; - - drm_kms_helper_poll_disable(drm_dev); - - if (cdev->mode_info.gfbdev) { - console_lock(); - fb_set_suspend(cdev->mode_info.gfbdev->helper.fbdev, 1); - console_unlock(); - } - - return 0; -} - -static int cirrus_pm_resume(struct device *dev) -{ - struct pci_dev *pdev = to_pci_dev(dev); - struct drm_device *drm_dev = pci_get_drvdata(pdev); - struct cirrus_device *cdev = drm_dev->dev_private; - - drm_helper_resume_force_mode(drm_dev); - - if (cdev->mode_info.gfbdev) { - console_lock(); - fb_set_suspend(cdev->mode_info.gfbdev->helper.fbdev, 0); - console_unlock(); - } - - drm_kms_helper_poll_enable(drm_dev); - return 0; -} - static const struct file_operations cirrus_driver_fops = { .owner = THIS_MODULE, .open = drm_open, @@ -141,17 +105,11 @@ static struct drm_driver driver = { .dumb_destroy = cirrus_dumb_destroy, }; -static const struct dev_pm_ops cirrus_pm_ops = { - SET_SYSTEM_SLEEP_PM_OPS(cirrus_pm_suspend, - cirrus_pm_resume) -}; - static struct pci_driver cirrus_pci_driver = { .name = 
DRIVER_NAME, .id_table = pciidlist, .probe = cirrus_pci_probe, .remove = cirrus_pci_remove, - .driver.pm = &cirrus_pm_ops, }; static int __init cirrus_init(void) diff --git a/drivers/gpu/drm/cirrus/cirrus_mode.c b/drivers/gpu/drm/cirrus/cirrus_mode.c index b86f68d8b72..379a47ea99f 100644 --- a/drivers/gpu/drm/cirrus/cirrus_mode.c +++ b/drivers/gpu/drm/cirrus/cirrus_mode.c @@ -308,9 +308,6 @@ static int cirrus_crtc_mode_set(struct drm_crtc *crtc, WREG_HDR(hdr); cirrus_crtc_do_set_base(crtc, old_fb, x, y, 0); - - /* Unblank (needed on S3 resume, vgabios doesn't do it then) */ - outb(0x20, 0x3c0); return 0; } diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c index 6416d0d0739..117ce381368 100644 --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c @@ -635,9 +635,9 @@ i915_gem_execbuffer_relocate_slow(struct drm_device *dev, * relocations were valid. */ for (j = 0; j < exec[i].relocation_count; j++) { - if (__copy_to_user(&user_relocs[j].presumed_offset, - &invalid_offset, - sizeof(invalid_offset))) { + if (copy_to_user(&user_relocs[j].presumed_offset, + &invalid_offset, + sizeof(invalid_offset))) { ret = -EFAULT; mutex_lock(&dev->struct_mutex); goto err; @@ -1151,21 +1151,18 @@ i915_gem_execbuffer(struct drm_device *dev, void *data, ret = i915_gem_do_execbuffer(dev, data, file, &exec2, exec2_list); if (!ret) { - struct drm_i915_gem_exec_object __user *user_exec_list = - to_user_ptr(args->buffers_ptr); - /* Copy the new buffer offsets back to the user's exec list. */ - for (i = 0; i < args->buffer_count; i++) { - ret = __copy_to_user(&user_exec_list[i].offset, - &exec2_list[i].offset, - sizeof(user_exec_list[i].offset)); - if (ret) { - ret = -EFAULT; - DRM_DEBUG("failed to copy %d exec entries " - "back to user (%d)\n", - args->buffer_count, ret); - break; - } + for (i = 0; i < args->buffer_count; i++) + exec_list[i].offset = exec2_list[i].offset; + /* ... and back out to userspace */ + ret = copy_to_user(to_user_ptr(args->buffers_ptr), + exec_list, + sizeof(*exec_list) * args->buffer_count); + if (ret) { + ret = -EFAULT; + DRM_DEBUG("failed to copy %d exec entries " + "back to user (%d)\n", + args->buffer_count, ret); } } @@ -1211,21 +1208,14 @@ i915_gem_execbuffer2(struct drm_device *dev, void *data, ret = i915_gem_do_execbuffer(dev, data, file, args, exec2_list); if (!ret) { /* Copy the new buffer offsets back to the user's exec list. 
*/ - struct drm_i915_gem_exec_object2 *user_exec_list = - to_user_ptr(args->buffers_ptr); - int i; - - for (i = 0; i < args->buffer_count; i++) { - ret = __copy_to_user(&user_exec_list[i].offset, - &exec2_list[i].offset, - sizeof(user_exec_list[i].offset)); - if (ret) { - ret = -EFAULT; - DRM_DEBUG("failed to copy %d exec entries " - "back to user\n", - args->buffer_count); - break; - } + ret = copy_to_user(to_user_ptr(args->buffers_ptr), + exec2_list, + sizeof(*exec2_list) * args->buffer_count); + if (ret) { + ret = -EFAULT; + DRM_DEBUG("failed to copy %d exec entries " + "back to user (%d)\n", + args->buffer_count, ret); } } diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c index 49acec15504..95070b2124c 100644 --- a/drivers/gpu/drm/i915/intel_bios.c +++ b/drivers/gpu/drm/i915/intel_bios.c @@ -657,7 +657,7 @@ init_vbt_defaults(struct drm_i915_private *dev_priv) DRM_DEBUG_KMS("Set default to SSC at %dMHz\n", dev_priv->lvds_ssc_freq); } -static int intel_no_opregion_vbt_callback(const struct dmi_system_id *id) +static int __init intel_no_opregion_vbt_callback(const struct dmi_system_id *id) { DRM_DEBUG_KMS("Falling back to manually reading VBT from " "VBIOS ROM for %s\n", diff --git a/drivers/gpu/drm/i915/intel_crt.c b/drivers/gpu/drm/i915/intel_crt.c index 53435a9d847..58b4a53715c 100644 --- a/drivers/gpu/drm/i915/intel_crt.c +++ b/drivers/gpu/drm/i915/intel_crt.c @@ -702,7 +702,7 @@ static const struct drm_encoder_funcs intel_crt_enc_funcs = { .destroy = intel_encoder_destroy, }; -static int intel_no_crt_dmi_callback(const struct dmi_system_id *id) +static int __init intel_no_crt_dmi_callback(const struct dmi_system_id *id) { DRM_INFO("Skipping CRT initialization for %s\n", id->ident); return 1; @@ -717,14 +717,6 @@ static const struct dmi_system_id intel_no_crt[] = { DMI_MATCH(DMI_PRODUCT_NAME, "ZGB"), }, }, - { - .callback = intel_no_crt_dmi_callback, - .ident = "DELL XPS 8700", - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), - DMI_MATCH(DMI_PRODUCT_NAME, "XPS 8700"), - }, - }, { } }; diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c index 8814b0dbfc4..54ae96f7bec 100644 --- a/drivers/gpu/drm/i915/intel_display.c +++ b/drivers/gpu/drm/i915/intel_display.c @@ -9123,6 +9123,15 @@ void intel_modeset_init(struct drm_device *dev) intel_disable_fbc(dev); } +static void +intel_connector_break_all_links(struct intel_connector *connector) +{ + connector->base.dpms = DRM_MODE_DPMS_OFF; + connector->base.encoder = NULL; + connector->encoder->connectors_active = false; + connector->encoder->base.crtc = NULL; +} + static void intel_enable_pipe_a(struct drm_device *dev) { struct intel_connector *connector; @@ -9204,17 +9213,8 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc) if (connector->encoder->base.crtc != &crtc->base) continue; - connector->base.dpms = DRM_MODE_DPMS_OFF; - connector->base.encoder = NULL; + intel_connector_break_all_links(connector); } - /* multiple connectors may have the same encoder: - * handle them and break crtc link separately */ - list_for_each_entry(connector, &dev->mode_config.connector_list, - base.head) - if (connector->encoder->base.crtc == &crtc->base) { - connector->encoder->base.crtc = NULL; - connector->encoder->connectors_active = false; - } WARN_ON(crtc->active); crtc->base.enabled = false; @@ -9285,8 +9285,6 @@ static void intel_sanitize_encoder(struct intel_encoder *encoder) drm_get_encoder_name(&encoder->base)); encoder->disable(encoder); } - encoder->base.crtc = 
NULL; - encoder->connectors_active = false; /* Inconsistent output/port/pipe state happens presumably due to * a bug in one of the get_hw_state functions. Or someplace else @@ -9297,8 +9295,8 @@ static void intel_sanitize_encoder(struct intel_encoder *encoder) base.head) { if (connector->encoder != encoder) continue; - connector->base.dpms = DRM_MODE_DPMS_OFF; - connector->base.encoder = NULL; + + intel_connector_break_all_links(connector); } } /* Enabled encoders without active connectors will be fixed in diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c index 08e8e18b3f8..f77d42f7442 100644 --- a/drivers/gpu/drm/i915/intel_lvds.c +++ b/drivers/gpu/drm/i915/intel_lvds.c @@ -694,7 +694,7 @@ static const struct drm_encoder_funcs intel_lvds_enc_funcs = { .destroy = intel_encoder_destroy, }; -static int intel_no_lvds_dmi_callback(const struct dmi_system_id *id) +static int __init intel_no_lvds_dmi_callback(const struct dmi_system_id *id) { DRM_INFO("Skipping LVDS initialization for %s\n", id->ident); return 1; diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c index 4605c3877c9..629527d205d 100644 --- a/drivers/gpu/drm/i915/intel_ringbuffer.c +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c @@ -396,9 +396,6 @@ static int init_ring_common(struct intel_ring_buffer *ring) } } - /* Enforce ordering by reading HEAD register back */ - I915_READ_HEAD(ring); - /* Initialize the ring. This must happen _after_ we've cleared the ring * registers with the above sequence (the readback of the HEAD registers * also enforces ordering), otherwise the hw might lose the new ring diff --git a/drivers/gpu/drm/i915/intel_tv.c b/drivers/gpu/drm/i915/intel_tv.c index 7c4e3126df2..a202d8d08c5 100644 --- a/drivers/gpu/drm/i915/intel_tv.c +++ b/drivers/gpu/drm/i915/intel_tv.c @@ -856,10 +856,6 @@ intel_enable_tv(struct intel_encoder *encoder) struct drm_device *dev = encoder->base.dev; struct drm_i915_private *dev_priv = dev->dev_private; - /* Prevents vblank waits from timing out in intel_tv_detect_type() */ - intel_wait_for_vblank(encoder->base.dev, - to_intel_crtc(encoder->base.crtc)->pipe); - I915_WRITE(TV_CTL, I915_READ(TV_CTL) | TV_ENC_ENABLE); } diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c b/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c index 9ee40042fa3..019eacd8a68 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c @@ -679,7 +679,7 @@ exec_clkcmp(struct nv50_disp_priv *priv, int head, int id, } if (outp == 8) - return conf; + return false; data = exec_lookup(priv, head, outp, ctrl, dcb, &ver, &hdr, &cnt, &len, &info1); if (data == 0x0000) diff --git a/drivers/gpu/drm/nouveau/core/subdev/bios/dcb.c b/drivers/gpu/drm/nouveau/core/subdev/bios/dcb.c index f3edd2841f2..2d9b9d7a799 100644 --- a/drivers/gpu/drm/nouveau/core/subdev/bios/dcb.c +++ b/drivers/gpu/drm/nouveau/core/subdev/bios/dcb.c @@ -124,7 +124,6 @@ dcb_outp_parse(struct nouveau_bios *bios, u8 idx, u8 *ver, u8 *len, struct dcb_output *outp) { u16 dcb = dcb_outp(bios, idx, ver, len); - memset(outp, 0x00, sizeof(*outp)); if (dcb) { if (*ver >= 0x20) { u32 conn = nv_ro32(bios, dcb + 0x00); diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/fan.c b/drivers/gpu/drm/nouveau/core/subdev/therm/fan.c index ea19acd2078..c728380d3d6 100644 --- a/drivers/gpu/drm/nouveau/core/subdev/therm/fan.c +++ b/drivers/gpu/drm/nouveau/core/subdev/therm/fan.c @@ -54,10 +54,8 @@ nouveau_fan_update(struct nouveau_fan *fan, 
bool immediate, int target) /* check that we're not already at the target duty cycle */ duty = fan->get(therm); - if (duty == target) { - spin_unlock_irqrestore(&fan->lock, flags); - return 0; - } + if (duty == target) + goto done; /* smooth out the fanspeed increase/decrease */ if (!immediate && duty >= 0) { @@ -75,15 +73,8 @@ nouveau_fan_update(struct nouveau_fan *fan, bool immediate, int target) nv_debug(therm, "FAN update: %d\n", duty); ret = fan->set(therm, duty); - if (ret) { - spin_unlock_irqrestore(&fan->lock, flags); - return ret; - } - - /* fan speed updated, drop the fan lock before grabbing the - * alarm-scheduling lock and risking a deadlock - */ - spin_unlock_irqrestore(&fan->lock, flags); + if (ret) + goto done; /* schedule next fan update, if not at target speed already */ if (list_empty(&fan->alarm.head) && target != duty) { @@ -101,6 +92,8 @@ nouveau_fan_update(struct nouveau_fan *fan, bool immediate, int target) ptimer->alarm(ptimer, delay * 1000 * 1000, &fan->alarm); } +done: + spin_unlock_irqrestore(&fan->lock, flags); return ret; } diff --git a/drivers/gpu/drm/nouveau/nouveau_acpi.c b/drivers/gpu/drm/nouveau/nouveau_acpi.c index 5cec3a0c6c8..d97f20069d3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_acpi.c +++ b/drivers/gpu/drm/nouveau/nouveau_acpi.c @@ -372,6 +372,9 @@ bool nouveau_acpi_rom_supported(struct pci_dev *pdev) acpi_status status; acpi_handle dhandle, rom_handle; + if (!nouveau_dsm_priv.dsm_detected && !nouveau_dsm_priv.optimus_detected) + return false; + dhandle = DEVICE_ACPI_HANDLE(&pdev->dev); if (!dhandle) return false; diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c index b5df614660a..9b794c933c8 100644 --- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c +++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c @@ -199,7 +199,7 @@ static struct dmm_txn *dmm_txn_init(struct dmm *dmm, struct tcm *tcm) static void dmm_txn_append(struct dmm_txn *txn, struct pat_area *area, struct page **pages, uint32_t npages, uint32_t roll) { - dma_addr_t pat_pa = 0, data_pa = 0; + dma_addr_t pat_pa = 0; uint32_t *data; struct pat *pat; struct refill_engine *engine = txn->engine_handle; @@ -223,9 +223,7 @@ static void dmm_txn_append(struct dmm_txn *txn, struct pat_area *area, .lut_id = engine->tcm->lut_id, }; - data = alloc_dma(txn, 4*i, &data_pa); - /* FIXME: what if data_pa is more than 32-bit ? 
*/ - pat->data_pa = data_pa; + data = alloc_dma(txn, 4*i, &pat->data_pa); while (i--) { int n = i + roll; diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c index 2272c66f184..ebbdf4132e9 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem.c +++ b/drivers/gpu/drm/omapdrm/omap_gem.c @@ -806,7 +806,7 @@ int omap_gem_get_paddr(struct drm_gem_object *obj, omap_obj->paddr = tiler_ssptr(block); omap_obj->block = block; - DBG("got paddr: %pad", &omap_obj->paddr); + DBG("got paddr: %08x", omap_obj->paddr); } omap_obj->paddr_cnt++; @@ -1004,9 +1004,9 @@ void omap_gem_describe(struct drm_gem_object *obj, struct seq_file *m) if (obj->map_list.map) off = (uint64_t)obj->map_list.hash.key; - seq_printf(m, "%08x: %2d (%2d) %08llx %pad (%2d) %p %4d", + seq_printf(m, "%08x: %2d (%2d) %08llx %08Zx (%2d) %p %4d", omap_obj->flags, obj->name, obj->refcount.refcount.counter, - off, &omap_obj->paddr, omap_obj->paddr_cnt, + off, omap_obj->paddr, omap_obj->paddr_cnt, omap_obj->vaddr, omap_obj->roll); if (omap_obj->flags & OMAP_BO_TILED) { @@ -1489,8 +1489,8 @@ void omap_gem_init(struct drm_device *dev) entry->paddr = tiler_ssptr(block); entry->block = block; - DBG("%d:%d: %dx%d: paddr=%pad stride=%d", i, j, w, h, - &entry->paddr, + DBG("%d:%d: %dx%d: paddr=%08x stride=%d", i, j, w, h, + entry->paddr, usergart[i].stride_pfn << PAGE_SHIFT); } } diff --git a/drivers/gpu/drm/omapdrm/omap_plane.c b/drivers/gpu/drm/omapdrm/omap_plane.c index 6d01c2ad842..8d225d7ff4e 100644 --- a/drivers/gpu/drm/omapdrm/omap_plane.c +++ b/drivers/gpu/drm/omapdrm/omap_plane.c @@ -146,8 +146,8 @@ static void omap_plane_pre_apply(struct omap_drm_apply *apply) DBG("%dx%d -> %dx%d (%d)", info->width, info->height, info->out_width, info->out_height, info->screen_width); - DBG("%d,%d %pad %pad", info->pos_x, info->pos_y, - &info->paddr, &info->p_uv_addr); + DBG("%d,%d %08x %08x", info->pos_x, info->pos_y, + info->paddr, info->p_uv_addr); /* TODO: */ ilace = false; diff --git a/drivers/gpu/drm/qxl/qxl_irq.c b/drivers/gpu/drm/qxl/qxl_irq.c index f4b6b89b98f..21393dc4700 100644 --- a/drivers/gpu/drm/qxl/qxl_irq.c +++ b/drivers/gpu/drm/qxl/qxl_irq.c @@ -33,9 +33,6 @@ irqreturn_t qxl_irq_handler(DRM_IRQ_ARGS) pending = xchg(&qdev->ram_header->int_pending, 0); - if (!pending) - return IRQ_NONE; - atomic_inc(&qdev->irq_received); if (pending & QXL_INTERRUPT_DISPLAY) { diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c index 3401eb86786..489cb8cece4 100644 --- a/drivers/gpu/drm/qxl/qxl_ttm.c +++ b/drivers/gpu/drm/qxl/qxl_ttm.c @@ -431,7 +431,6 @@ static int qxl_sync_obj_flush(void *sync_obj) static void qxl_sync_obj_unref(void **sync_obj) { - *sync_obj = NULL; } static void *qxl_sync_obj_ref(void *sync_obj) diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c index 971dd8795b6..a56d0199e33 100644 --- a/drivers/gpu/drm/radeon/atombios_crtc.c +++ b/drivers/gpu/drm/radeon/atombios_crtc.c @@ -839,16 +839,14 @@ static void atombios_crtc_program_pll(struct drm_crtc *crtc, args.v5.ucMiscInfo = 0; /* HDMI depth, etc. 
*/ if (ss_enabled && (ss->type & ATOM_EXTERNAL_SS_MASK)) args.v5.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_REF_DIV_SRC; - if (encoder_mode == ATOM_ENCODER_MODE_HDMI) { - switch (bpc) { - case 8: - default: - args.v5.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_HDMI_24BPP; - break; - case 10: - args.v5.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_HDMI_30BPP; - break; - } + switch (bpc) { + case 8: + default: + args.v5.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_HDMI_24BPP; + break; + case 10: + args.v5.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_HDMI_30BPP; + break; } args.v5.ucTransmitterID = encoder_id; args.v5.ucEncoderMode = encoder_mode; @@ -863,22 +861,20 @@ static void atombios_crtc_program_pll(struct drm_crtc *crtc, args.v6.ucMiscInfo = 0; /* HDMI depth, etc. */ if (ss_enabled && (ss->type & ATOM_EXTERNAL_SS_MASK)) args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_REF_DIV_SRC; - if (encoder_mode == ATOM_ENCODER_MODE_HDMI) { - switch (bpc) { - case 8: - default: - args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_24BPP; - break; - case 10: - args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_30BPP; - break; - case 12: - args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_36BPP; - break; - case 16: - args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_48BPP; - break; - } + switch (bpc) { + case 8: + default: + args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_24BPP; + break; + case 10: + args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_30BPP; + break; + case 12: + args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_36BPP; + break; + case 16: + args.v6.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_48BPP; + break; } args.v6.ucTransmitterID = encoder_id; args.v6.ucEncoderMode = encoder_mode; diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c index 4c05f2b015c..16023986d30 100644 --- a/drivers/gpu/drm/radeon/atombios_dp.c +++ b/drivers/gpu/drm/radeon/atombios_dp.c @@ -384,19 +384,6 @@ static int dp_get_max_dp_pix_clock(int link_rate, /***** radeon specific DP functions *****/ -static int radeon_dp_get_max_link_rate(struct drm_connector *connector, - u8 dpcd[DP_DPCD_SIZE]) -{ - int max_link_rate; - - if (radeon_connector_is_dp12_capable(connector)) - max_link_rate = min(drm_dp_max_link_rate(dpcd), 540000); - else - max_link_rate = min(drm_dp_max_link_rate(dpcd), 270000); - - return max_link_rate; -} - /* First get the min lane# when low rate is used according to pixel clock * (prefer low rate), second check max lane# supported by DP panel, * if the max lane# < low rate lane# then use max lane# instead. 
@@ -406,7 +393,7 @@ static int radeon_dp_get_dp_lane_number(struct drm_connector *connector, int pix_clock) { int bpp = convert_bpc_to_bpp(radeon_get_monitor_bpc(connector)); - int max_link_rate = radeon_dp_get_max_link_rate(connector, dpcd); + int max_link_rate = drm_dp_max_link_rate(dpcd); int max_lane_num = drm_dp_max_lane_count(dpcd); int lane_num; int max_dp_pix_clock; @@ -444,7 +431,7 @@ static int radeon_dp_get_dp_link_clock(struct drm_connector *connector, return 540000; } - return radeon_dp_get_max_link_rate(connector, dpcd); + return drm_dp_max_link_rate(dpcd); } static u8 radeon_dp_encoder_service(struct radeon_device *rdev, diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c index 1b564d7e419..4c81e9faa63 100644 --- a/drivers/gpu/drm/radeon/atombios_encoders.c +++ b/drivers/gpu/drm/radeon/atombios_encoders.c @@ -183,6 +183,7 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder, struct backlight_properties props; struct radeon_backlight_privdata *pdata; struct radeon_encoder_atom_dig *dig; + u8 backlight_level; char bl_name[16]; /* Mac laptops with multiple GPUs use the gmux driver for backlight @@ -221,17 +222,12 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder, pdata->encoder = radeon_encoder; + backlight_level = radeon_atom_get_backlight_level_from_reg(rdev); + dig = radeon_encoder->enc_priv; dig->bl_dev = bd; bd->props.brightness = radeon_atom_backlight_get_brightness(bd); - /* Set a reasonable default here if the level is 0 otherwise - * fbdev will attempt to turn the backlight on after console - * unblanking and it will try and restore 0 which turns the backlight - * off again. - */ - if (bd->props.brightness == 0) - bd->props.brightness = RADEON_MAX_BL_LEVEL; bd->props.power = FB_BLANK_UNBLANK; backlight_update_status(bd); @@ -1285,7 +1281,7 @@ atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t } if (is_dp) args.v5.ucLaneNum = dp_lane_count; - else if (radeon_dig_monitor_is_duallink(encoder, radeon_encoder->pixel_clock)) + else if (radeon_encoder->pixel_clock > 165000) args.v5.ucLaneNum = 8; else args.v5.ucLaneNum = 4; @@ -1881,11 +1877,8 @@ atombios_set_encoder_crtc_source(struct drm_encoder *encoder) args.v2.ucEncodeMode = ATOM_ENCODER_MODE_CRT; else args.v2.ucEncodeMode = atombios_get_encoder_mode(encoder); - } else if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { - args.v2.ucEncodeMode = ATOM_ENCODER_MODE_LVDS; - } else { + } else args.v2.ucEncodeMode = atombios_get_encoder_mode(encoder); - } switch (radeon_encoder->encoder_id) { case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c index e62a9ce3e4d..93e26339051 100644 --- a/drivers/gpu/drm/radeon/evergreen.c +++ b/drivers/gpu/drm/radeon/evergreen.c @@ -94,7 +94,7 @@ static const u32 evergreen_golden_registers[] = 0x8c1c, 0xffffffff, 0x00001010, 0x28350, 0xffffffff, 0x00000000, 0xa008, 0xffffffff, 0x00010000, - 0x5c4, 0xffffffff, 0x00000001, + 0x5cc, 0xffffffff, 0x00000001, 0x9508, 0xffffffff, 0x00000002, 0x913c, 0x0000000f, 0x0000000a }; @@ -381,7 +381,7 @@ static const u32 cedar_golden_registers[] = 0x8c1c, 0xffffffff, 0x00001010, 0x28350, 0xffffffff, 0x00000000, 0xa008, 0xffffffff, 0x00010000, - 0x5c4, 0xffffffff, 0x00000001, + 0x5cc, 0xffffffff, 0x00000001, 0x9508, 0xffffffff, 0x00000002 }; @@ -540,7 +540,7 @@ static const u32 juniper_mgcg_init[] = static const u32 
supersumo_golden_registers[] = { 0x5eb4, 0xffffffff, 0x00000002, - 0x5c4, 0xffffffff, 0x00000001, + 0x5cc, 0xffffffff, 0x00000001, 0x7030, 0xffffffff, 0x00000011, 0x7c30, 0xffffffff, 0x00000011, 0x6104, 0x01000300, 0x00000000, @@ -624,7 +624,7 @@ static const u32 sumo_golden_registers[] = static const u32 wrestler_golden_registers[] = { 0x5eb4, 0xffffffff, 0x00000002, - 0x5c4, 0xffffffff, 0x00000001, + 0x5cc, 0xffffffff, 0x00000001, 0x7030, 0xffffffff, 0x00000011, 0x7c30, 0xffffffff, 0x00000011, 0x6104, 0x01000300, 0x00000000, diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c index ba2ab9a9b98..efb06e34aed 100644 --- a/drivers/gpu/drm/radeon/radeon_atombios.c +++ b/drivers/gpu/drm/radeon/radeon_atombios.c @@ -463,13 +463,6 @@ static bool radeon_atom_apply_quirks(struct drm_device *dev, } } - /* Fujitsu D3003-S2 board lists DVI-I as DVI-I and VGA */ - if ((dev->pdev->device == 0x9805) && - (dev->pdev->subsystem_vendor == 0x1734) && - (dev->pdev->subsystem_device == 0x11bd)) { - if (*connector_type == DRM_MODE_CONNECTOR_VGA) - return false; - } return true; } @@ -1915,7 +1908,7 @@ static const char *thermal_controller_names[] = { "adm1032", "adm1030", "max6649", - "lm63", /* lm64 */ + "lm64", "f75375", "asc7xxx", }; @@ -1926,7 +1919,7 @@ static const char *pp_lib_thermal_controller_names[] = { "adm1032", "adm1030", "max6649", - "lm63", /* lm64 */ + "lm64", "f75375", "RV6xx", "RV770", diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c b/drivers/gpu/drm/radeon/radeon_atpx_handler.c index 8c44ef57864..cbb06d7c89b 100644 --- a/drivers/gpu/drm/radeon/radeon_atpx_handler.c +++ b/drivers/gpu/drm/radeon/radeon_atpx_handler.c @@ -523,13 +523,6 @@ static bool radeon_atpx_detect(void) has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true); } - /* some newer PX laptops mark the dGPU as a non-VGA display device */ - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { - vga_count++; - - has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true); - } - if (has_atpx && vga_count == 2) { acpi_get_name(radeon_atpx_priv.atpx.handle, ACPI_FULL_PATHNAME, &buffer); printk(KERN_INFO "VGA switcheroo: detected switching method %s handle\n", diff --git a/drivers/gpu/drm/radeon/radeon_bios.c b/drivers/gpu/drm/radeon/radeon_bios.c index b131520521e..061b227dae0 100644 --- a/drivers/gpu/drm/radeon/radeon_bios.c +++ b/drivers/gpu/drm/radeon/radeon_bios.c @@ -196,20 +196,6 @@ static bool radeon_atrm_get_bios(struct radeon_device *rdev) } } - if (!found) { - while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { - dhandle = ACPI_HANDLE(&pdev->dev); - if (!dhandle) - continue; - - status = acpi_get_handle(dhandle, "ATRM", &atrm_handle); - if (!ACPI_FAILURE(status)) { - found = true; - break; - } - } - } - if (!found) return false; diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c index fc604fc7579..5a87c9fc78d 100644 --- a/drivers/gpu/drm/radeon/radeon_connectors.c +++ b/drivers/gpu/drm/radeon/radeon_connectors.c @@ -1345,7 +1345,7 @@ bool radeon_connector_is_dp12_capable(struct drm_connector *connector) struct radeon_device *rdev = dev->dev_private; if (ASIC_IS_DCE5(rdev) && - (rdev->clock.default_dispclk >= 53900) && + (rdev->clock.dp_extclk >= 53900) && radeon_connector_encoder_is_hbr2(connector)) { return true; } diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c index 60af3cda587..fe36f1d9496 100644 --- 
a/drivers/gpu/drm/radeon/radeon_cs.c +++ b/drivers/gpu/drm/radeon/radeon_cs.c @@ -96,12 +96,6 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p) uint32_t domain = r->write_domain ? r->write_domain : r->read_domains; - if (domain & RADEON_GEM_DOMAIN_CPU) { - DRM_ERROR("RADEON_GEM_DOMAIN_CPU is not valid " - "for command submission\n"); - return -EINVAL; - } - p->relocs[i].lobj.domain = domain; if (domain == RADEON_GEM_DOMAIN_VRAM) domain |= RADEON_GEM_DOMAIN_GTT; diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c index a84de32a91f..eb18bb7af1c 100644 --- a/drivers/gpu/drm/radeon/radeon_display.c +++ b/drivers/gpu/drm/radeon/radeon_display.c @@ -688,10 +688,6 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) struct radeon_device *rdev = dev->dev_private; int ret = 0; - /* don't leak the edid if we already fetched it in detect() */ - if (radeon_connector->edid) - goto got_edid; - /* on hw with routers, select right port */ if (radeon_connector->router.ddc_valid) radeon_router_select_ddc_port(radeon_connector); @@ -731,10 +727,8 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) radeon_connector->edid = radeon_bios_get_hardcoded_edid(rdev); } if (radeon_connector->edid) { -got_edid: drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid); ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid); - drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid); return ret; } drm_mode_connector_update_edid_property(&radeon_connector->base, NULL); diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index f8372791578..1424ccde237 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -582,30 +582,22 @@ int radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo) rbo = container_of(bo, struct radeon_bo, tbo); radeon_bo_check_tiling(rbo, 0, 0); rdev = rbo->rdev; - if (bo->mem.mem_type != TTM_PL_VRAM) - return 0; - - size = bo->mem.num_pages << PAGE_SHIFT; - offset = bo->mem.start << PAGE_SHIFT; - if ((offset + size) <= rdev->mc.visible_vram_size) - return 0; - - /* hurrah the memory is not visible ! */ - radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM); - rbo->placement.lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT; - r = ttm_bo_validate(bo, &rbo->placement, false, false); - if (unlikely(r == -ENOMEM)) { - radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_GTT); - return ttm_bo_validate(bo, &rbo->placement, false, false); - } else if (unlikely(r != 0)) { - return r; + if (bo->mem.mem_type == TTM_PL_VRAM) { + size = bo->mem.num_pages << PAGE_SHIFT; + offset = bo->mem.start << PAGE_SHIFT; + if ((offset + size) > rdev->mc.visible_vram_size) { + /* hurrah the memory is not visible ! 
*/ + radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM); + rbo->placement.lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT; + r = ttm_bo_validate(bo, &rbo->placement, false, false); + if (unlikely(r != 0)) + return r; + offset = bo->mem.start << PAGE_SHIFT; + /* this should not happen */ + if ((offset + size) > rdev->mc.visible_vram_size) + return -EINVAL; + } } - - offset = bo->mem.start << PAGE_SHIFT; - /* this should never happen */ - if ((offset + size) > rdev->mc.visible_vram_size) - return -EINVAL; - return 0; } diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c index 5715429279f..21d2d5280fc 100644 --- a/drivers/gpu/drm/radeon/radeon_uvd.c +++ b/drivers/gpu/drm/radeon/radeon_uvd.c @@ -449,10 +449,6 @@ static int radeon_uvd_cs_reloc(struct radeon_cs_parser *p, cmd = radeon_get_ib_value(p, p->idx) >> 1; if (cmd < 0x4) { - if (end <= start) { - DRM_ERROR("invalid reloc offset %X!\n", offset); - return -EINVAL; - } if ((end - start) < buf_sizes[cmd]) { DRM_ERROR("buffer to small (%d / %d)!\n", (unsigned)(end - start), buf_sizes[cmd]); diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c index ae813fef081..670b555d2ca 100644 --- a/drivers/gpu/drm/radeon/rs600.c +++ b/drivers/gpu/drm/radeon/rs600.c @@ -582,10 +582,8 @@ int rs600_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr) return -EINVAL; } addr = addr & 0xFFFFFFFFFFFFF000ULL; - if (addr != rdev->dummy_page.addr) - addr |= R600_PTE_VALID | R600_PTE_READABLE | - R600_PTE_WRITEABLE; - addr |= R600_PTE_SYSTEM | R600_PTE_SNOOPED; + addr |= R600_PTE_VALID | R600_PTE_SYSTEM | R600_PTE_SNOOPED; + addr |= R600_PTE_READABLE | R600_PTE_WRITEABLE; writeq(addr, ptr + (i * 8)); return 0; } diff --git a/drivers/gpu/drm/tilcdc/tilcdc_drv.c b/drivers/gpu/drm/tilcdc/tilcdc_drv.c index f5ddd355079..2b5461bcd9f 100644 --- a/drivers/gpu/drm/tilcdc/tilcdc_drv.c +++ b/drivers/gpu/drm/tilcdc/tilcdc_drv.c @@ -78,7 +78,6 @@ static int modeset_init(struct drm_device *dev) if ((priv->num_encoders == 0) || (priv->num_connectors == 0)) { /* oh nos! 
*/ dev_err(dev->dev, "no encoders/connectors found\n"); - drm_mode_config_cleanup(dev); return -ENXIO; } @@ -117,7 +116,6 @@ static int tilcdc_unload(struct drm_device *dev) struct tilcdc_drm_private *priv = dev->dev_private; struct tilcdc_module *mod, *cur; - drm_fbdev_cma_fini(priv->fbdev); drm_kms_helper_poll_fini(dev); drm_mode_config_cleanup(dev); drm_vblank_cleanup(dev); @@ -171,37 +169,33 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) dev->dev_private = priv; priv->wq = alloc_ordered_workqueue("tilcdc", 0); - if (!priv->wq) { - ret = -ENOMEM; - goto fail_free_priv; - } res = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (!res) { dev_err(dev->dev, "failed to get memory resource\n"); ret = -EINVAL; - goto fail_free_wq; + goto fail; } priv->mmio = ioremap_nocache(res->start, resource_size(res)); if (!priv->mmio) { dev_err(dev->dev, "failed to ioremap\n"); ret = -ENOMEM; - goto fail_free_wq; + goto fail; } priv->clk = clk_get(dev->dev, "fck"); if (IS_ERR(priv->clk)) { dev_err(dev->dev, "failed to get functional clock\n"); ret = -ENODEV; - goto fail_iounmap; + goto fail; } priv->disp_clk = clk_get(dev->dev, "dpll_disp_ck"); if (IS_ERR(priv->clk)) { dev_err(dev->dev, "failed to get display clock\n"); ret = -ENODEV; - goto fail_put_clk; + goto fail; } #ifdef CONFIG_CPU_FREQ @@ -211,7 +205,7 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) CPUFREQ_TRANSITION_NOTIFIER); if (ret) { dev_err(dev->dev, "failed to register cpufreq notifier\n"); - goto fail_put_disp_clk; + goto fail; } #endif @@ -243,13 +237,13 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) ret = modeset_init(dev); if (ret < 0) { dev_err(dev->dev, "failed to initialize mode setting\n"); - goto fail_cpufreq_unregister; + goto fail; } ret = drm_vblank_init(dev, 1); if (ret < 0) { dev_err(dev->dev, "failed to initialize vblank\n"); - goto fail_mode_config_cleanup; + goto fail; } pm_runtime_get_sync(dev->dev); @@ -257,7 +251,7 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) pm_runtime_put_sync(dev->dev); if (ret < 0) { dev_err(dev->dev, "failed to install IRQ handler\n"); - goto fail_vblank_cleanup; + goto fail; } platform_set_drvdata(pdev, dev); @@ -265,48 +259,13 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) priv->fbdev = drm_fbdev_cma_init(dev, 16, dev->mode_config.num_crtc, dev->mode_config.num_connector); - if (IS_ERR(priv->fbdev)) { - ret = PTR_ERR(priv->fbdev); - goto fail_irq_uninstall; - } drm_kms_helper_poll_init(dev); return 0; -fail_irq_uninstall: - pm_runtime_get_sync(dev->dev); - drm_irq_uninstall(dev); - pm_runtime_put_sync(dev->dev); - -fail_vblank_cleanup: - drm_vblank_cleanup(dev); - -fail_mode_config_cleanup: - drm_mode_config_cleanup(dev); - -fail_cpufreq_unregister: - pm_runtime_disable(dev->dev); -#ifdef CONFIG_CPU_FREQ - cpufreq_unregister_notifier(&priv->freq_transition, - CPUFREQ_TRANSITION_NOTIFIER); -fail_put_disp_clk: - clk_put(priv->disp_clk); -#endif - -fail_put_clk: - clk_put(priv->clk); - -fail_iounmap: - iounmap(priv->mmio); - -fail_free_wq: - flush_workqueue(priv->wq); - destroy_workqueue(priv->wq); - -fail_free_priv: - dev->dev_private = NULL; - kfree(priv); +fail: + tilcdc_unload(dev); return ret; } @@ -637,10 +596,10 @@ static int __init tilcdc_drm_init(void) static void __exit tilcdc_drm_fini(void) { DBG("fini"); - platform_driver_unregister(&tilcdc_platform_driver); - tilcdc_panel_fini(); - tilcdc_slave_fini(); tilcdc_tfp410_fini(); + tilcdc_slave_fini(); + 
tilcdc_panel_fini(); + platform_driver_unregister(&tilcdc_platform_driver); } late_initcall(tilcdc_drm_init); diff --git a/drivers/gpu/drm/tilcdc/tilcdc_panel.c b/drivers/gpu/drm/tilcdc/tilcdc_panel.c index 779d508616d..09176654fdd 100644 --- a/drivers/gpu/drm/tilcdc/tilcdc_panel.c +++ b/drivers/gpu/drm/tilcdc/tilcdc_panel.c @@ -151,7 +151,6 @@ struct panel_connector { static void panel_connector_destroy(struct drm_connector *connector) { struct panel_connector *panel_connector = to_panel_connector(connector); - drm_sysfs_connector_remove(connector); drm_connector_cleanup(connector); kfree(panel_connector); } @@ -286,8 +285,10 @@ static void panel_destroy(struct tilcdc_module *mod) { struct panel_module *panel_mod = to_panel_module(mod); - if (panel_mod->timings) + if (panel_mod->timings) { display_timings_release(panel_mod->timings); + kfree(panel_mod->timings); + } tilcdc_module_cleanup(mod); kfree(panel_mod->info); diff --git a/drivers/gpu/drm/tilcdc/tilcdc_slave.c b/drivers/gpu/drm/tilcdc/tilcdc_slave.c index 5d6c597a5d6..db1d2fc9dfb 100644 --- a/drivers/gpu/drm/tilcdc/tilcdc_slave.c +++ b/drivers/gpu/drm/tilcdc/tilcdc_slave.c @@ -142,7 +142,6 @@ struct slave_connector { static void slave_connector_destroy(struct drm_connector *connector) { struct slave_connector *slave_connector = to_slave_connector(connector); - drm_sysfs_connector_remove(connector); drm_connector_cleanup(connector); kfree(slave_connector); } diff --git a/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c b/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c index 986131dd9f4..a36788fbcd9 100644 --- a/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c +++ b/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c @@ -168,7 +168,6 @@ struct tfp410_connector { static void tfp410_connector_destroy(struct drm_connector *connector) { struct tfp410_connector *tfp410_connector = to_tfp410_connector(connector); - drm_sysfs_connector_remove(connector); drm_connector_cleanup(connector); kfree(tfp410_connector); } diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 0ac0a88860a..8697abd7b17 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -498,11 +498,9 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo, moved: if (bo->evicted) { - if (bdev->driver->invalidate_caches) { - ret = bdev->driver->invalidate_caches(bdev, bo->mem.placement); - if (ret) - pr_err("Can not flush read caches\n"); - } + ret = bdev->driver->invalidate_caches(bdev, bo->mem.placement); + if (ret) + pr_err("Can not flush read caches\n"); bo->evicted = false; } diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c index de1a753b1d5..b8b394319b4 100644 --- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c +++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c @@ -1006,9 +1006,9 @@ EXPORT_SYMBOL_GPL(ttm_dma_unpopulate); static int ttm_dma_pool_mm_shrink(struct shrinker *shrink, struct shrink_control *sc) { - static unsigned start_pool; + static atomic_t start_pool = ATOMIC_INIT(0); unsigned idx = 0; - unsigned pool_offset; + unsigned pool_offset = atomic_add_return(1, &start_pool); unsigned shrink_pages = sc->nr_to_scan; struct device_pools *p; @@ -1016,9 +1016,7 @@ static int ttm_dma_pool_mm_shrink(struct shrinker *shrink, return 0; mutex_lock(&_manager->lock); - if (!_manager->npools) - goto out; - pool_offset = ++start_pool % _manager->npools; + pool_offset = pool_offset % _manager->npools; list_for_each_entry(p, &_manager->pools, pools) { unsigned nr_free; @@ -1035,7 +1033,6 @@ static int 
ttm_dma_pool_mm_shrink(struct shrinker *shrink, p->pool->dev_name, p->pool->name, current->pid, nr_free, shrink_pages); } -out: mutex_unlock(&_manager->lock); /* return estimated number of unused pages in pool */ return ttm_dma_pool_get_num_unused_pages(); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c index da068bd13f9..394e6476105 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c @@ -834,36 +834,14 @@ static int vmw_cmd_dma(struct vmw_private *dev_priv, SVGA3dCmdSurfaceDMA dma; } *cmd; int ret; - SVGA3dCmdSurfaceDMASuffix *suffix; - uint32_t bo_size; cmd = container_of(header, struct vmw_dma_cmd, header); - suffix = (SVGA3dCmdSurfaceDMASuffix *)((unsigned long) &cmd->dma + - header->size - sizeof(*suffix)); - - /* Make sure device and verifier stays in sync. */ - if (unlikely(suffix->suffixSize != sizeof(*suffix))) { - DRM_ERROR("Invalid DMA suffix size.\n"); - return -EINVAL; - } - ret = vmw_translate_guest_ptr(dev_priv, sw_context, &cmd->dma.guest.ptr, &vmw_bo); if (unlikely(ret != 0)) return ret; - /* Make sure DMA doesn't cross BO boundaries. */ - bo_size = vmw_bo->base.num_pages * PAGE_SIZE; - if (unlikely(cmd->dma.guest.ptr.offset > bo_size)) { - DRM_ERROR("Invalid DMA offset.\n"); - return -EINVAL; - } - - bo_size -= cmd->dma.guest.ptr.offset; - if (unlikely(suffix->maximumOffset > bo_size)) - suffix->maximumOffset = bo_size; - ret = vmw_cmd_res_check(dev_priv, sw_context, vmw_res_surface, user_surface_converter, &cmd->dma.host.sid, NULL); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c index 1b0f34bd3a0..ed5ce2a41bb 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c @@ -147,7 +147,7 @@ static int vmw_fb_check_var(struct fb_var_screeninfo *var, } if (!vmw_kms_validate_mode_vram(vmw_priv, - var->xres * var->bits_per_pixel/8, + info->fix.line_length, var->yoffset + var->yres)) { DRM_ERROR("Requested geom can not fit in framebuffer\n"); return -EINVAL; @@ -162,8 +162,6 @@ static int vmw_fb_set_par(struct fb_info *info) struct vmw_private *vmw_priv = par->vmw_priv; int ret; - info->fix.line_length = info->var.xres * info->var.bits_per_pixel/8; - ret = vmw_kms_write_svga(vmw_priv, info->var.xres, info->var.yres, info->fix.line_length, par->bpp, par->depth); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c index 89664933861..3eb148667d6 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c @@ -163,9 +163,8 @@ void vmw_fifo_release(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) mutex_lock(&dev_priv->hw_mutex); - vmw_write(dev_priv, SVGA_REG_SYNC, SVGA_SYNC_GENERIC); while (vmw_read(dev_priv, SVGA_REG_BUSY) != 0) - ; + vmw_write(dev_priv, SVGA_REG_SYNC, SVGA_SYNC_GENERIC); dev_priv->last_read_seqno = ioread32(fifo_mem + SVGA_FIFO_FENCE); diff --git a/drivers/gpu/host1x/hw/intr_hw.c b/drivers/gpu/host1x/hw/intr_hw.c index b083509325e..b592eef1efc 100644 --- a/drivers/gpu/host1x/hw/intr_hw.c +++ b/drivers/gpu/host1x/hw/intr_hw.c @@ -48,7 +48,7 @@ static irqreturn_t syncpt_thresh_isr(int irq, void *dev_id) unsigned long reg; int i, id; - for (i = 0; i < DIV_ROUND_UP(host->info->nb_pts, 32); i++) { + for (i = 0; i <= BIT_WORD(host->info->nb_pts); i++) { reg = host1x_sync_readl(host, HOST1X_SYNC_SYNCPT_THRESH_CPU0_INT_STATUS(i)); for_each_set_bit(id, &reg, BITS_PER_LONG) { @@ -65,7 +65,7 @@ static void 
_host1x_intr_disable_all_syncpt_intrs(struct host1x *host) { u32 i; - for (i = 0; i < DIV_ROUND_UP(host->info->nb_pts, 32); ++i) { + for (i = 0; i <= BIT_WORD(host->info->nb_pts); ++i) { host1x_sync_writel(host, 0xffffffffu, HOST1X_SYNC_SYNCPT_THRESH_INT_DISABLE(i)); host1x_sync_writel(host, 0xffffffffu, diff --git a/drivers/hid/hid-cherry.c b/drivers/hid/hid-cherry.c index f745d2c1325..1bdcccc54a1 100644 --- a/drivers/hid/hid-cherry.c +++ b/drivers/hid/hid-cherry.c @@ -28,7 +28,7 @@ static __u8 *ch_report_fixup(struct hid_device *hdev, __u8 *rdesc, unsigned int *rsize) { - if (*rsize >= 18 && rdesc[11] == 0x3c && rdesc[12] == 0x02) { + if (*rsize >= 17 && rdesc[11] == 0x3c && rdesc[12] == 0x02) { hid_info(hdev, "fixing up Cherry Cymotion report descriptor\n"); rdesc[11] = rdesc[16] = 0xff; rdesc[12] = rdesc[17] = 0x03; diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index 677145b20db..98fca43ae7d 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -796,17 +796,7 @@ struct hid_report *hid_validate_values(struct hid_device *hid, * ->numbered being checked, which may not always be the case when * drivers go to access report values. */ - if (id == 0) { - /* - * Validating on id 0 means we should examine the first - * report in the list. - */ - report = list_entry( - hid->report_enum[type].report_list.next, - struct hid_report, list); - } else { - report = hid->report_enum[type].report_id_hash[id]; - } + report = hid->report_enum[type].report_id_hash[id]; if (!report) { hid_err(hid, "missing %s %u\n", hid_report_names[type], id); return NULL; diff --git a/drivers/hid/hid-kye.c b/drivers/hid/hid-kye.c index 843f2dd5520..6af90dbdc3d 100644 --- a/drivers/hid/hid-kye.c +++ b/drivers/hid/hid-kye.c @@ -280,7 +280,7 @@ static __u8 *kye_report_fixup(struct hid_device *hdev, __u8 *rdesc, * - change the button usage range to 4-7 for the extra * buttons */ - if (*rsize >= 75 && + if (*rsize >= 74 && rdesc[61] == 0x05 && rdesc[62] == 0x08 && rdesc[63] == 0x19 && rdesc[64] == 0x08 && rdesc[65] == 0x29 && rdesc[66] == 0x0f && diff --git a/drivers/hid/hid-lg.c b/drivers/hid/hid-lg.c index 12fc48c968e..06eb45fa633 100644 --- a/drivers/hid/hid-lg.c +++ b/drivers/hid/hid-lg.c @@ -345,14 +345,14 @@ static __u8 *lg_report_fixup(struct hid_device *hdev, __u8 *rdesc, struct usb_device_descriptor *udesc; __u16 bcdDevice, rev_maj, rev_min; - if ((drv_data->quirks & LG_RDESC) && *rsize >= 91 && rdesc[83] == 0x26 && + if ((drv_data->quirks & LG_RDESC) && *rsize >= 90 && rdesc[83] == 0x26 && rdesc[84] == 0x8c && rdesc[85] == 0x02) { hid_info(hdev, "fixing up Logitech keyboard report descriptor\n"); rdesc[84] = rdesc[89] = 0x4d; rdesc[85] = rdesc[90] = 0x10; } - if ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 51 && + if ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 50 && rdesc[32] == 0x81 && rdesc[33] == 0x06 && rdesc[49] == 0x81 && rdesc[50] == 0x06) { hid_info(hdev, diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c index d4c6d9f85ca..1be9156a395 100644 --- a/drivers/hid/hid-logitech-dj.c +++ b/drivers/hid/hid-logitech-dj.c @@ -237,6 +237,13 @@ static void logi_dj_recv_add_djhid_device(struct dj_receiver_dev *djrcv_dev, return; } + if ((dj_report->device_index < DJ_DEVICE_INDEX_MIN) || + (dj_report->device_index > DJ_DEVICE_INDEX_MAX)) { + dev_err(&djrcv_hdev->dev, "%s: invalid device index:%d\n", + __func__, dj_report->device_index); + return; + } + if (djrcv_dev->paired_dj_devices[dj_report->device_index]) { /* The device is already known. 
No need to reallocate it. */ dbg_hid("%s: device is already known\n", __func__); @@ -679,6 +686,7 @@ static int logi_dj_raw_event(struct hid_device *hdev, struct dj_receiver_dev *djrcv_dev = hid_get_drvdata(hdev); struct dj_report *dj_report = (struct dj_report *) data; unsigned long flags; + bool report_processed = false; dbg_hid("%s, size:%d\n", __func__, size); @@ -706,41 +714,27 @@ static int logi_dj_raw_event(struct hid_device *hdev, * anything else with it. */ - /* case 1) */ - if (data[0] != REPORT_ID_DJ_SHORT) - return false; - - if ((dj_report->device_index < DJ_DEVICE_INDEX_MIN) || - (dj_report->device_index > DJ_DEVICE_INDEX_MAX)) { - /* - * Device index is wrong, bail out. - * This driver can ignore safely the receiver notifications, - * so ignore those reports too. - */ - if (dj_report->device_index != DJ_RECEIVER_INDEX) - dev_err(&hdev->dev, "%s: invalid device index:%d\n", - __func__, dj_report->device_index); - return false; - } - spin_lock_irqsave(&djrcv_dev->lock, flags); - switch (dj_report->report_type) { - case REPORT_TYPE_NOTIF_DEVICE_PAIRED: - case REPORT_TYPE_NOTIF_DEVICE_UNPAIRED: - logi_dj_recv_queue_notification(djrcv_dev, dj_report); - break; - case REPORT_TYPE_NOTIF_CONNECTION_STATUS: - if (dj_report->report_params[CONNECTION_STATUS_PARAM_STATUS] == - STATUS_LINKLOSS) { - logi_dj_recv_forward_null_report(djrcv_dev, dj_report); + if (dj_report->report_id == REPORT_ID_DJ_SHORT) { + switch (dj_report->report_type) { + case REPORT_TYPE_NOTIF_DEVICE_PAIRED: + case REPORT_TYPE_NOTIF_DEVICE_UNPAIRED: + logi_dj_recv_queue_notification(djrcv_dev, dj_report); + break; + case REPORT_TYPE_NOTIF_CONNECTION_STATUS: + if (dj_report->report_params[CONNECTION_STATUS_PARAM_STATUS] == + STATUS_LINKLOSS) { + logi_dj_recv_forward_null_report(djrcv_dev, dj_report); + } + break; + default: + logi_dj_recv_forward_report(djrcv_dev, dj_report); } - break; - default: - logi_dj_recv_forward_report(djrcv_dev, dj_report); + report_processed = true; } spin_unlock_irqrestore(&djrcv_dev->lock, flags); - return true; + return report_processed; } static int logi_dj_probe(struct hid_device *hdev, diff --git a/drivers/hid/hid-logitech-dj.h b/drivers/hid/hid-logitech-dj.h index daeb0aa4bee..4a4000340ce 100644 --- a/drivers/hid/hid-logitech-dj.h +++ b/drivers/hid/hid-logitech-dj.h @@ -27,7 +27,6 @@ #define DJ_MAX_PAIRED_DEVICES 6 #define DJ_MAX_NUMBER_NOTIFICATIONS 8 -#define DJ_RECEIVER_INDEX 0 #define DJ_DEVICE_INDEX_MIN 1 #define DJ_DEVICE_INDEX_MAX 6 diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c index abeee5d5489..bd0af6add0f 100644 --- a/drivers/hid/hid-magicmouse.c +++ b/drivers/hid/hid-magicmouse.c @@ -291,11 +291,6 @@ static int magicmouse_raw_event(struct hid_device *hdev, if (size < 4 || ((size - 4) % 9) != 0) return 0; npoints = (size - 4) / 9; - if (npoints > 15) { - hid_warn(hdev, "invalid size value (%d) for TRACKPAD_REPORT_ID\n", - size); - return 0; - } msc->ntouches = 0; for (ii = 0; ii < npoints; ii++) magicmouse_emit_touch(msc, ii, data + ii * 9 + 4); @@ -313,11 +308,6 @@ static int magicmouse_raw_event(struct hid_device *hdev, if (size < 6 || ((size - 6) % 8) != 0) return 0; npoints = (size - 6) / 8; - if (npoints > 15) { - hid_warn(hdev, "invalid size value (%d) for MOUSE_REPORT_ID\n", - size); - return 0; - } msc->ntouches = 0; for (ii = 0; ii < npoints; ii++) magicmouse_emit_touch(msc, ii, data + ii * 8 + 6); diff --git a/drivers/hid/hid-monterey.c b/drivers/hid/hid-monterey.c index 25daf28b26b..9e14c00eb1b 100644 --- a/drivers/hid/hid-monterey.c +++ 
b/drivers/hid/hid-monterey.c @@ -24,7 +24,7 @@ static __u8 *mr_report_fixup(struct hid_device *hdev, __u8 *rdesc, unsigned int *rsize) { - if (*rsize >= 31 && rdesc[29] == 0x05 && rdesc[30] == 0x09) { + if (*rsize >= 30 && rdesc[29] == 0x05 && rdesc[30] == 0x09) { hid_info(hdev, "fixing up button/consumer in HID report descriptor\n"); rdesc[30] = 0x0c; } diff --git a/drivers/hid/hid-petalynx.c b/drivers/hid/hid-petalynx.c index 6aca4f2554b..736b2502df4 100644 --- a/drivers/hid/hid-petalynx.c +++ b/drivers/hid/hid-petalynx.c @@ -25,7 +25,7 @@ static __u8 *pl_report_fixup(struct hid_device *hdev, __u8 *rdesc, unsigned int *rsize) { - if (*rsize >= 62 && rdesc[39] == 0x2a && rdesc[40] == 0xf5 && + if (*rsize >= 60 && rdesc[39] == 0x2a && rdesc[40] == 0xf5 && rdesc[41] == 0x00 && rdesc[59] == 0x26 && rdesc[60] == 0xf9 && rdesc[61] == 0x00) { hid_info(hdev, "fixing up Petalynx Maxter Remote report descriptor\n"); diff --git a/drivers/hid/hid-picolcd_core.c b/drivers/hid/hid-picolcd_core.c index 020df3c2e8b..acbb021065e 100644 --- a/drivers/hid/hid-picolcd_core.c +++ b/drivers/hid/hid-picolcd_core.c @@ -350,12 +350,6 @@ static int picolcd_raw_event(struct hid_device *hdev, if (!data) return 1; - if (size > 64) { - hid_warn(hdev, "invalid size value (%d) for picolcd raw event\n", - size); - return 0; - } - if (report->id == REPORT_KEY_STATE) { if (data->input_keys) ret = picolcd_raw_keypad(data, report, raw_data+1, size-1); diff --git a/drivers/hid/hid-sunplus.c b/drivers/hid/hid-sunplus.c index 91072fa5466..87fc91e1c8d 100644 --- a/drivers/hid/hid-sunplus.c +++ b/drivers/hid/hid-sunplus.c @@ -24,7 +24,7 @@ static __u8 *sp_report_fixup(struct hid_device *hdev, __u8 *rdesc, unsigned int *rsize) { - if (*rsize >= 112 && rdesc[104] == 0x26 && rdesc[105] == 0x80 && + if (*rsize >= 107 && rdesc[104] == 0x26 && rdesc[105] == 0x80 && rdesc[106] == 0x03) { hid_info(hdev, "fixing up Sunplus Wireless Desktop report descriptor\n"); rdesc[105] = rdesc[110] = 0x03; diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index 92f34de7aee..0b122f8c700 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -199,10 +199,8 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size, ret = vmbus_post_msg(open_msg, sizeof(struct vmbus_channel_open_channel)); - if (ret != 0) { - err = ret; + if (ret != 0) goto error1; - } t = wait_for_completion_timeout(&open_info->waitevent, 5*HZ); if (t == 0) { @@ -394,6 +392,7 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, u32 next_gpadl_handle; unsigned long flags; int ret = 0; + int t; next_gpadl_handle = atomic_read(&vmbus_connection.next_gpadl_handle); atomic_inc(&vmbus_connection.next_gpadl_handle); @@ -440,7 +439,9 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, } } - wait_for_completion(&msginfo->waitevent); + t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ); + BUG_ON(t == 0); + /* At this point, we received the gpadl created msg */ *gpadl_handle = gpadlmsg->gpadl; @@ -463,7 +464,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) struct vmbus_channel_gpadl_teardown *msg; struct vmbus_channel_msginfo *info; unsigned long flags; - int ret; + int ret, t; info = kmalloc(sizeof(*info) + sizeof(struct vmbus_channel_gpadl_teardown), GFP_KERNEL); @@ -485,12 +486,11 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_gpadl_teardown)); - if (ret) - goto post_msg_err; - - 
wait_for_completion(&info->waitevent); + BUG_ON(ret != 0); + t = wait_for_completion_timeout(&info->waitevent, 5*HZ); + BUG_ON(t == 0); -post_msg_err: + /* Received a torndown response */ spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); list_del(&info->msglistentry); spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index a3b55580876..d4fac934b22 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -55,9 +55,6 @@ static __u32 vmbus_get_next_version(__u32 current_version) case (VERSION_WIN8): return VERSION_WIN7; - case (VERSION_WIN8_1): - return VERSION_WIN8; - case (VERSION_WS2008): default: return VERSION_INVAL; @@ -83,9 +80,6 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, (void *)((unsigned long)vmbus_connection.monitor_pages + PAGE_SIZE)); - if (version == VERSION_WIN8_1) - msg->target_vcpu = hv_context.vp_index[smp_processor_id()]; - /* * Add to list before we send the request since we may * receive the response before returning from this routine @@ -304,13 +298,9 @@ static void process_chn_event(u32 relid) */ do { - if (read_state) - hv_begin_read(&channel->inbound); + hv_begin_read(&channel->inbound); channel->onchannel_callback(arg); - if (read_state) - bytes_to_read = hv_end_read(&channel->inbound); - else - bytes_to_read = 0; + bytes_to_read = hv_end_read(&channel->inbound); } while (read_state && (bytes_to_read != 0)); } else { pr_err("no channel callback for relid - %u\n", relid); @@ -393,21 +383,10 @@ int vmbus_post_msg(void *buffer, size_t buflen) * insufficient resources. Retry the operation a couple of * times before giving up. */ - while (retries < 10) { - ret = hv_post_message(conn_id, 1, buffer, buflen); - - switch (ret) { - case HV_STATUS_INSUFFICIENT_BUFFERS: - ret = -ENOMEM; - case -ENOMEM: - break; - case HV_STATUS_SUCCESS: + while (retries < 3) { + ret = hv_post_message(conn_id, 1, buffer, buflen); + if (ret != HV_STATUS_INSUFFICIENT_BUFFERS) return ret; - default: - pr_err("hv_post_msg() failed; error code:%d\n", ret); - return -EINVAL; - } - retries++; msleep(100); } diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c index 694173f662d..deb5c25305a 100644 --- a/drivers/hv/hv_balloon.c +++ b/drivers/hv/hv_balloon.c @@ -19,7 +19,6 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/kernel.h> -#include <linux/jiffies.h> #include <linux/mman.h> #include <linux/delay.h> #include <linux/init.h> @@ -460,11 +459,6 @@ static bool do_hot_add; */ static uint pressure_report_delay = 45; -/* - * The last time we posted a pressure report to host. - */ -static unsigned long last_post_time; - module_param(hot_add, bool, (S_IRUGO | S_IWUSR)); MODULE_PARM_DESC(hot_add, "If set attempt memory hot_add"); @@ -548,7 +542,6 @@ struct hv_dynmem_device { static struct hv_dynmem_device dm_device; -static void post_status(struct hv_dynmem_device *dm); #ifdef CONFIG_MEMORY_HOTPLUG static void hv_bring_pgs_online(unsigned long start_pfn, unsigned long size) @@ -619,7 +612,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size, * have not been "onlined" within the allowed time. 
*/ wait_for_completion_timeout(&dm_device.ol_waitevent, 5*HZ); - post_status(&dm_device); + } return; @@ -958,17 +951,11 @@ static void post_status(struct hv_dynmem_device *dm) { struct dm_status status; struct sysinfo val; - unsigned long now = jiffies; - unsigned long last_post = last_post_time; if (pressure_report_delay > 0) { --pressure_report_delay; return; } - - if (!time_after(now, (last_post_time + HZ))) - return; - si_meminfo(&val); memset(&status, 0, sizeof(struct dm_status)); status.hdr.type = DM_STATUS_REPORT; @@ -996,14 +983,6 @@ static void post_status(struct hv_dynmem_device *dm) if (status.hdr.trans_id != atomic_read(&trans_id)) return; - /* - * If the last post time that we sampled has changed, - * we have raced, don't post the status. - */ - if (last_post != last_post_time) - return; - - last_post_time = jiffies; vmbus_sendpacket(dm->dev->channel, &status, sizeof(struct dm_status), (unsigned long)NULL, @@ -1138,7 +1117,7 @@ static void balloon_up(struct work_struct *dummy) if (ret == -EAGAIN) msleep(20); - post_status(&dm_device); + } while (ret == -EAGAIN); if (ret) { @@ -1165,10 +1144,8 @@ static void balloon_down(struct hv_dynmem_device *dm, struct dm_unballoon_response resp; int i; - for (i = 0; i < range_count; i++) { + for (i = 0; i < range_count; i++) free_balloon_pages(dm, &range_array[i]); - post_status(&dm_device); - } if (req->more_pages == 1) return; diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c index 0e8c1ea4dd5..ed50e9e83c6 100644 --- a/drivers/hv/hv_kvp.c +++ b/drivers/hv/hv_kvp.c @@ -111,15 +111,6 @@ kvp_work_func(struct work_struct *dummy) kvp_respond_to_host(NULL, HV_E_FAIL); } -static void poll_channel(struct vmbus_channel *channel) -{ - unsigned long flags; - - spin_lock_irqsave(&channel->inbound_lock, flags); - hv_kvp_onchannelcallback(channel); - spin_unlock_irqrestore(&channel->inbound_lock, flags); -} - static int kvp_handle_handshake(struct hv_kvp_msg *msg) { int ret = 1; @@ -148,7 +139,7 @@ static int kvp_handle_handshake(struct hv_kvp_msg *msg) kvp_register(dm_reg_value); kvp_transaction.active = false; if (kvp_transaction.kvp_context) - poll_channel(kvp_transaction.kvp_context); + hv_kvp_onchannelcallback(kvp_transaction.kvp_context); } return ret; } @@ -561,7 +552,6 @@ response_done: vmbus_sendpacket(channel, recv_buffer, buf_len, req_id, VM_PKT_DATA_INBAND, 0); - poll_channel(channel); } @@ -595,7 +585,7 @@ void hv_kvp_onchannelcallback(void *context) return; } - vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 4, &recvlen, + vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 2, &recvlen, &requestid); if (recvlen > 0) { diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c index 64c778f7756..2f561c5dfe2 100644 --- a/drivers/hv/hv_util.c +++ b/drivers/hv/hv_util.c @@ -279,7 +279,7 @@ static int util_probe(struct hv_device *dev, (struct hv_util_service *)dev_id->driver_data; int ret; - srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL); + srv->recv_buffer = kmalloc(PAGE_SIZE * 2, GFP_KERNEL); if (!srv->recv_buffer) return -ENOMEM; if (srv->util_init) { diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig index e8979bf2fb3..9f444b3d758 100644 --- a/drivers/hwmon/Kconfig +++ b/drivers/hwmon/Kconfig @@ -954,7 +954,7 @@ config SENSORS_NCT6775 config SENSORS_NTC_THERMISTOR tristate "NTC thermistor support" - depends on !OF || IIO=n || IIO + depends on (!OF && !IIO) || (OF && IIO) help This driver supports NTC thermistors sensor reading and its interpretation. 
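/*
 * Illustrative sketch, not part of the patch: the hv_balloon hunks
 * above remove a jiffies-based throttle around status posts to the
 * host (the removed check was !time_after(now, last_post_time + HZ)).
 * The standalone code below mirrors that once-per-second pattern with
 * CLOCK_MONOTONIC; the function name is illustrative, not from the
 * driver.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Return true at most about once per second, like the removed check. */
static bool should_post_status(void)
{
	static bool have_last;
	static struct timespec last_post;
	struct timespec now;
	long elapsed_ms;

	clock_gettime(CLOCK_MONOTONIC, &now);
	elapsed_ms = (now.tv_sec - last_post.tv_sec) * 1000L +
		     (now.tv_nsec - last_post.tv_nsec) / 1000000L;
	if (have_last && elapsed_ms < 1000)
		return false;

	have_last = true;
	last_post = now;
	return true;
}

int main(void)
{
	/* Back-to-back calls: only the first one is allowed through. */
	printf("%d %d\n", should_post_status(), should_post_status());
	return 0;
}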
The driver can also monitor the temperature and diff --git a/drivers/hwmon/adm1021.c b/drivers/hwmon/adm1021.c index 27ad7fb0657..f920619cd6d 100644 --- a/drivers/hwmon/adm1021.c +++ b/drivers/hwmon/adm1021.c @@ -185,7 +185,7 @@ static ssize_t set_temp_max(struct device *dev, struct i2c_client *client = to_i2c_client(dev); struct adm1021_data *data = i2c_get_clientdata(client); long temp; - int reg_val, err; + int err; err = kstrtol(buf, 10, &temp); if (err) @@ -193,11 +193,10 @@ static ssize_t set_temp_max(struct device *dev, temp /= 1000; mutex_lock(&data->update_lock); - reg_val = clamp_val(temp, -128, 127); - data->temp_max[index] = reg_val * 1000; + data->temp_max[index] = clamp_val(temp, -128, 127); if (!read_only) i2c_smbus_write_byte_data(client, ADM1021_REG_TOS_W(index), - reg_val); + data->temp_max[index]); mutex_unlock(&data->update_lock); return count; @@ -211,7 +210,7 @@ static ssize_t set_temp_min(struct device *dev, struct i2c_client *client = to_i2c_client(dev); struct adm1021_data *data = i2c_get_clientdata(client); long temp; - int reg_val, err; + int err; err = kstrtol(buf, 10, &temp); if (err) @@ -219,11 +218,10 @@ static ssize_t set_temp_min(struct device *dev, temp /= 1000; mutex_lock(&data->update_lock); - reg_val = clamp_val(temp, -128, 127); - data->temp_min[index] = reg_val * 1000; + data->temp_min[index] = clamp_val(temp, -128, 127); if (!read_only) i2c_smbus_write_byte_data(client, ADM1021_REG_THYST_W(index), - reg_val); + data->temp_min[index]); mutex_unlock(&data->update_lock); return count; diff --git a/drivers/hwmon/adm1029.c b/drivers/hwmon/adm1029.c index 39441e5d922..9ee5e066423 100644 --- a/drivers/hwmon/adm1029.c +++ b/drivers/hwmon/adm1029.c @@ -232,9 +232,6 @@ static ssize_t set_fan_div(struct device *dev, /* Update the value */ reg = (reg & 0x3F) | (val << 6); - /* Update the cache */ - data->fan_div[attr->index] = reg; - /* Write value */ i2c_smbus_write_byte_data(client, ADM1029_REG_FAN_DIV[attr->index], reg); diff --git a/drivers/hwmon/adm1031.c b/drivers/hwmon/adm1031.c index bdceca0d7e2..253ea396106 100644 --- a/drivers/hwmon/adm1031.c +++ b/drivers/hwmon/adm1031.c @@ -365,7 +365,6 @@ set_auto_temp_min(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, 0, 127000); mutex_lock(&data->update_lock); data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]); adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr), @@ -395,7 +394,6 @@ set_auto_temp_max(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, 0, 127000); mutex_lock(&data->update_lock); data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr], data->pwm[nr]); @@ -698,7 +696,7 @@ static ssize_t set_temp_min(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, 127000); + val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); mutex_lock(&data->update_lock); data->temp_min[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr), @@ -719,7 +717,7 @@ static ssize_t set_temp_max(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, 127000); + val = clamp_val(val, -55000, nr == 0 ? 
127750 : 127875); mutex_lock(&data->update_lock); data->temp_max[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr), @@ -740,7 +738,7 @@ static ssize_t set_temp_crit(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, 127000); + val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); mutex_lock(&data->update_lock); data->temp_crit[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr), diff --git a/drivers/hwmon/ads1015.c b/drivers/hwmon/ads1015.c index 3930a7e7a56..2798246ad81 100644 --- a/drivers/hwmon/ads1015.c +++ b/drivers/hwmon/ads1015.c @@ -184,7 +184,7 @@ static int ads1015_get_channels_config_of(struct i2c_client *client) } channel = be32_to_cpup(property); - if (channel >= ADS1015_CHANNELS) { + if (channel > ADS1015_CHANNELS) { dev_err(&client->dev, "invalid channel index %d on %s\n", channel, node->full_name); @@ -198,7 +198,6 @@ static int ads1015_get_channels_config_of(struct i2c_client *client) dev_err(&client->dev, "invalid gain on %s\n", node->full_name); - return -EINVAL; } } @@ -209,7 +208,6 @@ static int ads1015_get_channels_config_of(struct i2c_client *client) dev_err(&client->dev, "invalid data_rate on %s\n", node->full_name); - return -EINVAL; } } diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c index 79610bdf1d3..58637355c1f 100644 --- a/drivers/hwmon/adt7470.c +++ b/drivers/hwmon/adt7470.c @@ -515,7 +515,7 @@ static ssize_t set_temp_min(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, -128, 127); + temp = clamp_val(temp, 0, 255); mutex_lock(&data->lock); data->temp_min[attr->index] = temp; @@ -549,7 +549,7 @@ static ssize_t set_temp_max(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, -128, 127); + temp = clamp_val(temp, 0, 255); mutex_lock(&data->lock); data->temp_max[attr->index] = temp; @@ -826,7 +826,7 @@ static ssize_t set_pwm_tmin(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, -128, 127); + temp = clamp_val(temp, 0, 255); mutex_lock(&data->lock); data->pwm_tmin[attr->index] = temp; diff --git a/drivers/hwmon/amc6821.c b/drivers/hwmon/amc6821.c index 09d2d78d482..4fe49d2bfe1 100644 --- a/drivers/hwmon/amc6821.c +++ b/drivers/hwmon/amc6821.c @@ -707,7 +707,7 @@ static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, get_temp_alarm, NULL, IDX_TEMP1_MAX); static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, get_temp_alarm, NULL, IDX_TEMP1_CRIT); -static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, +static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO | S_IWUSR, get_temp, NULL, IDX_TEMP2_INPUT); static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, get_temp, set_temp, IDX_TEMP2_MIN); diff --git a/drivers/hwmon/da9052-hwmon.c b/drivers/hwmon/da9052-hwmon.c index 48044b044b7..960fac3fb16 100644 --- a/drivers/hwmon/da9052-hwmon.c +++ b/drivers/hwmon/da9052-hwmon.c @@ -194,7 +194,7 @@ static ssize_t da9052_hwmon_show_name(struct device *dev, struct device_attribute *devattr, char *buf) { - return sprintf(buf, "da9052\n"); + return sprintf(buf, "da9052-hwmon\n"); } static ssize_t show_label(struct device *dev, diff --git a/drivers/hwmon/da9055-hwmon.c b/drivers/hwmon/da9055-hwmon.c index 1b275a2881d..029ecabc438 100644 --- a/drivers/hwmon/da9055-hwmon.c +++ b/drivers/hwmon/da9055-hwmon.c @@ -204,7 +204,7 @@ static ssize_t da9055_hwmon_show_name(struct device *dev, struct device_attribute *devattr, char *buf) { - return 
sprintf(buf, "da9055\n"); + return sprintf(buf, "da9055-hwmon\n"); } static ssize_t show_label(struct device *dev, diff --git a/drivers/hwmon/dme1737.c b/drivers/hwmon/dme1737.c index bea0a344fab..4ae3fff13f4 100644 --- a/drivers/hwmon/dme1737.c +++ b/drivers/hwmon/dme1737.c @@ -247,8 +247,8 @@ struct dme1737_data { u8 pwm_acz[3]; u8 pwm_freq[6]; u8 pwm_rr[2]; - s8 zone_low[3]; - s8 zone_abs[3]; + u8 zone_low[3]; + u8 zone_abs[3]; u8 zone_hyst[2]; u32 alarms; }; @@ -277,7 +277,7 @@ static inline int IN_FROM_REG(int reg, int nominal, int res) return (reg * nominal + (3 << (res - 3))) / (3 << (res - 2)); } -static inline int IN_TO_REG(long val, int nominal) +static inline int IN_TO_REG(int val, int nominal) { return clamp_val((val * 192 + nominal / 2) / nominal, 0, 255); } @@ -293,7 +293,7 @@ static inline int TEMP_FROM_REG(int reg, int res) return (reg * 1000) >> (res - 8); } -static inline int TEMP_TO_REG(long val) +static inline int TEMP_TO_REG(int val) { return clamp_val((val < 0 ? val - 500 : val + 500) / 1000, -128, 127); } @@ -308,7 +308,7 @@ static inline int TEMP_RANGE_FROM_REG(int reg) return TEMP_RANGE[(reg >> 4) & 0x0f]; } -static int TEMP_RANGE_TO_REG(long val, int reg) +static int TEMP_RANGE_TO_REG(int val, int reg) { int i; @@ -331,7 +331,7 @@ static inline int TEMP_HYST_FROM_REG(int reg, int ix) return (((ix == 1) ? reg : reg >> 4) & 0x0f) * 1000; } -static inline int TEMP_HYST_TO_REG(long val, int ix, int reg) +static inline int TEMP_HYST_TO_REG(int val, int ix, int reg) { int hyst = clamp_val((val + 500) / 1000, 0, 15); @@ -347,7 +347,7 @@ static inline int FAN_FROM_REG(int reg, int tpc) return (reg == 0 || reg == 0xffff) ? 0 : 90000 * 60 / reg; } -static inline int FAN_TO_REG(long val, int tpc) +static inline int FAN_TO_REG(int val, int tpc) { if (tpc) { return clamp_val(val / tpc, 0, 0xffff); @@ -379,7 +379,7 @@ static inline int FAN_TYPE_FROM_REG(int reg) return (edge > 0) ? 1 << (edge - 1) : 0; } -static inline int FAN_TYPE_TO_REG(long val, int reg) +static inline int FAN_TYPE_TO_REG(int val, int reg) { int edge = (val == 4) ? 3 : val; @@ -402,7 +402,7 @@ static int FAN_MAX_FROM_REG(int reg) return 1000 + i * 500; } -static int FAN_MAX_TO_REG(long val) +static int FAN_MAX_TO_REG(int val) { int i; @@ -460,7 +460,7 @@ static inline int PWM_ACZ_FROM_REG(int reg) return acz[(reg >> 5) & 0x07]; } -static inline int PWM_ACZ_TO_REG(long val, int reg) +static inline int PWM_ACZ_TO_REG(int val, int reg) { int acz = (val == 4) ? 2 : val - 1; @@ -476,7 +476,7 @@ static inline int PWM_FREQ_FROM_REG(int reg) return PWM_FREQ[reg & 0x0f]; } -static int PWM_FREQ_TO_REG(long val, int reg) +static int PWM_FREQ_TO_REG(int val, int reg) { int i; @@ -510,7 +510,7 @@ static inline int PWM_RR_FROM_REG(int reg, int ix) return (rr & 0x08) ? PWM_RR[rr & 0x07] : 0; } -static int PWM_RR_TO_REG(long val, int ix, int reg) +static int PWM_RR_TO_REG(int val, int ix, int reg) { int i; @@ -528,7 +528,7 @@ static inline int PWM_RR_EN_FROM_REG(int reg, int ix) return PWM_RR_FROM_REG(reg, ix) ? 1 : 0; } -static inline int PWM_RR_EN_TO_REG(long val, int ix, int reg) +static inline int PWM_RR_EN_TO_REG(int val, int ix, int reg) { int en = (ix == 1) ? 
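/*
 * Illustrative sketch, not part of the patch: the dme1737 hunks above
 * (and several other hwmon drivers later in this diff) convert a
 * millidegree value into a signed 8-bit register with round-to-nearest
 * plus clamping. A standalone version of that TEMP_TO_REG-style
 * conversion; clamp_long() is a local stand-in for the kernel's
 * clamp_val().
 */
#include <stdio.h>

static long clamp_long(long val, long lo, long hi)
{
	if (val < lo)
		return lo;
	if (val > hi)
		return hi;
	return val;
}

/* Millidegrees Celsius -> signed 8-bit register at 1 degree per bit. */
static signed char temp_to_reg(long millideg)
{
	/* Bias by half a degree so integer division rounds to nearest. */
	long deg = (millideg < 0 ? millideg - 500 : millideg + 500) / 1000;

	return (signed char)clamp_long(deg, -128, 127);
}

int main(void)
{
	/* 25.6 C rounds to 26, 200 C clamps to 127, -130 C clamps to -128. */
	printf("%d %d %d\n", temp_to_reg(25600), temp_to_reg(200000),
	       temp_to_reg(-130000));
	return 0;
}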
0x80 : 0x08; @@ -1481,16 +1481,13 @@ static ssize_t set_vrm(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { struct dme1737_data *data = dev_get_drvdata(dev); - unsigned long val; + long val; int err; - err = kstrtoul(buf, 10, &val); + err = kstrtol(buf, 10, &val); if (err) return err; - if (val > 255) - return -EINVAL; - data->vrm = val; return count; } diff --git a/drivers/hwmon/emc1403.c b/drivers/hwmon/emc1403.c index 361f50b221b..142e1cb8dea 100644 --- a/drivers/hwmon/emc1403.c +++ b/drivers/hwmon/emc1403.c @@ -162,7 +162,7 @@ static ssize_t store_hyst(struct device *dev, if (retval < 0) goto fail; - hyst = retval * 1000 - val; + hyst = val - retval * 1000; hyst = DIV_ROUND_CLOSEST(hyst, 1000); if (hyst < 0 || hyst > 255) { retval = -ERANGE; @@ -295,7 +295,7 @@ static int emc1403_detect(struct i2c_client *client, } id = i2c_smbus_read_byte_data(client, THERMAL_REVISION_REG); - if (id < 0x01 || id > 0x04) + if (id != 0x01) return -ENODEV; return 0; diff --git a/drivers/hwmon/gpio-fan.c b/drivers/hwmon/gpio-fan.c index ce1d82762ba..3104149795c 100644 --- a/drivers/hwmon/gpio-fan.c +++ b/drivers/hwmon/gpio-fan.c @@ -172,7 +172,7 @@ static int get_fan_speed_index(struct gpio_fan_data *fan_data) return -EINVAL; } -static int rpm_to_speed_index(struct gpio_fan_data *fan_data, unsigned long rpm) +static int rpm_to_speed_index(struct gpio_fan_data *fan_data, int rpm) { struct gpio_fan_speed *speed = fan_data->speed; int i; diff --git a/drivers/hwmon/ina2xx.c b/drivers/hwmon/ina2xx.c index 371c1ee233b..4958b2f89dc 100644 --- a/drivers/hwmon/ina2xx.c +++ b/drivers/hwmon/ina2xx.c @@ -147,8 +147,7 @@ static int ina2xx_get_value(struct ina2xx_data *data, u8 reg) switch (reg) { case INA2XX_SHUNT_VOLTAGE: - /* signed register */ - val = DIV_ROUND_CLOSEST((s16)data->regs[reg], + val = DIV_ROUND_CLOSEST(data->regs[reg], data->config->shunt_div); break; case INA2XX_BUS_VOLTAGE: @@ -160,8 +159,8 @@ static int ina2xx_get_value(struct ina2xx_data *data, u8 reg) val = data->regs[reg] * data->config->power_lsb; break; case INA2XX_CURRENT: - /* signed register, LSB=1mA (selected), in mA */ - val = (s16)data->regs[reg]; + /* LSB=1mA (selected). Is in mA */ + val = data->regs[reg]; break; default: /* programmer goofed */ diff --git a/drivers/hwmon/lm78.c b/drivers/hwmon/lm78.c index b879427e9a4..a2f3b4a365e 100644 --- a/drivers/hwmon/lm78.c +++ b/drivers/hwmon/lm78.c @@ -108,7 +108,7 @@ static inline int FAN_FROM_REG(u8 val, int div) * TEMP: mC (-128C to +127C) * REG: 1C/bit, two's complement */ -static inline s8 TEMP_TO_REG(long val) +static inline s8 TEMP_TO_REG(int val) { int nval = clamp_val(val, -128000, 127000) ; return nval < 0 ? 
(nval - 500) / 1000 : (nval + 500) / 1000; diff --git a/drivers/hwmon/lm85.c b/drivers/hwmon/lm85.c index b9d6e7d0ba3..3894c408fda 100644 --- a/drivers/hwmon/lm85.c +++ b/drivers/hwmon/lm85.c @@ -158,7 +158,7 @@ static inline u16 FAN_TO_REG(unsigned long val) /* Temperature is reported in .001 degC increments */ #define TEMP_TO_REG(val) \ - DIV_ROUND_CLOSEST(clamp_val((val), -127000, 127000), 1000) + clamp_val(SCALE(val, 1000, 1), -127, 127) #define TEMPEXT_FROM_REG(val, ext) \ SCALE(((val) << 4) + (ext), 16, 1000) #define TEMP_FROM_REG(val) ((val) * 1000) @@ -192,7 +192,7 @@ static const int lm85_range_map[] = { 13300, 16000, 20000, 26600, 32000, 40000, 53300, 80000 }; -static int RANGE_TO_REG(long range) +static int RANGE_TO_REG(int range) { int i; @@ -214,7 +214,7 @@ static const int adm1027_freq_map[8] = { /* 1 Hz */ 11, 15, 22, 29, 35, 44, 59, 88 }; -static int FREQ_TO_REG(const int *map, unsigned long freq) +static int FREQ_TO_REG(const int *map, int freq) { int i; @@ -463,9 +463,6 @@ static ssize_t store_vrm_reg(struct device *dev, struct device_attribute *attr, if (err) return err; - if (val > 255) - return -EINVAL; - data->vrm = val; return count; } diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c index c64d3d497c5..9297164a23a 100644 --- a/drivers/hwmon/ntc_thermistor.c +++ b/drivers/hwmon/ntc_thermistor.c @@ -44,7 +44,6 @@ struct ntc_compensation { unsigned int ohm; }; -/* Order matters, ntc_match references the entries by index */ static const struct platform_device_id ntc_thermistor_id[] = { { "ncp15wb473", TYPE_NCPXXWB473 }, { "ncp18wb473", TYPE_NCPXXWB473 }, @@ -142,7 +141,7 @@ struct ntc_data { char name[PLATFORM_NAME_SIZE]; }; -#if defined(CONFIG_OF) && IS_ENABLED(CONFIG_IIO) +#ifdef CONFIG_OF static int ntc_adc_iio_read(struct ntc_thermistor_platform_data *pdata) { struct iio_channel *channel = pdata->chan; @@ -164,15 +163,15 @@ static int ntc_adc_iio_read(struct ntc_thermistor_platform_data *pdata) static const struct of_device_id ntc_match[] = { { .compatible = "ntc,ncp15wb473", - .data = &ntc_thermistor_id[0] }, + .data = &ntc_thermistor_id[TYPE_NCPXXWB473] }, { .compatible = "ntc,ncp18wb473", - .data = &ntc_thermistor_id[1] }, + .data = &ntc_thermistor_id[TYPE_NCPXXWB473] }, { .compatible = "ntc,ncp21wb473", - .data = &ntc_thermistor_id[2] }, + .data = &ntc_thermistor_id[TYPE_NCPXXWB473] }, { .compatible = "ntc,ncp03wb473", - .data = &ntc_thermistor_id[3] }, + .data = &ntc_thermistor_id[TYPE_NCPXXWB473] }, { .compatible = "ntc,ncp15wl333", - .data = &ntc_thermistor_id[4] }, + .data = &ntc_thermistor_id[TYPE_NCPXXWL333] }, { }, }; MODULE_DEVICE_TABLE(of, ntc_match); @@ -224,8 +223,6 @@ ntc_thermistor_parse_dt(struct platform_device *pdev) return NULL; } -#define ntc_match NULL - static void ntc_iio_channel_release(struct ntc_thermistor_platform_data *pdata) { } #endif diff --git a/drivers/hwmon/sis5595.c b/drivers/hwmon/sis5595.c index 9ec7d2e2542..72a889702f0 100644 --- a/drivers/hwmon/sis5595.c +++ b/drivers/hwmon/sis5595.c @@ -159,7 +159,7 @@ static inline int TEMP_FROM_REG(s8 val) { return val * 830 + 52120; } -static inline s8 TEMP_TO_REG(long val) +static inline s8 TEMP_TO_REG(int val) { int nval = clamp_val(val, -54120, 157530) ; return nval < 0 ? 
(nval - 5212 - 415) / 830 : (nval - 5212 + 415) / 830; diff --git a/drivers/hwmon/smsc47m192.c b/drivers/hwmon/smsc47m192.c index 34b9a601ad0..efee4c59239 100644 --- a/drivers/hwmon/smsc47m192.c +++ b/drivers/hwmon/smsc47m192.c @@ -86,7 +86,7 @@ static inline u8 IN_TO_REG(unsigned long val, int n) */ static inline s8 TEMP_TO_REG(int val) { - return SCALE(clamp_val(val, -128000, 127000), 1, 1000); + return clamp_val(SCALE(val, 1, 1000), -128000, 127000); } static inline int TEMP_FROM_REG(s8 val) @@ -384,8 +384,6 @@ static ssize_t set_vrm(struct device *dev, struct device_attribute *attr, err = kstrtoul(buf, 10, &val); if (err) return err; - if (val > 255) - return -EINVAL; data->vrm = val; return count; diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig index 63323cbd357..1c0b382ccc4 100644 --- a/drivers/i2c/busses/Kconfig +++ b/drivers/i2c/busses/Kconfig @@ -109,8 +109,6 @@ config I2C_I801 Avoton (SOC) Wellsburg (PCH) Coleto Creek (PCH) - Wildcat Point-LP (PCH) - BayTrail (SOC) This driver can also be built as a module. If so, the module will be called i2c-i801. diff --git a/drivers/i2c/busses/i2c-at91.c b/drivers/i2c/busses/i2c-at91.c index 09324d0178d..6bb839b688b 100644 --- a/drivers/i2c/busses/i2c-at91.c +++ b/drivers/i2c/busses/i2c-at91.c @@ -102,7 +102,6 @@ struct at91_twi_dev { unsigned twi_cwgr_reg; struct at91_twi_pdata *pdata; bool use_dma; - bool recv_len_abort; struct at91_twi_dma dma; }; @@ -212,7 +211,7 @@ static void at91_twi_write_data_dma_callback(void *data) struct at91_twi_dev *dev = (struct at91_twi_dev *)data; dma_unmap_single(dev->dev, sg_dma_address(&dev->dma.sg), - dev->buf_len, DMA_TO_DEVICE); + dev->buf_len, DMA_MEM_TO_DEV); at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP); } @@ -269,24 +268,12 @@ static void at91_twi_read_next_byte(struct at91_twi_dev *dev) *dev->buf = at91_twi_read(dev, AT91_TWI_RHR) & 0xff; --dev->buf_len; - /* return if aborting, we only needed to read RHR to clear RXRDY*/ - if (dev->recv_len_abort) - return; - /* handle I2C_SMBUS_BLOCK_DATA */ if (unlikely(dev->msg->flags & I2C_M_RECV_LEN)) { - /* ensure length byte is a valid value */ - if (*dev->buf <= I2C_SMBUS_BLOCK_MAX && *dev->buf > 0) { - dev->msg->flags &= ~I2C_M_RECV_LEN; - dev->buf_len += *dev->buf; - dev->msg->len = dev->buf_len + 1; - dev_dbg(dev->dev, "received block length %d\n", - dev->buf_len); - } else { - /* abort and send the stop by reading one more byte */ - dev->recv_len_abort = true; - dev->buf_len = 1; - } + dev->msg->flags &= ~I2C_M_RECV_LEN; + dev->buf_len += *dev->buf; + dev->msg->len = dev->buf_len + 1; + dev_dbg(dev->dev, "received block length %d\n", dev->buf_len); } /* send stop if second but last byte has been read */ @@ -303,7 +290,7 @@ static void at91_twi_read_data_dma_callback(void *data) struct at91_twi_dev *dev = (struct at91_twi_dev *)data; dma_unmap_single(dev->dev, sg_dma_address(&dev->dma.sg), - dev->buf_len, DMA_FROM_DEVICE); + dev->buf_len, DMA_DEV_TO_MEM); /* The last two bytes have to be read without using dma */ dev->buf += dev->buf_len - 2; @@ -435,8 +422,8 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev) } } - ret = wait_for_completion_timeout(&dev->cmd_complete, - dev->adapter.timeout); + ret = wait_for_completion_interruptible_timeout(&dev->cmd_complete, + dev->adapter.timeout); if (ret == 0) { dev_err(dev->dev, "controller timed out\n"); at91_init_twi_bus(dev); @@ -458,12 +445,6 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev) ret = -EIO; goto error; } - if (dev->recv_len_abort) { - dev_err(dev->dev, 
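/*
 * Illustrative sketch, not part of the patch: the i2c-at91 hunk above
 * drops validation of the length byte returned by an I2C_M_RECV_LEN
 * (SMBus block read) transfer. The standalone code below shows only
 * the length-byte check itself; SMBUS_BLOCK_MAX mirrors the kernel's
 * I2C_SMBUS_BLOCK_MAX (32) and the function name is illustrative.
 */
#include <stdbool.h>
#include <stdint.h>

#define SMBUS_BLOCK_MAX 32	/* max data bytes in an SMBus block transfer */

/*
 * The first byte of a block-read reply announces how many data bytes
 * follow; anything outside 1..SMBUS_BLOCK_MAX means the device (or the
 * bus) is misbehaving and the transfer should be aborted.
 */
static bool smbus_block_len_valid(uint8_t len_byte)
{
	return len_byte > 0 && len_byte <= SMBUS_BLOCK_MAX;
}

int main(void)
{
	/* 0 and 33 are rejected, 1..32 are accepted. */
	return (!smbus_block_len_valid(0) && smbus_block_len_valid(32) &&
		!smbus_block_len_valid(33)) ? 0 : 1;
}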
"invalid smbus block length recvd\n"); - ret = -EPROTO; - goto error; - } - dev_dbg(dev->dev, "transfer complete\n"); return 0; @@ -520,7 +501,6 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num) dev->buf_len = m_start->len; dev->buf = m_start->buf; dev->msg = m_start; - dev->recv_len_abort = false; ret = at91_do_twi_transfer(dev); diff --git a/drivers/i2c/busses/i2c-designware-core.c b/drivers/i2c/busses/i2c-designware-core.c index f24a7385260..c41ca6354fc 100644 --- a/drivers/i2c/busses/i2c-designware-core.c +++ b/drivers/i2c/busses/i2c-designware-core.c @@ -380,9 +380,6 @@ static void i2c_dw_xfer_init(struct dw_i2c_dev *dev) ic_con &= ~DW_IC_CON_10BITADDR_MASTER; dw_writel(dev, ic_con, DW_IC_CON); - /* enforce disabled interrupts (due to HW issues) */ - i2c_dw_disable_int(dev); - /* Enable the adapter */ __i2c_dw_enable(dev, true); diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c index 783fa75e13a..4ebceed6bc6 100644 --- a/drivers/i2c/busses/i2c-i801.c +++ b/drivers/i2c/busses/i2c-i801.c @@ -59,8 +59,6 @@ Wellsburg (PCH) MS 0x8d7e 32 hard yes yes yes Wellsburg (PCH) MS 0x8d7f 32 hard yes yes yes Coleto Creek (PCH) 0x23b0 32 hard yes yes yes - Wildcat Point-LP (PCH) 0x9ca2 32 hard yes yes yes - BayTrail (SOC) 0x0f12 32 hard yes yes yes Features supported by this driver: Software PEC no @@ -163,7 +161,6 @@ STATUS_ERROR_FLAGS) /* Older devices have their ID defined in <linux/pci_ids.h> */ -#define PCI_DEVICE_ID_INTEL_BAYTRAIL_SMBUS 0x0f12 #define PCI_DEVICE_ID_INTEL_COUGARPOINT_SMBUS 0x1c22 #define PCI_DEVICE_ID_INTEL_PATSBURG_SMBUS 0x1d22 /* Patsburg also has three 'Integrated Device Function' SMBus controllers */ @@ -181,7 +178,6 @@ #define PCI_DEVICE_ID_INTEL_WELLSBURG_SMBUS_MS1 0x8d7e #define PCI_DEVICE_ID_INTEL_WELLSBURG_SMBUS_MS2 0x8d7f #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_SMBUS 0x9c22 -#define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_SMBUS 0x9ca2 struct i801_mux_config { char *gpio_chip; @@ -824,8 +820,6 @@ static DEFINE_PCI_DEVICE_TABLE(i801_ids) = { { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_WELLSBURG_SMBUS_MS1) }, { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_WELLSBURG_SMBUS_MS2) }, { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_COLETOCREEK_SMBUS) }, - { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_SMBUS) }, - { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BAYTRAIL_SMBUS) }, { 0, } }; diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c index 8a806f5c40c..4ba4a95b6b2 100644 --- a/drivers/i2c/busses/i2c-rcar.c +++ b/drivers/i2c/busses/i2c-rcar.c @@ -541,12 +541,6 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, ret = -EINVAL; for (i = 0; i < num; i++) { - /* This HW can't send STOP after address phase */ - if (msgs[i].len == 0) { - ret = -EOPNOTSUPP; - break; - } - /*-------------- spin lock -----------------*/ spin_lock_irqsave(&priv->lock, flags); @@ -611,8 +605,7 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, static u32 rcar_i2c_func(struct i2c_adapter *adap) { - /* This HW can't do SMBUS_QUICK and NOSTART */ - return I2C_FUNC_I2C | (I2C_FUNC_SMBUS_EMUL & ~I2C_FUNC_SMBUS_QUICK); + return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL; } static const struct i2c_algorithm rcar_i2c_algo = { diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c index 8bbd097a620..6e8ee92ab55 100644 --- a/drivers/i2c/busses/i2c-s3c2410.c +++ b/drivers/i2c/busses/i2c-s3c2410.c @@ -1209,10 +1209,10 @@ static int 
s3c24xx_i2c_resume(struct device *dev) struct platform_device *pdev = to_platform_device(dev); struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev); + i2c->suspended = 0; clk_prepare_enable(i2c->clk); s3c24xx_i2c_init(i2c); clk_disable_unprepare(i2c->clk); - i2c->suspended = 0; return 0; } diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c index 05d2733ef48..f0d6335ae08 100644 --- a/drivers/iio/adc/ad_sigma_delta.c +++ b/drivers/iio/adc/ad_sigma_delta.c @@ -477,7 +477,7 @@ static int ad_sd_probe_trigger(struct iio_dev *indio_dev) goto error_free_irq; /* select default trigger */ - indio_dev->trig = iio_trigger_get(sigma_delta->trig); + indio_dev->trig = sigma_delta->trig; return 0; diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c index 14fdaf0f9d2..e5b88d5d3b5 100644 --- a/drivers/iio/adc/at91_adc.c +++ b/drivers/iio/adc/at91_adc.c @@ -161,11 +161,12 @@ static int at91_adc_channel_init(struct iio_dev *idev) return idev->num_channels; } -static int at91_adc_get_trigger_value_by_name(struct iio_dev *idev, +static u8 at91_adc_get_trigger_value_by_name(struct iio_dev *idev, struct at91_adc_trigger *triggers, const char *trigger_name) { struct at91_adc_state *st = iio_priv(idev); + u8 value = 0; int i; for (i = 0; i < st->trigger_number; i++) { @@ -178,16 +179,15 @@ static int at91_adc_get_trigger_value_by_name(struct iio_dev *idev, return -ENOMEM; if (strcmp(trigger_name, name) == 0) { + value = triggers[i].value; kfree(name); - if (triggers[i].value == 0) - return -EINVAL; - return triggers[i].value; + break; } kfree(name); } - return -EINVAL; + return value; } static int at91_adc_configure_trigger(struct iio_trigger *trig, bool state) @@ -197,14 +197,14 @@ static int at91_adc_configure_trigger(struct iio_trigger *trig, bool state) struct iio_buffer *buffer = idev->buffer; struct at91_adc_reg_desc *reg = st->registers; u32 status = at91_adc_readl(st, reg->trigger_register); - int value; + u8 value; u8 bit; value = at91_adc_get_trigger_value_by_name(idev, st->trigger_list, idev->trig->name); - if (value < 0) - return value; + if (value == 0) + return -EINVAL; if (state) { st->buffer = kmalloc(idev->scan_bytes, GFP_KERNEL); diff --git a/drivers/iio/adc/max1363.c b/drivers/iio/adc/max1363.c index b2b5dcbf712..9e6da72ad82 100644 --- a/drivers/iio/adc/max1363.c +++ b/drivers/iio/adc/max1363.c @@ -1214,8 +1214,8 @@ static const struct max1363_chip_info max1363_chip_info_tbl[] = { .num_modes = ARRAY_SIZE(max1238_mode_list), .default_mode = s0to11, .info = &max1238_info, - .channels = max1038_channels, - .num_channels = ARRAY_SIZE(max1038_channels), + .channels = max1238_channels, + .num_channels = ARRAY_SIZE(max1238_channels), }, [max11605] = { .bits = 8, @@ -1224,8 +1224,8 @@ static const struct max1363_chip_info max1363_chip_info_tbl[] = { .num_modes = ARRAY_SIZE(max1238_mode_list), .default_mode = s0to11, .info = &max1238_info, - .channels = max1038_channels, - .num_channels = ARRAY_SIZE(max1038_channels), + .channels = max1238_channels, + .num_channels = ARRAY_SIZE(max1238_channels), }, [max11606] = { .bits = 10, @@ -1274,8 +1274,8 @@ static const struct max1363_chip_info max1363_chip_info_tbl[] = { .num_modes = ARRAY_SIZE(max1238_mode_list), .default_mode = s0to11, .info = &max1238_info, - .channels = max1138_channels, - .num_channels = ARRAY_SIZE(max1138_channels), + .channels = max1238_channels, + .num_channels = ARRAY_SIZE(max1238_channels), }, [max11611] = { .bits = 10, @@ -1284,8 +1284,8 @@ static const struct max1363_chip_info 
max1363_chip_info_tbl[] = { .num_modes = ARRAY_SIZE(max1238_mode_list), .default_mode = s0to11, .info = &max1238_info, - .channels = max1138_channels, - .num_channels = ARRAY_SIZE(max1138_channels), + .channels = max1238_channels, + .num_channels = ARRAY_SIZE(max1238_channels), }, [max11612] = { .bits = 12, diff --git a/drivers/iio/common/st_sensors/st_sensors_trigger.c b/drivers/iio/common/st_sensors/st_sensors_trigger.c index 8d8ca6f1e16..8fc3a97eb26 100644 --- a/drivers/iio/common/st_sensors/st_sensors_trigger.c +++ b/drivers/iio/common/st_sensors/st_sensors_trigger.c @@ -49,7 +49,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev, dev_err(&indio_dev->dev, "failed to register iio trigger.\n"); goto iio_trigger_register_error; } - indio_dev->trig = iio_trigger_get(sdata->trig); + indio_dev->trig = sdata->trig; return 0; diff --git a/drivers/iio/gyro/itg3200_buffer.c b/drivers/iio/gyro/itg3200_buffer.c index 14917fae2d9..6c43af9bb0a 100644 --- a/drivers/iio/gyro/itg3200_buffer.c +++ b/drivers/iio/gyro/itg3200_buffer.c @@ -135,7 +135,7 @@ int itg3200_probe_trigger(struct iio_dev *indio_dev) goto error_free_irq; /* select default trigger */ - indio_dev->trig = iio_trigger_get(st->trig); + indio_dev->trig = st->trig; return 0; diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c index 111ac381b40..fe4c61e219f 100644 --- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c +++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c @@ -660,7 +660,6 @@ static int inv_mpu_probe(struct i2c_client *client, { struct inv_mpu6050_state *st; struct iio_dev *indio_dev; - struct inv_mpu6050_platform_data *pdata; int result; if (!i2c_check_functionality(client->adapter, @@ -676,10 +675,8 @@ static int inv_mpu_probe(struct i2c_client *client, } st = iio_priv(indio_dev); st->client = client; - pdata = (struct inv_mpu6050_platform_data - *)dev_get_platdata(&client->dev); - if (pdata) - st->plat_data = *pdata; + st->plat_data = *(struct inv_mpu6050_platform_data + *)dev_get_platdata(&client->dev); /* power is turned on inside check chip type*/ result = inv_check_and_setup_chip(st, id); if (result) diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c index 926fccea8de..03b9372c121 100644 --- a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c +++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c @@ -135,7 +135,7 @@ int inv_mpu6050_probe_trigger(struct iio_dev *indio_dev) ret = iio_trigger_register(st->trig); if (ret) goto error_free_irq; - indio_dev->trig = iio_trigger_get(st->trig); + indio_dev->trig = st->trig; return 0; diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c index e13c5f4b12c..aaadd32f9f0 100644 --- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -119,8 +119,7 @@ static ssize_t iio_scan_el_show(struct device *dev, int ret; struct iio_dev *indio_dev = dev_to_iio_dev(dev); - /* Ensure ret is 0 or 1. */ - ret = !!test_bit(to_iio_dev_attr(attr)->address, + ret = test_bit(to_iio_dev_attr(attr)->address, indio_dev->buffer->scan_mask); return sprintf(buf, "%d\n", ret); @@ -763,8 +762,7 @@ int iio_scan_mask_query(struct iio_dev *indio_dev, if (!buffer->scan_mask) return 0; - /* Ensure return value is 0 or 1. 
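/*
 * Illustrative sketch, not part of the patch: the industrialio-buffer
 * hunks above drop the "!!" in front of test_bit(). test_bit() is only
 * guaranteed to return zero or non-zero, so the double negation
 * normalizes the result to exactly 0 or 1 before it is reported
 * through sysfs. A standalone illustration with a plain bitmask
 * (not the kernel's test_bit()):
 */
#include <stdio.h>

static unsigned long scan_mask = 0x08;	/* example mask with bit 3 set */

static int scan_bit_is_set(unsigned int bit)
{
	/* "!!" folds any non-zero masked value down to exactly 1. */
	return !!(scan_mask & (1UL << bit));
}

int main(void)
{
	printf("%d %d\n", scan_bit_is_set(3), scan_bit_is_set(2)); /* 1 0 */
	return 0;
}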
*/ - return !!test_bit(bit, buffer->scan_mask); + return test_bit(bit, buffer->scan_mask); }; EXPORT_SYMBOL_GPL(iio_scan_mask_query); @@ -849,7 +847,7 @@ static int iio_buffer_update_demux(struct iio_dev *indio_dev, /* Now we have the two masks, work from least sig and build up sizes */ for_each_set_bit(out_ind, - buffer->scan_mask, + indio_dev->active_scan_mask, indio_dev->masklength) { in_ind = find_next_bit(indio_dev->active_scan_mask, indio_dev->masklength, diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c index 75772c3e30d..992d50e27a2 100644 --- a/drivers/iio/inkern.c +++ b/drivers/iio/inkern.c @@ -178,12 +178,12 @@ static struct iio_channel *of_iio_channel_get_by_name(struct device_node *np, index = of_property_match_string(np, "io-channel-names", name); chan = of_iio_channel_get(np, index); - if (!IS_ERR(chan) || PTR_ERR(chan) == -EPROBE_DEFER) + if (!IS_ERR(chan)) break; else if (name && index >= 0) { pr_err("ERROR: could not get IIO channel %s:%s(%i)\n", np->full_name, name ? name : "", index); - return NULL; + return chan; } /* @@ -193,9 +193,8 @@ static struct iio_channel *of_iio_channel_get_by_name(struct device_node *np, */ np = np->parent; if (np && !of_get_property(np, "io-channel-ranges", NULL)) - return NULL; + break; } - return chan; } @@ -318,7 +317,6 @@ struct iio_channel *iio_channel_get(struct device *dev, if ((channel != NULL) && (!IS_ERR(channel))) return channel; } - return iio_channel_get_sys(name, channel_name); } EXPORT_SYMBOL_GPL(iio_channel_get); diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c index 681e63fd116..21c8f48341d 100644 --- a/drivers/iio/magnetometer/ak8975.c +++ b/drivers/iio/magnetometer/ak8975.c @@ -276,6 +276,8 @@ static int ak8975_read_axis(struct iio_dev *indio_dev, int index, int *val) { struct ak8975_data *data = iio_priv(indio_dev); struct i2c_client *client = data->client; + u16 meas_reg; + s16 raw; int ret; mutex_lock(&data->lock); @@ -320,11 +322,16 @@ static int ak8975_read_axis(struct iio_dev *indio_dev, int index, int *val) dev_err(&client->dev, "Read axis data fails\n"); goto exit; } + meas_reg = ret; mutex_unlock(&data->lock); + /* Endian conversion of the measured values. */ + raw = (s16) (le16_to_cpu(meas_reg)); + /* Clamp to valid range. 
*/ - *val = clamp_t(s16, ret, -4096, 4095); + raw = clamp_t(s16, raw, -4096, 4095); + *val = raw; return IIO_VAL_INT; exit: diff --git a/drivers/iio/magnetometer/st_magn_core.c b/drivers/iio/magnetometer/st_magn_core.c index 3ce3769c082..16f0d6df239 100644 --- a/drivers/iio/magnetometer/st_magn_core.c +++ b/drivers/iio/magnetometer/st_magn_core.c @@ -40,8 +40,7 @@ #define ST_MAGN_FS_AVL_5600MG 5600 #define ST_MAGN_FS_AVL_8000MG 8000 #define ST_MAGN_FS_AVL_8100MG 8100 -#define ST_MAGN_FS_AVL_12000MG 12000 -#define ST_MAGN_FS_AVL_16000MG 16000 +#define ST_MAGN_FS_AVL_10000MG 10000 /* CUSTOM VALUES FOR SENSOR 1 */ #define ST_MAGN_1_WAI_EXP 0x3c @@ -68,20 +67,20 @@ #define ST_MAGN_1_FS_AVL_4700_VAL 0x05 #define ST_MAGN_1_FS_AVL_5600_VAL 0x06 #define ST_MAGN_1_FS_AVL_8100_VAL 0x07 -#define ST_MAGN_1_FS_AVL_1300_GAIN_XY 909 -#define ST_MAGN_1_FS_AVL_1900_GAIN_XY 1169 -#define ST_MAGN_1_FS_AVL_2500_GAIN_XY 1492 -#define ST_MAGN_1_FS_AVL_4000_GAIN_XY 2222 -#define ST_MAGN_1_FS_AVL_4700_GAIN_XY 2500 -#define ST_MAGN_1_FS_AVL_5600_GAIN_XY 3030 -#define ST_MAGN_1_FS_AVL_8100_GAIN_XY 4347 -#define ST_MAGN_1_FS_AVL_1300_GAIN_Z 1020 -#define ST_MAGN_1_FS_AVL_1900_GAIN_Z 1315 -#define ST_MAGN_1_FS_AVL_2500_GAIN_Z 1666 -#define ST_MAGN_1_FS_AVL_4000_GAIN_Z 2500 -#define ST_MAGN_1_FS_AVL_4700_GAIN_Z 2816 -#define ST_MAGN_1_FS_AVL_5600_GAIN_Z 3389 -#define ST_MAGN_1_FS_AVL_8100_GAIN_Z 4878 +#define ST_MAGN_1_FS_AVL_1300_GAIN_XY 1100 +#define ST_MAGN_1_FS_AVL_1900_GAIN_XY 855 +#define ST_MAGN_1_FS_AVL_2500_GAIN_XY 670 +#define ST_MAGN_1_FS_AVL_4000_GAIN_XY 450 +#define ST_MAGN_1_FS_AVL_4700_GAIN_XY 400 +#define ST_MAGN_1_FS_AVL_5600_GAIN_XY 330 +#define ST_MAGN_1_FS_AVL_8100_GAIN_XY 230 +#define ST_MAGN_1_FS_AVL_1300_GAIN_Z 980 +#define ST_MAGN_1_FS_AVL_1900_GAIN_Z 760 +#define ST_MAGN_1_FS_AVL_2500_GAIN_Z 600 +#define ST_MAGN_1_FS_AVL_4000_GAIN_Z 400 +#define ST_MAGN_1_FS_AVL_4700_GAIN_Z 355 +#define ST_MAGN_1_FS_AVL_5600_GAIN_Z 295 +#define ST_MAGN_1_FS_AVL_8100_GAIN_Z 205 #define ST_MAGN_1_MULTIREAD_BIT false /* CUSTOM VALUES FOR SENSOR 2 */ @@ -104,12 +103,10 @@ #define ST_MAGN_2_FS_MASK 0x60 #define ST_MAGN_2_FS_AVL_4000_VAL 0x00 #define ST_MAGN_2_FS_AVL_8000_VAL 0x01 -#define ST_MAGN_2_FS_AVL_12000_VAL 0x02 -#define ST_MAGN_2_FS_AVL_16000_VAL 0x03 -#define ST_MAGN_2_FS_AVL_4000_GAIN 146 -#define ST_MAGN_2_FS_AVL_8000_GAIN 292 -#define ST_MAGN_2_FS_AVL_12000_GAIN 438 -#define ST_MAGN_2_FS_AVL_16000_GAIN 584 +#define ST_MAGN_2_FS_AVL_10000_VAL 0x02 +#define ST_MAGN_2_FS_AVL_4000_GAIN 430 +#define ST_MAGN_2_FS_AVL_8000_GAIN 230 +#define ST_MAGN_2_FS_AVL_10000_GAIN 230 #define ST_MAGN_2_MULTIREAD_BIT false #define ST_MAGN_2_OUT_X_L_ADDR 0x28 #define ST_MAGN_2_OUT_Y_L_ADDR 0x2a @@ -255,14 +252,9 @@ static const struct st_sensors st_magn_sensors[] = { .gain = ST_MAGN_2_FS_AVL_8000_GAIN, }, [2] = { - .num = ST_MAGN_FS_AVL_12000MG, - .value = ST_MAGN_2_FS_AVL_12000_VAL, - .gain = ST_MAGN_2_FS_AVL_12000_GAIN, - }, - [3] = { - .num = ST_MAGN_FS_AVL_16000MG, - .value = ST_MAGN_2_FS_AVL_16000_VAL, - .gain = ST_MAGN_2_FS_AVL_16000_GAIN, + .num = ST_MAGN_FS_AVL_10000MG, + .value = ST_MAGN_2_FS_AVL_10000_VAL, + .gain = ST_MAGN_2_FS_AVL_10000_GAIN, }, }, }, diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c index 4293e89bbbd..c47c2034ca7 100644 --- a/drivers/infiniband/core/iwcm.c +++ b/drivers/infiniband/core/iwcm.c @@ -46,7 +46,6 @@ #include <linux/completion.h> #include <linux/slab.h> #include <linux/module.h> -#include <linux/sysctl.h> #include <rdma/iw_cm.h> #include <rdma/ib_addr.h> @@ 
-66,20 +65,6 @@ struct iwcm_work { struct list_head free_list; }; -static unsigned int default_backlog = 256; - -static struct ctl_table_header *iwcm_ctl_table_hdr; -static struct ctl_table iwcm_ctl_table[] = { - { - .procname = "default_backlog", - .data = &default_backlog, - .maxlen = sizeof(default_backlog), - .mode = 0644, - .proc_handler = proc_dointvec, - }, - { } -}; - /* * The following services provide a mechanism for pre-allocating iwcm_work * elements. The design pre-allocates them based on the cm_id type: @@ -434,9 +419,6 @@ int iw_cm_listen(struct iw_cm_id *cm_id, int backlog) cm_id_priv = container_of(cm_id, struct iwcm_id_private, id); - if (!backlog) - backlog = default_backlog; - ret = alloc_work_entries(cm_id_priv, backlog); if (ret) return ret; @@ -1042,20 +1024,11 @@ static int __init iw_cm_init(void) if (!iwcm_wq) return -ENOMEM; - iwcm_ctl_table_hdr = register_net_sysctl(&init_net, "net/iw_cm", - iwcm_ctl_table); - if (!iwcm_ctl_table_hdr) { - pr_err("iw_cm: couldn't register sysctl paths\n"); - destroy_workqueue(iwcm_wq); - return -ENOMEM; - } - return 0; } static void __exit iw_cm_cleanup(void) { - unregister_net_sysctl_table(iwcm_ctl_table_hdr); destroy_workqueue(iwcm_wq); } diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c index 1acb9910055..f0d588f8859 100644 --- a/drivers/infiniband/core/user_mad.c +++ b/drivers/infiniband/core/user_mad.c @@ -98,7 +98,7 @@ struct ib_umad_port { struct ib_umad_device { int start_port, end_port; - struct kobject kobj; + struct kref ref; struct ib_umad_port port[0]; }; @@ -134,18 +134,14 @@ static DECLARE_BITMAP(dev_map, IB_UMAD_MAX_PORTS); static void ib_umad_add_one(struct ib_device *device); static void ib_umad_remove_one(struct ib_device *device); -static void ib_umad_release_dev(struct kobject *kobj) +static void ib_umad_release_dev(struct kref *ref) { struct ib_umad_device *dev = - container_of(kobj, struct ib_umad_device, kobj); + container_of(ref, struct ib_umad_device, ref); kfree(dev); } -static struct kobj_type ib_umad_dev_ktype = { - .release = ib_umad_release_dev, -}; - static int hdr_size(struct ib_umad_file *file) { return file->use_pkey_index ? 
sizeof (struct ib_user_mad_hdr) : @@ -784,19 +780,27 @@ static int ib_umad_open(struct inode *inode, struct file *filp) { struct ib_umad_port *port; struct ib_umad_file *file; - int ret = -ENXIO; + int ret; port = container_of(inode->i_cdev, struct ib_umad_port, cdev); + if (port) + kref_get(&port->umad_dev->ref); + else + return -ENXIO; mutex_lock(&port->file_mutex); - if (!port->ib_dev) + if (!port->ib_dev) { + ret = -ENXIO; goto out; + } - ret = -ENOMEM; file = kzalloc(sizeof *file, GFP_KERNEL); - if (!file) + if (!file) { + kref_put(&port->umad_dev->ref, ib_umad_release_dev); + ret = -ENOMEM; goto out; + } mutex_init(&file->mutex); spin_lock_init(&file->send_lock); @@ -810,13 +814,6 @@ static int ib_umad_open(struct inode *inode, struct file *filp) list_add_tail(&file->port_list, &port->file_list); ret = nonseekable_open(inode, filp); - if (ret) { - list_del(&file->port_list); - kfree(file); - goto out; - } - - kobject_get(&port->umad_dev->kobj); out: mutex_unlock(&port->file_mutex); @@ -855,7 +852,7 @@ static int ib_umad_close(struct inode *inode, struct file *filp) mutex_unlock(&file->port->file_mutex); kfree(file); - kobject_put(&dev->kobj); + kref_put(&dev->ref, ib_umad_release_dev); return 0; } @@ -883,6 +880,10 @@ static int ib_umad_sm_open(struct inode *inode, struct file *filp) int ret; port = container_of(inode->i_cdev, struct ib_umad_port, sm_cdev); + if (port) + kref_get(&port->umad_dev->ref); + else + return -ENXIO; if (filp->f_flags & O_NONBLOCK) { if (down_trylock(&port->sm_sem)) { @@ -897,27 +898,17 @@ static int ib_umad_sm_open(struct inode *inode, struct file *filp) } ret = ib_modify_port(port->ib_dev, port->port_num, 0, &props); - if (ret) - goto err_up_sem; + if (ret) { + up(&port->sm_sem); + goto fail; + } filp->private_data = port; - ret = nonseekable_open(inode, filp); - if (ret) - goto err_clr_sm_cap; - - kobject_get(&port->umad_dev->kobj); - - return 0; - -err_clr_sm_cap: - swap(props.set_port_cap_mask, props.clr_port_cap_mask); - ib_modify_port(port->ib_dev, port->port_num, 0, &props); - -err_up_sem: - up(&port->sm_sem); + return nonseekable_open(inode, filp); fail: + kref_put(&port->umad_dev->ref, ib_umad_release_dev); return ret; } @@ -936,7 +927,7 @@ static int ib_umad_sm_close(struct inode *inode, struct file *filp) up(&port->sm_sem); - kobject_put(&port->umad_dev->kobj); + kref_put(&port->umad_dev->ref, ib_umad_release_dev); return ret; } @@ -1004,7 +995,6 @@ static int find_overflow_devnum(void) } static int ib_umad_init_port(struct ib_device *device, int port_num, - struct ib_umad_device *umad_dev, struct ib_umad_port *port) { int devnum; @@ -1037,7 +1027,6 @@ static int ib_umad_init_port(struct ib_device *device, int port_num, cdev_init(&port->cdev, &umad_fops); port->cdev.owner = THIS_MODULE; - port->cdev.kobj.parent = &umad_dev->kobj; kobject_set_name(&port->cdev.kobj, "umad%d", port->dev_num); if (cdev_add(&port->cdev, base, 1)) goto err_cdev; @@ -1056,7 +1045,6 @@ static int ib_umad_init_port(struct ib_device *device, int port_num, base += IB_UMAD_MAX_PORTS; cdev_init(&port->sm_cdev, &umad_sm_fops); port->sm_cdev.owner = THIS_MODULE; - port->sm_cdev.kobj.parent = &umad_dev->kobj; kobject_set_name(&port->sm_cdev.kobj, "issm%d", port->dev_num); if (cdev_add(&port->sm_cdev, base, 1)) goto err_sm_cdev; @@ -1150,7 +1138,7 @@ static void ib_umad_add_one(struct ib_device *device) if (!umad_dev) return; - kobject_init(&umad_dev->kobj, &ib_umad_dev_ktype); + kref_init(&umad_dev->ref); umad_dev->start_port = s; umad_dev->end_port = e; @@ -1158,8 +1146,7 @@ 
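/*
 * Illustrative sketch, not part of the patch: the user_mad.c hunks
 * above move the device lifetime from a kobject back to a kref (the
 * revert direction). The standalone code below mirrors the
 * get/put-with-release pattern using C11 atomics; the struct and
 * function names are illustrative, not the kernel's kref API.
 */
#include <stdatomic.h>
#include <stdlib.h>

struct umad_dev {
	atomic_int ref;		/* reference count, starts at 1 */
	/* ... per-device state would live here ... */
};

static struct umad_dev *umad_dev_alloc(void)
{
	struct umad_dev *dev = calloc(1, sizeof(*dev));

	if (dev)
		atomic_store(&dev->ref, 1);	/* kref_init() analogue */
	return dev;
}

static void umad_dev_get(struct umad_dev *dev)
{
	atomic_fetch_add(&dev->ref, 1);		/* kref_get() analogue */
}

/* Drop a reference; the last put runs the release step and frees. */
static void umad_dev_put(struct umad_dev *dev)
{
	if (atomic_fetch_sub(&dev->ref, 1) == 1)
		free(dev);			/* release callback analogue */
}

int main(void)
{
	struct umad_dev *dev = umad_dev_alloc();

	if (!dev)
		return 1;
	umad_dev_get(dev);	/* e.g. taken when a file is opened */
	umad_dev_put(dev);	/* ... and dropped on close */
	umad_dev_put(dev);	/* final put frees the device */
	return 0;
}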
static void ib_umad_add_one(struct ib_device *device) for (i = s; i <= e; ++i) { umad_dev->port[i - s].umad_dev = umad_dev; - if (ib_umad_init_port(device, i, umad_dev, - &umad_dev->port[i - s])) + if (ib_umad_init_port(device, i, &umad_dev->port[i - s])) goto err; } @@ -1171,7 +1158,7 @@ err: while (--i >= s) ib_umad_kill_port(&umad_dev->port[i - s]); - kobject_put(&umad_dev->kobj); + kref_put(&umad_dev->ref, ib_umad_release_dev); } static void ib_umad_remove_one(struct ib_device *device) @@ -1185,7 +1172,7 @@ static void ib_umad_remove_one(struct ib_device *device) for (i = 0; i <= umad_dev->end_port - umad_dev->start_port; ++i) ib_umad_kill_port(&umad_dev->port[i]); - kobject_put(&umad_dev->kobj); + kref_put(&umad_dev->ref, ib_umad_release_dev); } static char *umad_devnode(struct device *dev, umode_t *mode) diff --git a/drivers/infiniband/hw/ehca/ehca_cq.c b/drivers/infiniband/hw/ehca/ehca_cq.c index 8cc83753776..212150c25ea 100644 --- a/drivers/infiniband/hw/ehca/ehca_cq.c +++ b/drivers/infiniband/hw/ehca/ehca_cq.c @@ -283,7 +283,6 @@ struct ib_cq *ehca_create_cq(struct ib_device *device, int cqe, int comp_vector, (my_cq->galpas.user.fw_handle & (PAGE_SIZE - 1)); if (ib_copy_to_udata(udata, &resp, sizeof(resp))) { ehca_err(device, "Copy to udata failed."); - cq = ERR_PTR(-EFAULT); goto create_cq_exit4; } } diff --git a/drivers/infiniband/hw/ipath/ipath_diag.c b/drivers/infiniband/hw/ipath/ipath_diag.c index 45802e97332..714293b7851 100644 --- a/drivers/infiniband/hw/ipath/ipath_diag.c +++ b/drivers/infiniband/hw/ipath/ipath_diag.c @@ -326,7 +326,7 @@ static ssize_t ipath_diagpkt_write(struct file *fp, size_t count, loff_t *off) { u32 __iomem *piobuf; - u32 plen, pbufn, maxlen_reserve; + u32 plen, clen, pbufn; struct ipath_diag_pkt odp; struct ipath_diag_xpkt dp; u32 *tmpbuf = NULL; @@ -335,24 +335,42 @@ static ssize_t ipath_diagpkt_write(struct file *fp, u64 val; u32 l_state, lt_state; /* LinkState, LinkTrainingState */ + if (count < sizeof(odp)) { + ret = -EINVAL; + goto bail; + } if (count == sizeof(dp)) { if (copy_from_user(&dp, data, sizeof(dp))) { ret = -EFAULT; goto bail; } - } else if (count == sizeof(odp)) { - if (copy_from_user(&odp, data, sizeof(odp))) { - ret = -EFAULT; - goto bail; - } - dp.len = odp.len; + } else if (copy_from_user(&odp, data, sizeof(odp))) { + ret = -EFAULT; + goto bail; + } + + /* + * Due to padding/alignment issues (lessened with new struct) + * the old and new structs are the same length. We need to + * disambiguate them, which we can do because odp.len has never + * been less than the total of LRH+BTH+DETH so far, while + * dp.unit (same offset) unit is unlikely to get that high. + * Similarly, dp.data, the pointer to user at the same offset + * as odp.unit, is almost certainly at least one (512byte)page + * "above" NULL. The if-block below can be omitted if compatibility + * between a new driver and older diagnostic code is unimportant. + * compatibility the other direction (new diags, old driver) is + * handled in the diagnostic code, with a warning. + */ + if (dp.unit >= 20 && dp.data < 512) { + /* very probable version mismatch. 
Fix it up */ + memcpy(&odp, &dp, sizeof(odp)); + /* We got a legacy dp, copy elements to dp */ dp.unit = odp.unit; dp.data = odp.data; - dp.pbc_wd = 0; - } else { - ret = -EINVAL; - goto bail; + dp.len = odp.len; + dp.pbc_wd = 0; /* Indicate we need to compute PBC wd */ } /* send count must be an exact number of dwords */ @@ -361,7 +379,7 @@ static ssize_t ipath_diagpkt_write(struct file *fp, goto bail; } - plen = dp.len >> 2; + clen = dp.len >> 2; dd = ipath_lookup(dp.unit); if (!dd || !(dd->ipath_flags & IPATH_PRESENT) || @@ -404,22 +422,16 @@ static ssize_t ipath_diagpkt_write(struct file *fp, goto bail; } - /* - * need total length before first word written, plus 2 Dwords. One Dword - * is for padding so we get the full user data when not aligned on - * a word boundary. The other Dword is to make sure we have room for the - * ICRC which gets tacked on later. - */ - maxlen_reserve = 2 * sizeof(u32); - if (dp.len > dd->ipath_ibmaxlen - maxlen_reserve) { + /* need total length before first word written */ + /* +1 word is for the qword padding */ + plen = sizeof(u32) + dp.len; + + if ((plen + 4) > dd->ipath_ibmaxlen) { ipath_dbg("Pkt len 0x%x > ibmaxlen %x\n", - dp.len, dd->ipath_ibmaxlen); + plen - 4, dd->ipath_ibmaxlen); ret = -EINVAL; - goto bail; + goto bail; /* before writing pbc */ } - - plen = sizeof(u32) + dp.len; - tmpbuf = vmalloc(plen); if (!tmpbuf) { dev_info(&dd->pcidev->dev, "Unable to allocate tmp buffer, " @@ -461,11 +473,11 @@ static ssize_t ipath_diagpkt_write(struct file *fp, */ if (dd->ipath_flags & IPATH_PIO_FLUSH_WC) { ipath_flush_wc(); - __iowrite32_copy(piobuf + 2, tmpbuf, plen - 1); + __iowrite32_copy(piobuf + 2, tmpbuf, clen - 1); ipath_flush_wc(); - __raw_writel(tmpbuf[plen - 1], piobuf + plen + 1); + __raw_writel(tmpbuf[clen - 1], piobuf + clen + 1); } else - __iowrite32_copy(piobuf + 2, tmpbuf, plen); + __iowrite32_copy(piobuf + 2, tmpbuf, clen); ipath_flush_wc(); diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c index 42dde06fdb9..5b71d43bd89 100644 --- a/drivers/infiniband/hw/mthca/mthca_provider.c +++ b/drivers/infiniband/hw/mthca/mthca_provider.c @@ -695,7 +695,6 @@ static struct ib_cq *mthca_create_cq(struct ib_device *ibdev, int entries, if (context && ib_copy_to_udata(udata, &cq->cqn, sizeof (__u32))) { mthca_free_cq(to_mdev(ibdev), cq); - err = -EFAULT; goto err_free; } diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c index 7510a3c8075..8f67fe2e91e 100644 --- a/drivers/infiniband/hw/nes/nes_verbs.c +++ b/drivers/infiniband/hw/nes/nes_verbs.c @@ -1186,7 +1186,7 @@ static struct ib_qp *nes_create_qp(struct ib_pd *ibpd, nes_free_resource(nesadapter, nesadapter->allocated_qps, qp_num); kfree(nesqp->allocated_buffer); nes_debug(NES_DBG_QP, "ib_copy_from_udata() Failed \n"); - return ERR_PTR(-EFAULT); + return NULL; } if (req.user_wqe_buffers) { virt_wqs = 1; diff --git a/drivers/infiniband/hw/qib/qib_mad.c b/drivers/infiniband/hw/qib/qib_mad.c index 1dd9fcbb7c9..ccb119143d2 100644 --- a/drivers/infiniband/hw/qib/qib_mad.c +++ b/drivers/infiniband/hw/qib/qib_mad.c @@ -1028,7 +1028,7 @@ static int set_pkeys(struct qib_devdata *dd, u8 port, u16 *pkeys) event.event = IB_EVENT_PKEY_CHANGE; event.device = &dd->verbs_dev.ibdev; - event.element.port_num = port; + event.element.port_num = 1; ib_dispatch_event(&event); } return 0; diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c index acb3865710c..6fc283a041d 100644 --- 
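/*
 * Illustrative sketch, not part of the patch: the ipath_diag comment
 * above describes telling two user structs of identical size apart by
 * inspecting field values (a unit number stays small, while a user
 * pointer sits well above the first 512 bytes). The standalone code
 * below condenses that heuristic; the struct layout is a simplified
 * stand-in, not the real ipath ABI.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct diag_pkt {	/* the "new" layout; the old one is the same size */
	uint32_t unit;	/* per the driver comment, the old layout keeps the length here */
	uint64_t data;	/* ... and the old layout keeps the unit number here */
	uint32_t len;
};

static bool looks_like_old_layout(const struct diag_pkt *dp)
{
	/*
	 * Heuristic from the driver comment: a packet length is always
	 * at least ~20 bytes while a unit number is tiny, and a real
	 * user pointer is far above 512, so this combination almost
	 * certainly means the caller filled in the old struct.
	 */
	return dp->unit >= 20 && dp->data < 512;
}

int main(void)
{
	struct diag_pkt newish = { .unit = 0, .data = 0x7fff00001000ull, .len = 64 };
	struct diag_pkt oldish = { .unit = 64, .data = 1, .len = 0 };

	printf("%d %d\n", looks_like_old_layout(&newish),
	       looks_like_old_layout(&oldish));	/* prints "0 1" */
	return 0;
}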
a/drivers/infiniband/ulp/isert/ib_isert.c +++ b/drivers/infiniband/ulp/isert/ib_isert.c @@ -27,7 +27,6 @@ #include <target/target_core_base.h> #include <target/target_core_fabric.h> #include <target/iscsi/iscsi_transport.h> -#include <linux/semaphore.h> #include "isert_proto.h" #include "ib_isert.h" @@ -382,14 +381,6 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event) struct ib_device *ib_dev = cma_id->device; int ret = 0; - spin_lock_bh(&np->np_thread_lock); - if (!np->enabled) { - spin_unlock_bh(&np->np_thread_lock); - pr_debug("iscsi_np is not enabled, reject connect request\n"); - return rdma_reject(cma_id, NULL, 0); - } - spin_unlock_bh(&np->np_thread_lock); - pr_debug("Entering isert_connect_request cma_id: %p, context: %p\n", cma_id, cma_id->context); @@ -401,9 +392,10 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event) isert_conn->state = ISER_CONN_INIT; INIT_LIST_HEAD(&isert_conn->conn_accept_node); init_completion(&isert_conn->conn_login_comp); - init_completion(&isert_conn->conn_wait); - init_completion(&isert_conn->conn_wait_comp_err); + init_waitqueue_head(&isert_conn->conn_wait); + init_waitqueue_head(&isert_conn->conn_wait_comp_err); kref_init(&isert_conn->conn_kref); + kref_get(&isert_conn->conn_kref); mutex_init(&isert_conn->conn_mutex); cma_id->context = isert_conn; @@ -467,11 +459,11 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event) goto out_conn_dev; mutex_lock(&isert_np->np_accept_mutex); - list_add_tail(&isert_conn->conn_accept_node, &isert_np->np_accept_list); + list_add_tail(&isert_np->np_accept_list, &isert_conn->conn_accept_node); mutex_unlock(&isert_np->np_accept_mutex); - pr_debug("isert_connect_request() up np_sem np: %p\n", np); - up(&isert_np->np_sem); + pr_debug("isert_connect_request() waking up np_accept_wq: %p\n", np); + wake_up(&isert_np->np_accept_wq); return 0; out_conn_dev: @@ -529,9 +521,7 @@ isert_connect_release(struct isert_conn *isert_conn) static void isert_connected_handler(struct rdma_cm_id *cma_id) { - struct isert_conn *isert_conn = cma_id->context; - - kref_get(&isert_conn->conn_kref); + return; } static void @@ -560,11 +550,11 @@ isert_disconnect_work(struct work_struct *work) pr_debug("isert_disconnect_work(): >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n"); mutex_lock(&isert_conn->conn_mutex); - if (isert_conn->state == ISER_CONN_UP) - isert_conn->state = ISER_CONN_TERMINATING; + isert_conn->state = ISER_CONN_DOWN; if (isert_conn->post_recv_buf_count == 0 && atomic_read(&isert_conn->post_send_buf_count) == 0) { + pr_debug("Calling wake_up(&isert_conn->conn_wait);\n"); mutex_unlock(&isert_conn->conn_mutex); goto wake_up; } @@ -573,24 +563,26 @@ isert_disconnect_work(struct work_struct *work) isert_put_conn(isert_conn); return; } - - if (isert_conn->disconnect) { - /* Send DREQ/DREP towards our initiator */ + if (!isert_conn->logout_posted) { + pr_debug("Calling rdma_disconnect for !logout_posted from" + " isert_disconnect_work\n"); rdma_disconnect(isert_conn->conn_cm_id); + mutex_unlock(&isert_conn->conn_mutex); + iscsit_cause_connection_reinstatement(isert_conn->conn, 0); + goto wake_up; } - mutex_unlock(&isert_conn->conn_mutex); wake_up: - complete(&isert_conn->conn_wait); + wake_up(&isert_conn->conn_wait); + isert_put_conn(isert_conn); } static void -isert_disconnected_handler(struct rdma_cm_id *cma_id, bool disconnect) +isert_disconnected_handler(struct rdma_cm_id *cma_id) { struct isert_conn *isert_conn = (struct isert_conn *)cma_id->context; - 
isert_conn->disconnect = disconnect; INIT_WORK(&isert_conn->conn_logout_work, isert_disconnect_work); schedule_work(&isert_conn->conn_logout_work); } @@ -599,28 +591,29 @@ static int isert_cma_handler(struct rdma_cm_id *cma_id, struct rdma_cm_event *event) { int ret = 0; - bool disconnect = false; pr_debug("isert_cma_handler: event %d status %d conn %p id %p\n", event->event, event->status, cma_id->context, cma_id); switch (event->event) { case RDMA_CM_EVENT_CONNECT_REQUEST: + pr_debug("RDMA_CM_EVENT_CONNECT_REQUEST: >>>>>>>>>>>>>>>\n"); ret = isert_connect_request(cma_id, event); break; case RDMA_CM_EVENT_ESTABLISHED: + pr_debug("RDMA_CM_EVENT_ESTABLISHED >>>>>>>>>>>>>>\n"); isert_connected_handler(cma_id); break; - case RDMA_CM_EVENT_ADDR_CHANGE: /* FALLTHRU */ - case RDMA_CM_EVENT_DISCONNECTED: /* FALLTHRU */ - case RDMA_CM_EVENT_DEVICE_REMOVAL: /* FALLTHRU */ - disconnect = true; - case RDMA_CM_EVENT_TIMEWAIT_EXIT: /* FALLTHRU */ - isert_disconnected_handler(cma_id, disconnect); + case RDMA_CM_EVENT_DISCONNECTED: + pr_debug("RDMA_CM_EVENT_DISCONNECTED: >>>>>>>>>>>>>>\n"); + isert_disconnected_handler(cma_id); + break; + case RDMA_CM_EVENT_DEVICE_REMOVAL: + case RDMA_CM_EVENT_ADDR_CHANGE: break; case RDMA_CM_EVENT_CONNECT_ERROR: default: - pr_err("Unhandled RDMA CMA event: %d\n", event->event); + pr_err("Unknown RDMA CMA event: %d\n", event->event); break; } @@ -971,8 +964,6 @@ sequence_cmd: if (!rc && dump_payload == false && unsol_data) iscsit_set_unsoliticed_dataout(cmd); - else if (dump_payload && imm_data) - target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd); return 0; } @@ -1210,7 +1201,7 @@ isert_unmap_cmd(struct isert_cmd *isert_cmd, struct isert_conn *isert_conn) } static void -isert_put_cmd(struct isert_cmd *isert_cmd, bool comp_err) +isert_put_cmd(struct isert_cmd *isert_cmd) { struct iscsi_cmd *cmd = &isert_cmd->iscsi_cmd; struct isert_conn *isert_conn = isert_cmd->conn; @@ -1222,24 +1213,11 @@ isert_put_cmd(struct isert_cmd *isert_cmd, bool comp_err) case ISCSI_OP_SCSI_CMD: spin_lock_bh(&conn->cmd_lock); if (!list_empty(&cmd->i_conn_node)) - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); - if (cmd->data_direction == DMA_TO_DEVICE) { + if (cmd->data_direction == DMA_TO_DEVICE) iscsit_stop_dataout_timer(cmd); - /* - * Check for special case during comp_err where - * WRITE_PENDING has been handed off from core, - * but requires an extra target_put_sess_cmd() - * before transport_generic_free_cmd() below. 
- */ - if (comp_err && - cmd->se_cmd.t_state == TRANSPORT_WRITE_PENDING) { - struct se_cmd *se_cmd = &cmd->se_cmd; - - target_put_sess_cmd(se_cmd->se_sess, se_cmd); - } - } isert_unmap_cmd(isert_cmd, isert_conn); transport_generic_free_cmd(&cmd->se_cmd, 0); @@ -1247,7 +1225,7 @@ isert_put_cmd(struct isert_cmd *isert_cmd, bool comp_err) case ISCSI_OP_SCSI_TMFUNC: spin_lock_bh(&conn->cmd_lock); if (!list_empty(&cmd->i_conn_node)) - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); transport_generic_free_cmd(&cmd->se_cmd, 0); @@ -1256,7 +1234,7 @@ isert_put_cmd(struct isert_cmd *isert_cmd, bool comp_err) case ISCSI_OP_NOOP_OUT: spin_lock_bh(&conn->cmd_lock); if (!list_empty(&cmd->i_conn_node)) - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); /* @@ -1293,7 +1271,7 @@ isert_unmap_tx_desc(struct iser_tx_desc *tx_desc, struct ib_device *ib_dev) static void isert_completion_put(struct iser_tx_desc *tx_desc, struct isert_cmd *isert_cmd, - struct ib_device *ib_dev, bool comp_err) + struct ib_device *ib_dev) { if (isert_cmd->sense_buf_dma != 0) { pr_debug("Calling ib_dma_unmap_single for isert_cmd->sense_buf_dma\n"); @@ -1303,7 +1281,7 @@ isert_completion_put(struct iser_tx_desc *tx_desc, struct isert_cmd *isert_cmd, } isert_unmap_tx_desc(tx_desc, ib_dev); - isert_put_cmd(isert_cmd, comp_err); + isert_put_cmd(isert_cmd); } static void @@ -1330,7 +1308,6 @@ isert_completion_rdma_read(struct iser_tx_desc *tx_desc, } cmd->write_data_done = se_cmd->data_length; - wr->send_wr_num = 0; pr_debug("isert_do_rdma_read_comp, calling target_execute_cmd\n"); spin_lock_bh(&cmd->istate_lock); @@ -1358,19 +1335,22 @@ isert_do_control_comp(struct work_struct *work) iscsit_tmr_post_handler(cmd, cmd->conn); cmd->i_state = ISTATE_SENT_STATUS; - isert_completion_put(&isert_cmd->tx_desc, isert_cmd, ib_dev, false); + isert_completion_put(&isert_cmd->tx_desc, isert_cmd, ib_dev); break; case ISTATE_SEND_REJECT: pr_debug("Got isert_do_control_comp ISTATE_SEND_REJECT: >>>\n"); atomic_dec(&isert_conn->post_send_buf_count); cmd->i_state = ISTATE_SENT_STATUS; - isert_completion_put(&isert_cmd->tx_desc, isert_cmd, ib_dev, false); + isert_completion_put(&isert_cmd->tx_desc, isert_cmd, ib_dev); break; case ISTATE_SEND_LOGOUTRSP: pr_debug("Calling iscsit_logout_post_handler >>>>>>>>>>>>>>\n"); - - atomic_dec(&isert_conn->post_send_buf_count); + /* + * Call atomic_dec(&isert_conn->post_send_buf_count) + * from isert_free_conn() + */ + isert_conn->logout_posted = true; iscsit_logout_post_handler(cmd, cmd->conn); break; default: @@ -1387,7 +1367,6 @@ isert_response_completion(struct iser_tx_desc *tx_desc, struct ib_device *ib_dev) { struct iscsi_cmd *cmd = &isert_cmd->iscsi_cmd; - struct isert_rdma_wr *wr = &isert_cmd->rdma_wr; if (cmd->i_state == ISTATE_SEND_TASKMGTRSP || cmd->i_state == ISTATE_SEND_LOGOUTRSP || @@ -1398,10 +1377,10 @@ isert_response_completion(struct iser_tx_desc *tx_desc, queue_work(isert_comp_wq, &isert_cmd->comp_work); return; } - atomic_sub(wr->send_wr_num + 1, &isert_conn->post_send_buf_count); + atomic_dec(&isert_conn->post_send_buf_count); cmd->i_state = ISTATE_SENT_STATUS; - isert_completion_put(tx_desc, isert_cmd, ib_dev, false); + isert_completion_put(tx_desc, isert_cmd, ib_dev); } static void @@ -1436,7 +1415,7 @@ isert_send_completion(struct iser_tx_desc *tx_desc, case ISER_IB_RDMA_READ: pr_debug("isert_send_completion: Got ISER_IB_RDMA_READ:\n"); - atomic_sub(wr->send_wr_num, 
&isert_conn->post_send_buf_count); + atomic_dec(&isert_conn->post_send_buf_count); isert_completion_rdma_read(tx_desc, isert_cmd); break; default: @@ -1447,40 +1426,31 @@ isert_send_completion(struct iser_tx_desc *tx_desc, } static void -isert_cq_tx_comp_err(struct iser_tx_desc *tx_desc, struct isert_conn *isert_conn) +isert_cq_comp_err(struct iser_tx_desc *tx_desc, struct isert_conn *isert_conn) { struct ib_device *ib_dev = isert_conn->conn_cm_id->device; - struct isert_cmd *isert_cmd = tx_desc->isert_cmd; - - if (!isert_cmd) - isert_unmap_tx_desc(tx_desc, ib_dev); - else - isert_completion_put(tx_desc, isert_cmd, ib_dev, true); -} -static void -isert_cq_rx_comp_err(struct isert_conn *isert_conn) -{ - struct iscsi_conn *conn = isert_conn->conn; - - if (isert_conn->post_recv_buf_count) - return; + if (tx_desc) { + struct isert_cmd *isert_cmd = tx_desc->isert_cmd; - if (conn->sess) { - target_sess_cmd_list_set_waiting(conn->sess->se_sess); - target_wait_for_sess_cmds(conn->sess->se_sess); + if (!isert_cmd) + isert_unmap_tx_desc(tx_desc, ib_dev); + else + isert_completion_put(tx_desc, isert_cmd, ib_dev); } - while (atomic_read(&isert_conn->post_send_buf_count)) - msleep(3000); - - mutex_lock(&isert_conn->conn_mutex); - isert_conn->state = ISER_CONN_DOWN; - mutex_unlock(&isert_conn->conn_mutex); + if (isert_conn->post_recv_buf_count == 0 && + atomic_read(&isert_conn->post_send_buf_count) == 0) { + pr_debug("isert_cq_comp_err >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n"); + pr_debug("Calling wake_up from isert_cq_comp_err\n"); - iscsit_cause_connection_reinstatement(isert_conn->conn, 0); + mutex_lock(&isert_conn->conn_mutex); + if (isert_conn->state != ISER_CONN_DOWN) + isert_conn->state = ISER_CONN_TERMINATING; + mutex_unlock(&isert_conn->conn_mutex); - complete(&isert_conn->conn_wait_comp_err); + wake_up(&isert_conn->conn_wait_comp_err); + } } static void @@ -1505,7 +1475,7 @@ isert_cq_tx_work(struct work_struct *work) pr_debug("TX wc.status != IB_WC_SUCCESS >>>>>>>>>>>>>>\n"); pr_debug("TX wc.status: 0x%08x\n", wc.status); atomic_dec(&isert_conn->post_send_buf_count); - isert_cq_tx_comp_err(tx_desc, isert_conn); + isert_cq_comp_err(tx_desc, isert_conn); } } @@ -1547,7 +1517,7 @@ isert_cq_rx_work(struct work_struct *work) pr_debug("RX wc.status: 0x%08x\n", wc.status); isert_conn->post_recv_buf_count--; - isert_cq_rx_comp_err(isert_conn); + isert_cq_comp_err(NULL, isert_conn); } } @@ -1857,12 +1827,12 @@ isert_put_datain(struct iscsi_conn *conn, struct iscsi_cmd *cmd) isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc); isert_init_send_wr(isert_cmd, &isert_cmd->tx_desc.send_wr); - atomic_add(wr->send_wr_num + 1, &isert_conn->post_send_buf_count); + atomic_inc(&isert_conn->post_send_buf_count); rc = ib_post_send(isert_conn->conn_qp, wr->send_wr, &wr_failed); if (rc) { pr_warn("ib_post_send() failed for IB_WR_RDMA_WRITE\n"); - atomic_sub(wr->send_wr_num + 1, &isert_conn->post_send_buf_count); + atomic_dec(&isert_conn->post_send_buf_count); } pr_debug("Posted RDMA_WRITE + Response for iSER Data READ\n"); return 1; @@ -1965,12 +1935,12 @@ isert_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd, bool recovery) data_left -= data_len; } - atomic_add(wr->send_wr_num, &isert_conn->post_send_buf_count); + atomic_inc(&isert_conn->post_send_buf_count); rc = ib_post_send(isert_conn->conn_qp, wr->send_wr, &wr_failed); if (rc) { pr_warn("ib_post_send() failed for IB_WR_RDMA_READ\n"); - atomic_sub(wr->send_wr_num, &isert_conn->post_send_buf_count); + atomic_dec(&isert_conn->post_send_buf_count); } 
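/*
 * Illustrative aside, not part of this patch: the isert hunks above trade
 * per-work-request accounting (atomic_add/atomic_sub of wr->send_wr_num)
 * for per-post accounting (atomic_inc/atomic_dec).  Whichever scheme is
 * used, the increment before ib_post_send() and the decrement on the
 * failure path must use the same quantity, or the counter leaks.  A
 * minimal sketch of that invariant, with invented names (demo_conn,
 * demo_post, demo_hw_post):
 */
#include <linux/atomic.h>

struct demo_conn {
	atomic_t post_send_buf_count;	/* outstanding send work */
};

/* stand-in for a hardware post that can fail; returns 0 on success */
static int demo_hw_post(void)
{
	return 0;
}

static int demo_post(struct demo_conn *conn, int send_wr_num)
{
	int rc;

	/* account for every WR (plus the response WR) before posting ... */
	atomic_add(send_wr_num + 1, &conn->post_send_buf_count);

	rc = demo_hw_post();
	if (rc)
		/* ... and back out exactly the same amount on failure */
		atomic_sub(send_wr_num + 1, &conn->post_send_buf_count);

	return rc;
}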
pr_debug("Posted RDMA_READ memory for ISER Data WRITE\n"); return 0; @@ -2050,7 +2020,7 @@ isert_setup_np(struct iscsi_np *np, pr_err("Unable to allocate struct isert_np\n"); return -ENOMEM; } - sema_init(&isert_np->np_sem, 0); + init_waitqueue_head(&isert_np->np_accept_wq); mutex_init(&isert_np->np_accept_mutex); INIT_LIST_HEAD(&isert_np->np_accept_list); init_completion(&isert_np->np_login_comp); @@ -2099,6 +2069,18 @@ out: } static int +isert_check_accept_queue(struct isert_np *isert_np) +{ + int empty; + + mutex_lock(&isert_np->np_accept_mutex); + empty = list_empty(&isert_np->np_accept_list); + mutex_unlock(&isert_np->np_accept_mutex); + + return empty; +} + +static int isert_rdma_accept(struct isert_conn *isert_conn) { struct rdma_cm_id *cm_id = isert_conn->conn_cm_id; @@ -2182,19 +2164,16 @@ isert_accept_np(struct iscsi_np *np, struct iscsi_conn *conn) int max_accept = 0, ret; accept_wait: - ret = down_interruptible(&isert_np->np_sem); + ret = wait_event_interruptible(isert_np->np_accept_wq, + !isert_check_accept_queue(isert_np) || + np->np_thread_state == ISCSI_NP_THREAD_RESET); if (max_accept > 5) return -ENODEV; spin_lock_bh(&np->np_thread_lock); - if (np->np_thread_state >= ISCSI_NP_THREAD_RESET) { + if (np->np_thread_state == ISCSI_NP_THREAD_RESET) { spin_unlock_bh(&np->np_thread_lock); - pr_debug("np_thread_state %d for isert_accept_np\n", - np->np_thread_state); - /** - * No point in stalling here when np_thread - * is in state RESET/SHUTDOWN/EXIT - bail - **/ + pr_err("ISCSI_NP_THREAD_RESET for isert_accept_np\n"); return -ENODEV; } spin_unlock_bh(&np->np_thread_lock); @@ -2239,38 +2218,63 @@ isert_free_np(struct iscsi_np *np) kfree(isert_np); } -static void isert_wait_conn(struct iscsi_conn *conn) +static int isert_check_state(struct isert_conn *isert_conn, int state) { - struct isert_conn *isert_conn = conn->context; + int ret; + + mutex_lock(&isert_conn->conn_mutex); + ret = (isert_conn->state == state); + mutex_unlock(&isert_conn->conn_mutex); - pr_debug("isert_wait_conn: Starting \n"); + return ret; +} +static void isert_free_conn(struct iscsi_conn *conn) +{ + struct isert_conn *isert_conn = conn->context; + + pr_debug("isert_free_conn: Starting \n"); + /* + * Decrement post_send_buf_count for special case when called + * from isert_do_control_comp() -> iscsit_logout_post_handler() + */ mutex_lock(&isert_conn->conn_mutex); - if (isert_conn->conn_cm_id) { - pr_debug("Calling rdma_disconnect from isert_wait_conn\n"); + if (isert_conn->logout_posted) + atomic_dec(&isert_conn->post_send_buf_count); + + if (isert_conn->conn_cm_id && isert_conn->state != ISER_CONN_DOWN) { + pr_debug("Calling rdma_disconnect from isert_free_conn\n"); rdma_disconnect(isert_conn->conn_cm_id); } /* * Only wait for conn_wait_comp_err if the isert_conn made it * into full feature phase.. 
*/ + if (isert_conn->state == ISER_CONN_UP) { + pr_debug("isert_free_conn: Before wait_event comp_err %d\n", + isert_conn->state); + mutex_unlock(&isert_conn->conn_mutex); + + wait_event(isert_conn->conn_wait_comp_err, + (isert_check_state(isert_conn, ISER_CONN_TERMINATING))); + + wait_event(isert_conn->conn_wait, + (isert_check_state(isert_conn, ISER_CONN_DOWN))); + + isert_put_conn(isert_conn); + return; + } if (isert_conn->state == ISER_CONN_INIT) { mutex_unlock(&isert_conn->conn_mutex); + isert_put_conn(isert_conn); return; } - if (isert_conn->state == ISER_CONN_UP) - isert_conn->state = ISER_CONN_TERMINATING; + pr_debug("isert_free_conn: wait_event conn_wait %d\n", + isert_conn->state); mutex_unlock(&isert_conn->conn_mutex); - wait_for_completion(&isert_conn->conn_wait_comp_err); - - wait_for_completion(&isert_conn->conn_wait); - isert_put_conn(isert_conn); -} - -static void isert_free_conn(struct iscsi_conn *conn) -{ - struct isert_conn *isert_conn = conn->context; + wait_event(isert_conn->conn_wait, + (isert_check_state(isert_conn, ISER_CONN_DOWN))); isert_put_conn(isert_conn); } @@ -2282,7 +2286,6 @@ static struct iscsit_transport iser_target_transport = { .iscsit_setup_np = isert_setup_np, .iscsit_accept_np = isert_accept_np, .iscsit_free_np = isert_free_np, - .iscsit_wait_conn = isert_wait_conn, .iscsit_free_conn = isert_free_conn, .iscsit_alloc_cmd = isert_alloc_cmd, .iscsit_get_login_rx = isert_get_login_rx, @@ -2333,7 +2336,6 @@ destroy_rx_wq: static void __exit isert_exit(void) { - flush_scheduled_work(); kmem_cache_destroy(isert_cmd_cache); destroy_workqueue(isert_comp_wq); destroy_workqueue(isert_rx_wq); diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h index 032f65abee3..5795c82a230 100644 --- a/drivers/infiniband/ulp/isert/ib_isert.h +++ b/drivers/infiniband/ulp/isert/ib_isert.h @@ -78,6 +78,7 @@ struct isert_device; struct isert_conn { enum iser_conn_state state; + bool logout_posted; int post_recv_buf_count; atomic_t post_send_buf_count; u32 responder_resources; @@ -102,10 +103,9 @@ struct isert_conn { struct isert_device *conn_device; struct work_struct conn_logout_work; struct mutex conn_mutex; - struct completion conn_wait; - struct completion conn_wait_comp_err; + wait_queue_head_t conn_wait; + wait_queue_head_t conn_wait_comp_err; struct kref conn_kref; - bool disconnect; }; #define ISERT_MAX_CQ 64 @@ -131,7 +131,7 @@ struct isert_device { }; struct isert_np { - struct semaphore np_sem; + wait_queue_head_t np_accept_wq; struct rdma_cm_id *np_cm_id; struct mutex np_accept_mutex; struct list_head np_accept_list; diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 35dd5ff662f..793ac5dcee7 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -93,7 +93,6 @@ static void srp_send_completion(struct ib_cq *cq, void *target_ptr); static int srp_cm_handler(struct ib_cm_id *cm_id, struct ib_cm_event *event); static struct scsi_transport_template *ib_srp_transport_template; -static struct workqueue_struct *srp_remove_wq; static struct ib_client srp_client = { .name = "srp", @@ -457,7 +456,7 @@ static bool srp_queue_remove_work(struct srp_target_port *target) spin_unlock_irq(&target->lock); if (changed) - queue_work(srp_remove_wq, &target->remove_work); + queue_work(system_long_wq, &target->remove_work); return changed; } @@ -1410,12 +1409,6 @@ err_unmap: err_iu: srp_put_tx_iu(target, iu, SRP_IU_CMD); - /* - * Avoid that the loops that iterate over the 
request ring can - * encounter a dangling SCSI command pointer. - */ - req->scmnd = NULL; - spin_lock_irqsave(&target->lock, flags); list_add(&req->list, &target->free_reqs); @@ -2531,10 +2524,9 @@ static void srp_remove_one(struct ib_device *device) spin_unlock(&host->target_lock); /* - * Wait for tl_err and target port removal tasks. + * Wait for target port removal tasks. */ flush_workqueue(system_long_wq); - flush_workqueue(srp_remove_wq); kfree(host); } @@ -2579,22 +2571,16 @@ static int __init srp_init_module(void) indirect_sg_entries = cmd_sg_entries; } - srp_remove_wq = create_workqueue("srp_remove"); - if (IS_ERR(srp_remove_wq)) { - ret = PTR_ERR(srp_remove_wq); - goto out; - } - - ret = -ENOMEM; ib_srp_transport_template = srp_attach_transport(&ib_srp_transport_functions); if (!ib_srp_transport_template) - goto destroy_wq; + return -ENOMEM; ret = class_register(&srp_class); if (ret) { pr_err("couldn't register class infiniband_srp\n"); - goto release_tr; + srp_release_transport(ib_srp_transport_template); + return ret; } ib_sa_register_client(&srp_sa_client); @@ -2602,22 +2588,13 @@ static int __init srp_init_module(void) ret = ib_register_client(&srp_client); if (ret) { pr_err("couldn't register IB client\n"); - goto unreg_sa; + srp_release_transport(ib_srp_transport_template); + ib_sa_unregister_client(&srp_sa_client); + class_unregister(&srp_class); + return ret; } -out: - return ret; - -unreg_sa: - ib_sa_unregister_client(&srp_sa_client); - class_unregister(&srp_class); - -release_tr: - srp_release_transport(ib_srp_transport_template); - -destroy_wq: - destroy_workqueue(srp_remove_wq); - goto out; + return 0; } static void __exit srp_cleanup_module(void) @@ -2626,7 +2603,6 @@ static void __exit srp_cleanup_module(void) ib_sa_unregister_client(&srp_sa_client); class_unregister(&srp_class); srp_release_transport(ib_srp_transport_template); - destroy_workqueue(srp_remove_wq); } module_init(srp_init_module); diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c index 64953dfa9d8..6c66a728a37 100644 --- a/drivers/infiniband/ulp/srpt/ib_srpt.c +++ b/drivers/infiniband/ulp/srpt/ib_srpt.c @@ -1078,7 +1078,6 @@ static void srpt_unmap_sg_to_ib_sge(struct srpt_rdma_ch *ch, static int srpt_map_sg_to_ib_sge(struct srpt_rdma_ch *ch, struct srpt_send_ioctx *ioctx) { - struct ib_device *dev = ch->sport->sdev->device; struct se_cmd *cmd; struct scatterlist *sg, *sg_orig; int sg_cnt; @@ -1125,7 +1124,7 @@ static int srpt_map_sg_to_ib_sge(struct srpt_rdma_ch *ch, db = ioctx->rbufs; tsize = cmd->data_length; - dma_len = ib_sg_dma_len(dev, &sg[0]); + dma_len = sg_dma_len(&sg[0]); riu = ioctx->rdma_ius; /* @@ -1156,8 +1155,7 @@ static int srpt_map_sg_to_ib_sge(struct srpt_rdma_ch *ch, ++j; if (j < count) { sg = sg_next(sg); - dma_len = ib_sg_dma_len( - dev, sg); + dma_len = sg_dma_len(sg); } } } else { @@ -1194,8 +1192,8 @@ static int srpt_map_sg_to_ib_sge(struct srpt_rdma_ch *ch, tsize = cmd->data_length; riu = ioctx->rdma_ius; sg = sg_orig; - dma_len = ib_sg_dma_len(dev, &sg[0]); - dma_addr = ib_sg_dma_address(dev, &sg[0]); + dma_len = sg_dma_len(&sg[0]); + dma_addr = sg_dma_address(&sg[0]); /* this second loop is really mapped sg_addres to rdma_iu->ib_sge */ for (i = 0, j = 0; @@ -1218,10 +1216,8 @@ static int srpt_map_sg_to_ib_sge(struct srpt_rdma_ch *ch, ++j; if (j < count) { sg = sg_next(sg); - dma_len = ib_sg_dma_len( - dev, sg); - dma_addr = ib_sg_dma_address( - dev, sg); + dma_len = sg_dma_len(sg); + dma_addr = sg_dma_address(sg); } } } else { diff 
--git a/drivers/input/input.c b/drivers/input/input.c index 2a7caab5431..f2bec0536b2 100644 --- a/drivers/input/input.c +++ b/drivers/input/input.c @@ -257,10 +257,9 @@ static int input_handle_abs_event(struct input_dev *dev, } static int input_get_disposition(struct input_dev *dev, - unsigned int type, unsigned int code, int *pval) + unsigned int type, unsigned int code, int value) { int disposition = INPUT_IGNORE_EVENT; - int value = *pval; switch (type) { @@ -358,7 +357,6 @@ static int input_get_disposition(struct input_dev *dev, break; } - *pval = value; return disposition; } @@ -367,7 +365,7 @@ static void input_handle_event(struct input_dev *dev, { int disposition; - disposition = input_get_disposition(dev, type, code, &value); + disposition = input_get_disposition(dev, type, code, value); if ((disposition & INPUT_PASS_TO_DEVICE) && dev->event) dev->event(dev, type, code, value); diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c index 6f5d7956913..2626773ff29 100644 --- a/drivers/input/keyboard/atkbd.c +++ b/drivers/input/keyboard/atkbd.c @@ -243,12 +243,6 @@ static void (*atkbd_platform_fixup)(struct atkbd *, const void *data); static void *atkbd_platform_fixup_data; static unsigned int (*atkbd_platform_scancode_fixup)(struct atkbd *, unsigned int); -/* - * Certain keyboards to not like ATKBD_CMD_RESET_DIS and stop responding - * to many commands until full reset (ATKBD_CMD_RESET_BAT) is performed. - */ -static bool atkbd_skip_deactivate; - static ssize_t atkbd_attr_show_helper(struct device *dev, char *buf, ssize_t (*handler)(struct atkbd *, char *)); static ssize_t atkbd_attr_set_helper(struct device *dev, const char *buf, size_t count, @@ -774,8 +768,7 @@ static int atkbd_probe(struct atkbd *atkbd) * Make sure nothing is coming from the keyboard and disturbs our * internal state. 
*/ - if (!atkbd_skip_deactivate) - atkbd_deactivate(atkbd); + atkbd_deactivate(atkbd); return 0; } @@ -1645,12 +1638,6 @@ static int __init atkbd_setup_scancode_fixup(const struct dmi_system_id *id) return 1; } -static int __init atkbd_deactivate_fixup(const struct dmi_system_id *id) -{ - atkbd_skip_deactivate = true; - return 1; -} - static const struct dmi_system_id atkbd_dmi_quirk_table[] __initconst = { { .matches = { @@ -1788,12 +1775,6 @@ static const struct dmi_system_id atkbd_dmi_quirk_table[] __initconst = { .callback = atkbd_setup_scancode_fixup, .driver_data = atkbd_oqo_01plus_scancode_fixup, }, - { - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"), - }, - .callback = atkbd_deactivate_fixup, - }, { } }; diff --git a/drivers/input/mouse/cypress_ps2.c b/drivers/input/mouse/cypress_ps2.c index 0aaea7ad6ce..888a81a7ea3 100644 --- a/drivers/input/mouse/cypress_ps2.c +++ b/drivers/input/mouse/cypress_ps2.c @@ -410,6 +410,7 @@ static int cypress_set_input_params(struct input_dev *input, __clear_bit(REL_X, input->relbit); __clear_bit(REL_Y, input->relbit); + __set_bit(INPUT_PROP_BUTTONPAD, input->propbit); __set_bit(EV_KEY, input->evbit); __set_bit(BTN_LEFT, input->keybit); __set_bit(BTN_RIGHT, input->keybit); diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c index 85e75239c81..1e8e42fb03a 100644 --- a/drivers/input/mouse/elantech.c +++ b/drivers/input/mouse/elantech.c @@ -11,7 +11,6 @@ */ #include <linux/delay.h> -#include <linux/dmi.h> #include <linux/slab.h> #include <linux/module.h> #include <linux/input.h> @@ -473,15 +472,8 @@ static void elantech_report_absolute_v3(struct psmouse *psmouse, input_report_key(dev, BTN_TOOL_FINGER, fingers == 1); input_report_key(dev, BTN_TOOL_DOUBLETAP, fingers == 2); input_report_key(dev, BTN_TOOL_TRIPLETAP, fingers == 3); - - /* For clickpads map both buttons to BTN_LEFT */ - if (etd->fw_version & 0x001000) { - input_report_key(dev, BTN_LEFT, packet[0] & 0x03); - } else { - input_report_key(dev, BTN_LEFT, packet[0] & 0x01); - input_report_key(dev, BTN_RIGHT, packet[0] & 0x02); - } - + input_report_key(dev, BTN_LEFT, packet[0] & 0x01); + input_report_key(dev, BTN_RIGHT, packet[0] & 0x02); input_report_abs(dev, ABS_PRESSURE, pres); input_report_abs(dev, ABS_TOOL_WIDTH, width); @@ -491,17 +483,9 @@ static void elantech_report_absolute_v3(struct psmouse *psmouse, static void elantech_input_sync_v4(struct psmouse *psmouse) { struct input_dev *dev = psmouse->dev; - struct elantech_data *etd = psmouse->private; unsigned char *packet = psmouse->packet; - /* For clickpads map both buttons to BTN_LEFT */ - if (etd->fw_version & 0x001000) { - input_report_key(dev, BTN_LEFT, packet[0] & 0x03); - } else { - input_report_key(dev, BTN_LEFT, packet[0] & 0x01); - input_report_key(dev, BTN_RIGHT, packet[0] & 0x02); - } - + input_report_key(dev, BTN_LEFT, packet[0] & 0x01); input_mt_report_pointer_emulation(dev, true); input_sync(dev); } @@ -816,11 +800,7 @@ static int elantech_set_absolute_mode(struct psmouse *psmouse) break; case 3: - if (etd->set_hw_resolution) - etd->reg_10 = 0x0b; - else - etd->reg_10 = 0x01; - + etd->reg_10 = 0x0b; if (elantech_write_reg(psmouse, 0x10, etd->reg_10)) rc = -1; @@ -974,44 +954,6 @@ static int elantech_get_resolution_v4(struct psmouse *psmouse, } /* - * Advertise INPUT_PROP_BUTTONPAD for clickpads. 
The testing of bit 12 in - * fw_version for this is based on the following fw_version & caps table: - * - * Laptop-model: fw_version: caps: buttons: - * Acer S3 0x461f00 10, 13, 0e clickpad - * Acer S7-392 0x581f01 50, 17, 0d clickpad - * Acer V5-131 0x461f02 01, 16, 0c clickpad - * Acer V5-551 0x461f00 ? clickpad - * Asus K53SV 0x450f01 78, 15, 0c 2 hw buttons - * Asus G46VW 0x460f02 00, 18, 0c 2 hw buttons - * Asus G750JX 0x360f00 00, 16, 0c 2 hw buttons - * Asus UX31 0x361f00 20, 15, 0e clickpad - * Asus UX32VD 0x361f02 00, 15, 0e clickpad - * Avatar AVIU-145A2 0x361f00 ? clickpad - * Gigabyte U2442 0x450f01 58, 17, 0c 2 hw buttons - * Lenovo L430 0x350f02 b9, 15, 0c 2 hw buttons (*) - * Samsung NF210 0x150b00 78, 14, 0a 2 hw buttons - * Samsung NP770Z5E 0x575f01 10, 15, 0f clickpad - * Samsung NP700Z5B 0x361f06 21, 15, 0f clickpad - * Samsung NP900X3E-A02 0x575f03 ? clickpad - * Samsung NP-QX410 0x851b00 19, 14, 0c clickpad - * Samsung RC512 0x450f00 08, 15, 0c 2 hw buttons - * Samsung RF710 0x450f00 ? 2 hw buttons - * System76 Pangolin 0x250f01 ? 2 hw buttons - * (*) + 3 trackpoint buttons - */ -static void elantech_set_buttonpad_prop(struct psmouse *psmouse) -{ - struct input_dev *dev = psmouse->dev; - struct elantech_data *etd = psmouse->private; - - if (etd->fw_version & 0x001000) { - __set_bit(INPUT_PROP_BUTTONPAD, dev->propbit); - __clear_bit(BTN_RIGHT, dev->keybit); - } -} - -/* * Set the appropriate event bits for the input subsystem */ static int elantech_set_input_params(struct psmouse *psmouse) @@ -1054,8 +996,6 @@ static int elantech_set_input_params(struct psmouse *psmouse) __set_bit(INPUT_PROP_SEMI_MT, dev->propbit); /* fall through */ case 3: - if (etd->hw_version == 3) - elantech_set_buttonpad_prop(psmouse); input_set_abs_params(dev, ABS_X, x_min, x_max, 0, 0); input_set_abs_params(dev, ABS_Y, y_min, y_max, 0, 0); if (etd->reports_pressure) { @@ -1077,7 +1017,9 @@ static int elantech_set_input_params(struct psmouse *psmouse) */ psmouse_warn(psmouse, "couldn't query resolution data.\n"); } - elantech_set_buttonpad_prop(psmouse); + /* v4 is clickpad, with only one button. */ + __set_bit(INPUT_PROP_BUTTONPAD, dev->propbit); + __clear_bit(BTN_RIGHT, dev->keybit); __set_bit(BTN_TOOL_QUADTAP, dev->keybit); /* For X to recognize me as touchpad. */ input_set_abs_params(dev, ABS_X, x_min, x_max, 0, 0); @@ -1223,13 +1165,6 @@ static bool elantech_is_signature_valid(const unsigned char *param) if (param[1] == 0) return true; - /* - * Some models have a revision higher then 20. Meaning param[2] may - * be 10 or 20, skip the rates check for these. - */ - if (param[0] == 0x46 && (param[1] & 0xef) == 0x0f && param[2] < 40) - return true; - for (i = 0; i < ARRAY_SIZE(rates); i++) if (param[2] == rates[i]) return false; @@ -1327,23 +1262,6 @@ static int elantech_reconnect(struct psmouse *psmouse) } /* - * Some hw_version 3 models go into error state when we try to set - * bit 3 and/or bit 1 of r10. - */ -static const struct dmi_system_id no_hw_res_dmi_table[] = { -#if defined(CONFIG_DMI) && defined(CONFIG_X86) - { - /* Gigabyte U2442 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"), - DMI_MATCH(DMI_PRODUCT_NAME, "U2442"), - }, - }, -#endif - { } -}; - -/* * determine hardware version and set some properties according to it. */ static int elantech_set_properties(struct elantech_data *etd) @@ -1394,9 +1312,6 @@ static int elantech_set_properties(struct elantech_data *etd) etd->reports_pressure = true; } - /* Enable real hardware resolution on hw_version 3 ? 
*/ - etd->set_hw_resolution = !dmi_check_system(no_hw_res_dmi_table); - return 0; } diff --git a/drivers/input/mouse/elantech.h b/drivers/input/mouse/elantech.h index c1c15ab6872..46db3be45ac 100644 --- a/drivers/input/mouse/elantech.h +++ b/drivers/input/mouse/elantech.h @@ -129,7 +129,6 @@ struct elantech_data { bool paritycheck; bool jumpy_cursor; bool reports_pressure; - bool set_hw_resolution; unsigned char hw_version; unsigned int fw_version; unsigned int single_finger_reports; diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c index d1c47d135c0..b2420ae19e1 100644 --- a/drivers/input/mouse/synaptics.c +++ b/drivers/input/mouse/synaptics.c @@ -265,22 +265,11 @@ static int synaptics_identify(struct psmouse *psmouse) * Read touchpad resolution and maximum reported coordinates * Resolution is left zero if touchpad does not support the query */ - -static const int *quirk_min_max; - static int synaptics_resolution(struct psmouse *psmouse) { struct synaptics_data *priv = psmouse->private; unsigned char resp[3]; - if (quirk_min_max) { - priv->x_min = quirk_min_max[0]; - priv->x_max = quirk_min_max[1]; - priv->y_min = quirk_min_max[2]; - priv->y_max = quirk_min_max[3]; - return 0; - } - if (SYN_ID_MAJOR(priv->identity) < 4) return 0; @@ -549,61 +538,10 @@ static int synaptics_parse_hw_state(const unsigned char buf[], ((buf[0] & 0x04) >> 1) | ((buf[3] & 0x04) >> 2)); - if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) || - SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) && - hw->w == 2) { - synaptics_parse_agm(buf, priv, hw); - return 1; - } - - hw->x = (((buf[3] & 0x10) << 8) | - ((buf[1] & 0x0f) << 8) | - buf[4]); - hw->y = (((buf[3] & 0x20) << 7) | - ((buf[1] & 0xf0) << 4) | - buf[5]); - hw->z = buf[2]; - hw->left = (buf[0] & 0x01) ? 1 : 0; hw->right = (buf[0] & 0x02) ? 1 : 0; - if (SYN_CAP_FORCEPAD(priv->ext_cap_0c)) { - /* - * ForcePads, like Clickpads, use middle button - * bits to report primary button clicks. - * Unfortunately they report primary button not - * only when user presses on the pad above certain - * threshold, but also when there are more than one - * finger on the touchpad, which interferes with - * out multi-finger gestures. - */ - if (hw->z == 0) { - /* No contacts */ - priv->press = priv->report_press = false; - } else if (hw->w >= 4 && ((buf[0] ^ buf[3]) & 0x01)) { - /* - * Single-finger touch with pressure above - * the threshold. If pressure stays long - * enough, we'll start reporting primary - * button. We rely on the device continuing - * sending data even if finger does not - * move. - */ - if (!priv->press) { - priv->press_start = jiffies; - priv->press = true; - } else if (time_after(jiffies, - priv->press_start + - msecs_to_jiffies(50))) { - priv->report_press = true; - } - } else { - priv->press = false; - } - - hw->left = priv->report_press; - - } else if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) { + if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) { /* * Clickpad's button is transmitted as middle button, * however, since it is primary button, we will report @@ -622,6 +560,21 @@ static int synaptics_parse_hw_state(const unsigned char buf[], hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 
1 : 0; } + if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) || + SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) && + hw->w == 2) { + synaptics_parse_agm(buf, priv, hw); + return 1; + } + + hw->x = (((buf[3] & 0x10) << 8) | + ((buf[1] & 0x0f) << 8) | + buf[4]); + hw->y = (((buf[3] & 0x20) << 7) | + ((buf[1] & 0xf0) << 4) | + buf[5]); + hw->z = buf[2]; + if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) && ((buf[0] ^ buf[3]) & 0x02)) { switch (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) & ~0x01) { @@ -1532,112 +1485,10 @@ static const struct dmi_system_id __initconst olpc_dmi_table[] = { { } }; -static const struct dmi_system_id min_max_dmi_table[] __initconst = { -#if defined(CONFIG_DMI) - { - /* Lenovo ThinkPad Helix */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad Helix"), - }, - .driver_data = (int []){1024, 5052, 2258, 4832}, - }, - { - /* Lenovo ThinkPad X240 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X240"), - }, - .driver_data = (int []){1232, 5710, 1156, 4696}, - }, - { - /* Lenovo ThinkPad Edge E431 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad Edge E431"), - }, - .driver_data = (int []){1024, 5022, 2508, 4832}, - }, - { - /* Lenovo ThinkPad T431s */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T431"), - }, - .driver_data = (int []){1024, 5112, 2024, 4832}, - }, - { - /* Lenovo ThinkPad T440s */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T440"), - }, - .driver_data = (int []){1024, 5112, 2024, 4832}, - }, - { - /* Lenovo ThinkPad L440 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L440"), - }, - .driver_data = (int []){1024, 5112, 2024, 4832}, - }, - { - /* Lenovo ThinkPad T540p */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T540"), - }, - .driver_data = (int []){1024, 5112, 2024, 4832}, - }, - { - /* Lenovo ThinkPad L540 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L540"), - }, - .driver_data = (int []){1024, 5112, 2024, 4832}, - }, - { - /* Lenovo ThinkPad W540 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W540"), - }, - .driver_data = (int []){1024, 5112, 2024, 4832}, - }, - { - /* Lenovo Yoga S1 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, - "ThinkPad S1 Yoga"), - }, - .driver_data = (int []){1232, 5710, 1156, 4696}, - }, - { - /* Lenovo ThinkPad X1 Carbon Haswell (3rd generation) */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), - DMI_MATCH(DMI_PRODUCT_VERSION, - "ThinkPad X1 Carbon 2nd"), - }, - .driver_data = (int []){1024, 5112, 2024, 4832}, - }, -#endif - { } -}; - void __init synaptics_module_init(void) { - const struct dmi_system_id *min_max_dmi; - impaired_toshiba_kbc = dmi_check_system(toshiba_dmi_table); broken_olpc_ec = dmi_check_system(olpc_dmi_table); - - min_max_dmi = dmi_first_match(min_max_dmi_table); - if (min_max_dmi) - quirk_min_max = min_max_dmi->driver_data; } static int __synaptics_init(struct psmouse *psmouse, bool absolute_mode) diff --git a/drivers/input/mouse/synaptics.h b/drivers/input/mouse/synaptics.h index fb2e076738a..e594af0b264 100644 --- a/drivers/input/mouse/synaptics.h +++ b/drivers/input/mouse/synaptics.h @@ -78,11 +78,6 @@ * 2 0x08 
image sensor image sensor tracks 5 fingers, but only * reports 2. * 2 0x20 report min query 0x0f gives min coord reported - * 2 0x80 forcepad forcepad is a variant of clickpad that - * does not have physical buttons but rather - * uses pressure above certain threshold to - * report primary clicks. Forcepads also have - * clickpad bit set. */ #define SYN_CAP_CLICKPAD(ex0c) ((ex0c) & 0x100000) /* 1-button ClickPad */ #define SYN_CAP_CLICKPAD2BTN(ex0c) ((ex0c) & 0x000100) /* 2-button ClickPad */ @@ -91,7 +86,6 @@ #define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & 0x080000) #define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400) #define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800) -#define SYN_CAP_FORCEPAD(ex0c) ((ex0c) & 0x008000) /* synaptics modes query bits */ #define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7)) @@ -183,11 +177,6 @@ struct synaptics_data { */ struct synaptics_hw_state agm; bool agm_pending; /* new AGM packet received */ - - /* ForcePad handling */ - unsigned long press_start; - bool press; - bool report_press; }; void synaptics_module_init(void); diff --git a/drivers/input/mousedev.c b/drivers/input/mousedev.c index b604564dec5..4c842c320c2 100644 --- a/drivers/input/mousedev.c +++ b/drivers/input/mousedev.c @@ -67,6 +67,7 @@ struct mousedev { struct device dev; struct cdev cdev; bool exist; + bool is_mixdev; struct list_head mixdev_node; bool opened_by_mixdev; @@ -76,9 +77,6 @@ struct mousedev { int old_x[4], old_y[4]; int frac_dx, frac_dy; unsigned long touch; - - int (*open_device)(struct mousedev *mousedev); - void (*close_device)(struct mousedev *mousedev); }; enum mousedev_emul { @@ -118,6 +116,9 @@ static unsigned char mousedev_imex_seq[] = { 0xf3, 200, 0xf3, 200, 0xf3, 80 }; static struct mousedev *mousedev_mix; static LIST_HEAD(mousedev_mix_list); +static void mixdev_open_devices(void); +static void mixdev_close_devices(void); + #define fx(i) (mousedev->old_x[(mousedev->pkt_count - (i)) & 03]) #define fy(i) (mousedev->old_y[(mousedev->pkt_count - (i)) & 03]) @@ -427,7 +428,9 @@ static int mousedev_open_device(struct mousedev *mousedev) if (retval) return retval; - if (!mousedev->exist) + if (mousedev->is_mixdev) + mixdev_open_devices(); + else if (!mousedev->exist) retval = -ENODEV; else if (!mousedev->open++) { retval = input_open_device(&mousedev->handle); @@ -443,7 +446,9 @@ static void mousedev_close_device(struct mousedev *mousedev) { mutex_lock(&mousedev->mutex); - if (mousedev->exist && !--mousedev->open) + if (mousedev->is_mixdev) + mixdev_close_devices(); + else if (mousedev->exist && !--mousedev->open) input_close_device(&mousedev->handle); mutex_unlock(&mousedev->mutex); @@ -454,29 +459,21 @@ static void mousedev_close_device(struct mousedev *mousedev) * stream. Note that this function is called with mousedev_mix->mutex * held. 
*/ -static int mixdev_open_devices(struct mousedev *mixdev) +static void mixdev_open_devices(void) { - int error; - - error = mutex_lock_interruptible(&mixdev->mutex); - if (error) - return error; + struct mousedev *mousedev; - if (!mixdev->open++) { - struct mousedev *mousedev; + if (mousedev_mix->open++) + return; - list_for_each_entry(mousedev, &mousedev_mix_list, mixdev_node) { - if (!mousedev->opened_by_mixdev) { - if (mousedev_open_device(mousedev)) - continue; + list_for_each_entry(mousedev, &mousedev_mix_list, mixdev_node) { + if (!mousedev->opened_by_mixdev) { + if (mousedev_open_device(mousedev)) + continue; - mousedev->opened_by_mixdev = true; - } + mousedev->opened_by_mixdev = true; } } - - mutex_unlock(&mixdev->mutex); - return 0; } /* @@ -484,22 +481,19 @@ static int mixdev_open_devices(struct mousedev *mixdev) * device. Note that this function is called with mousedev_mix->mutex * held. */ -static void mixdev_close_devices(struct mousedev *mixdev) +static void mixdev_close_devices(void) { - mutex_lock(&mixdev->mutex); + struct mousedev *mousedev; - if (!--mixdev->open) { - struct mousedev *mousedev; + if (--mousedev_mix->open) + return; - list_for_each_entry(mousedev, &mousedev_mix_list, mixdev_node) { - if (mousedev->opened_by_mixdev) { - mousedev->opened_by_mixdev = false; - mousedev_close_device(mousedev); - } + list_for_each_entry(mousedev, &mousedev_mix_list, mixdev_node) { + if (mousedev->opened_by_mixdev) { + mousedev->opened_by_mixdev = false; + mousedev_close_device(mousedev); } } - - mutex_unlock(&mixdev->mutex); } @@ -528,7 +522,7 @@ static int mousedev_release(struct inode *inode, struct file *file) mousedev_detach_client(mousedev, client); kfree(client); - mousedev->close_device(mousedev); + mousedev_close_device(mousedev); return 0; } @@ -556,7 +550,7 @@ static int mousedev_open(struct inode *inode, struct file *file) client->mousedev = mousedev; mousedev_attach_client(mousedev, client); - error = mousedev->open_device(mousedev); + error = mousedev_open_device(mousedev); if (error) goto err_free_client; @@ -867,21 +861,16 @@ static struct mousedev *mousedev_create(struct input_dev *dev, if (mixdev) { dev_set_name(&mousedev->dev, "mice"); - - mousedev->open_device = mixdev_open_devices; - mousedev->close_device = mixdev_close_devices; } else { int dev_no = minor; /* Normalize device number if it falls into legacy range */ if (dev_no < MOUSEDEV_MINOR_BASE + MOUSEDEV_MINORS) dev_no -= MOUSEDEV_MINOR_BASE; dev_set_name(&mousedev->dev, "mouse%d", dev_no); - - mousedev->open_device = mousedev_open_device; - mousedev->close_device = mousedev_close_device; } mousedev->exist = true; + mousedev->is_mixdev = mixdev; mousedev->handle.dev = input_get_device(dev); mousedev->handle.name = dev_name(&mousedev->dev); mousedev->handle.handler = handler; @@ -930,7 +919,7 @@ static void mousedev_destroy(struct mousedev *mousedev) device_del(&mousedev->dev); mousedev_cleanup(mousedev); input_free_minor(MINOR(mousedev->dev.devt)); - if (mousedev != mousedev_mix) + if (!mousedev->is_mixdev) input_unregister_handle(&mousedev->handle); put_device(&mousedev->dev); } diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h index ce715b1bee4..0ec9abbe31f 100644 --- a/drivers/input/serio/i8042-x86ia64io.h +++ b/drivers/input/serio/i8042-x86ia64io.h @@ -101,12 +101,6 @@ static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = { }, { .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), - DMI_MATCH(DMI_PRODUCT_NAME, "X750LN"), - }, 
- }, - { - .matches = { DMI_MATCH(DMI_SYS_VENDOR, "Compaq"), DMI_MATCH(DMI_PRODUCT_NAME , "ProLiant"), DMI_MATCH(DMI_PRODUCT_VERSION, "8500"), @@ -464,13 +458,6 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = { DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"), }, }, - { - /* Avatar AVIU-145A6 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "Intel"), - DMI_MATCH(DMI_PRODUCT_NAME, "IC4I"), - }, - }, { } }; @@ -614,30 +601,6 @@ static const struct dmi_system_id __initconst i8042_dmi_notimeout_table[] = { DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"), }, }, - { - /* Fujitsu A544 laptop */ - /* https://bugzilla.redhat.com/show_bug.cgi?id=1111138 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), - DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK A544"), - }, - }, - { - /* Fujitsu AH544 laptop */ - /* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), - DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"), - }, - }, - { - /* Fujitsu U574 laptop */ - /* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */ - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), - DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U574"), - }, - }, { } }; diff --git a/drivers/input/serio/serport.c b/drivers/input/serio/serport.c index e4ecf3b6479..8755f5f3ad3 100644 --- a/drivers/input/serio/serport.c +++ b/drivers/input/serio/serport.c @@ -21,7 +21,6 @@ #include <linux/init.h> #include <linux/serio.h> #include <linux/tty.h> -#include <linux/compat.h> MODULE_AUTHOR("Vojtech Pavlik <vojtech@ucw.cz>"); MODULE_DESCRIPTION("Input device TTY line discipline"); @@ -197,55 +196,28 @@ static ssize_t serport_ldisc_read(struct tty_struct * tty, struct file * file, u return 0; } -static void serport_set_type(struct tty_struct *tty, unsigned long type) -{ - struct serport *serport = tty->disc_data; - - serport->id.proto = type & 0x000000ff; - serport->id.id = (type & 0x0000ff00) >> 8; - serport->id.extra = (type & 0x00ff0000) >> 16; -} - /* * serport_ldisc_ioctl() allows to set the port protocol, and device ID */ -static int serport_ldisc_ioctl(struct tty_struct *tty, struct file *file, - unsigned int cmd, unsigned long arg) +static int serport_ldisc_ioctl(struct tty_struct * tty, struct file * file, unsigned int cmd, unsigned long arg) { - if (cmd == SPIOCSTYPE) { - unsigned long type; + struct serport *serport = (struct serport*) tty->disc_data; + unsigned long type; + if (cmd == SPIOCSTYPE) { if (get_user(type, (unsigned long __user *) arg)) return -EFAULT; - serport_set_type(tty, type); - return 0; - } - - return -EINVAL; -} - -#ifdef CONFIG_COMPAT -#define COMPAT_SPIOCSTYPE _IOW('q', 0x01, compat_ulong_t) -static long serport_ldisc_compat_ioctl(struct tty_struct *tty, - struct file *file, - unsigned int cmd, unsigned long arg) -{ - if (cmd == COMPAT_SPIOCSTYPE) { - void __user *uarg = compat_ptr(arg); - compat_ulong_t compat_type; - - if (get_user(compat_type, (compat_ulong_t __user *)uarg)) - return -EFAULT; + serport->id.proto = type & 0x000000ff; + serport->id.id = (type & 0x0000ff00) >> 8; + serport->id.extra = (type & 0x00ff0000) >> 16; - serport_set_type(tty, compat_type); return 0; } return -EINVAL; } -#endif static void serport_ldisc_write_wakeup(struct tty_struct * tty) { @@ -269,9 +241,6 @@ static struct tty_ldisc_ops serport_ldisc = { .close = serport_ldisc_close, .read = serport_ldisc_read, .ioctl = serport_ldisc_ioctl, -#ifdef CONFIG_COMPAT - .compat_ioctl = serport_ldisc_compat_ioctl, -#endif .receive_buf = serport_ldisc_receive, 
.write_wakeup = serport_ldisc_write_wakeup }; diff --git a/drivers/input/tablet/wacom_sys.c b/drivers/input/tablet/wacom_sys.c index 3d838c0b495..aaf23aeae2e 100644 --- a/drivers/input/tablet/wacom_sys.c +++ b/drivers/input/tablet/wacom_sys.c @@ -339,7 +339,7 @@ static int wacom_parse_hid(struct usb_interface *intf, struct usb_device *dev = interface_to_usbdev(intf); char limit = 0; /* result has to be defined as int for some devices */ - int result = 0, touch_max = 0; + int result = 0; int i = 0, usage = WCM_UNDEFINED, finger = 0, pen = 0; unsigned char *report; @@ -386,8 +386,7 @@ static int wacom_parse_hid(struct usb_interface *intf, if (usage == WCM_DESKTOP) { if (finger) { features->device_type = BTN_TOOL_FINGER; - /* touch device at least supports one touch point */ - touch_max = 1; + switch (features->type) { case TABLETPC2FG: features->pktlen = WACOM_PKGLEN_TPC2FG; @@ -540,8 +539,6 @@ static int wacom_parse_hid(struct usb_interface *intf, } out: - if (!features->touch_max && touch_max) - features->touch_max = touch_max; result = 0; kfree(report); return result; diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c index dfb401cba73..a3c338942f1 100644 --- a/drivers/iommu/amd_iommu.c +++ b/drivers/iommu/amd_iommu.c @@ -3187,16 +3187,14 @@ free_domains: static void cleanup_domain(struct protection_domain *domain) { - struct iommu_dev_data *entry; + struct iommu_dev_data *dev_data, *next; unsigned long flags; write_lock_irqsave(&amd_iommu_devtable_lock, flags); - while (!list_empty(&domain->dev_list)) { - entry = list_first_entry(&domain->dev_list, - struct iommu_dev_data, list); - __detach_device(entry); - atomic_set(&entry->bind, 0); + list_for_each_entry_safe(dev_data, next, &domain->dev_list, list) { + __detach_device(dev_data); + atomic_set(&dev_data->bind, 0); } write_unlock_irqrestore(&amd_iommu_devtable_lock, flags); @@ -3961,7 +3959,7 @@ static struct irq_remap_table *get_irq_table(u16 devid, bool ioapic) iommu_flush_dte(iommu, devid); if (devid != alias) { irq_lookup_table[alias] = table; - set_dte_irq_entry(alias, table); + set_dte_irq_entry(devid, table); iommu_flush_dte(iommu, alias); } diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c index 33a19882237..c1e4bcdb076 100644 --- a/drivers/irqchip/irq-gic.c +++ b/drivers/irqchip/irq-gic.c @@ -43,7 +43,6 @@ #include <linux/irqchip/chained_irq.h> #include <linux/irqchip/arm-gic.h> -#include <asm/cputype.h> #include <asm/irq.h> #include <asm/exception.h> #include <asm/smp_plat.h> @@ -248,17 +247,13 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val, bool force) { void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3); - unsigned int cpu, shift = (gic_irq(d) % 4) * 8; + unsigned int shift = (gic_irq(d) % 4) * 8; + unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask); u32 val, mask, bit; #ifdef CONFIG_GIC_SET_MULTIPLE_CPUS struct irq_desc *desc = irq_to_desc(d->irq); #endif - if (!force) - cpu = cpumask_any_and(mask_val, cpu_online_mask); - else - cpu = cpumask_first(mask_val); - if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids) return -EINVAL; @@ -769,9 +764,7 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start, } for_each_possible_cpu(cpu) { - u32 mpidr = cpu_logical_map(cpu); - u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); - unsigned long offset = percpu_offset * core_id; + unsigned long offset = percpu_offset * cpu_logical_map(cpu); *per_cpu_ptr(gic->dist_base.percpu_base, cpu) = dist_base + offset; 
*per_cpu_ptr(gic->cpu_base.percpu_base, cpu) = cpu_base + offset; } @@ -875,7 +868,6 @@ int __init gic_of_init(struct device_node *node, struct device_node *parent) } IRQCHIP_DECLARE(cortex_a15_gic, "arm,cortex-a15-gic", gic_of_init); IRQCHIP_DECLARE(cortex_a9_gic, "arm,cortex-a9-gic", gic_of_init); -IRQCHIP_DECLARE(cortex_a7_gic, "arm,cortex-a7-gic", gic_of_init); IRQCHIP_DECLARE(msm_8660_qgic, "qcom,msm-8660-qgic", gic_of_init); IRQCHIP_DECLARE(msm_qgic2, "qcom,msm-qgic2", gic_of_init); diff --git a/drivers/irqchip/spear-shirq.c b/drivers/irqchip/spear-shirq.c index 391b9cea73e..8527743b5ce 100644 --- a/drivers/irqchip/spear-shirq.c +++ b/drivers/irqchip/spear-shirq.c @@ -125,7 +125,7 @@ static struct spear_shirq spear320_shirq_ras2 = { }; static struct spear_shirq spear320_shirq_ras3 = { - .irq_nr = 7, + .irq_nr = 3, .irq_bit_off = 0, .invalid_irq = 1, .regs = { diff --git a/drivers/isdn/isdnloop/isdnloop.c b/drivers/isdn/isdnloop/isdnloop.c index 5a4da94aefb..02125e6a910 100644 --- a/drivers/isdn/isdnloop/isdnloop.c +++ b/drivers/isdn/isdnloop/isdnloop.c @@ -518,9 +518,9 @@ static isdnloop_stat isdnloop_cmd_table[] = static void isdnloop_fake_err(isdnloop_card *card) { - char buf[64]; + char buf[60]; - snprintf(buf, sizeof(buf), "E%s", card->omsg); + sprintf(buf, "E%s", card->omsg); isdnloop_fake(card, buf, -1); isdnloop_fake(card, "NAK", -1); } @@ -903,8 +903,6 @@ isdnloop_parse_cmd(isdnloop_card *card) case 7: /* 0x;EAZ */ p += 3; - if (strlen(p) >= sizeof(card->eazlist[0])) - break; strcpy(card->eazlist[ch - 1], p); break; case 8: @@ -1072,12 +1070,6 @@ isdnloop_start(isdnloop_card *card, isdnloop_sdef *sdefp) return -EBUSY; if (copy_from_user((char *) &sdef, (char *) sdefp, sizeof(sdef))) return -EFAULT; - - for (i = 0; i < 3; i++) { - if (!memchr(sdef.num[i], 0, sizeof(sdef.num[i]))) - return -EINVAL; - } - spin_lock_irqsave(&card->isdnloop_lock, flags); switch (sdef.ptype) { case ISDN_PTYPE_EURO: @@ -1135,7 +1127,7 @@ isdnloop_command(isdn_ctrl *c, isdnloop_card *card) { ulong a; int i; - char cbuf[80]; + char cbuf[60]; isdn_ctrl cmd; isdnloop_cdef cdef; @@ -1200,6 +1192,7 @@ isdnloop_command(isdn_ctrl *c, isdnloop_card *card) break; if ((c->arg & 255) < ISDNLOOP_BCH) { char *p; + char dial[50]; char dcode[4]; a = c->arg; @@ -1211,10 +1204,10 @@ isdnloop_command(isdn_ctrl *c, isdnloop_card *card) } else /* Normal Dial */ strcpy(dcode, "CAL"); - snprintf(cbuf, sizeof(cbuf), - "%02d;D%s_R%s,%02d,%02d,%s\n", (int) (a + 1), - dcode, p, c->parm.setup.si1, - c->parm.setup.si2, c->parm.setup.eazmsn); + strcpy(dial, p); + sprintf(cbuf, "%02d;D%s_R%s,%02d,%02d,%s\n", (int) (a + 1), + dcode, dial, c->parm.setup.si1, + c->parm.setup.si2, c->parm.setup.eazmsn); i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card); } break; diff --git a/drivers/leds/leds-pwm.c b/drivers/leds/leds-pwm.c index 5d64b243141..faf52c005e8 100644 --- a/drivers/leds/leds-pwm.c +++ b/drivers/leds/leds-pwm.c @@ -82,15 +82,6 @@ static inline size_t sizeof_pwm_leds_priv(int num_leds) (sizeof(struct led_pwm_data) * num_leds); } -static void led_pwm_cleanup(struct led_pwm_priv *priv) -{ - while (priv->num_leds--) { - led_classdev_unregister(&priv->leds[priv->num_leds].cdev); - if (priv->leds[priv->num_leds].can_sleep) - cancel_work_sync(&priv->leds[priv->num_leds].work); - } -} - static struct led_pwm_priv *led_pwm_create_of(struct platform_device *pdev) { struct device_node *node = pdev->dev.of_node; @@ -148,7 +139,8 @@ static struct led_pwm_priv *led_pwm_create_of(struct platform_device *pdev) return priv; err: - 
led_pwm_cleanup(priv); + while (priv->num_leds--) + led_classdev_unregister(&priv->leds[priv->num_leds].cdev); return NULL; } @@ -208,8 +200,8 @@ static int led_pwm_probe(struct platform_device *pdev) return 0; err: - priv->num_leds = i; - led_pwm_cleanup(priv); + while (i--) + led_classdev_unregister(&priv->leds[i].cdev); return ret; } @@ -217,8 +209,13 @@ err: static int led_pwm_remove(struct platform_device *pdev) { struct led_pwm_priv *priv = platform_get_drvdata(pdev); + int i; - led_pwm_cleanup(priv); + for (i = 0; i < priv->num_leds; i++) { + led_classdev_unregister(&priv->leds[i].cdev); + if (priv->leds[i].can_sleep) + cancel_work_sync(&priv->leds[i].work); + } return 0; } diff --git a/drivers/lguest/x86/core.c b/drivers/lguest/x86/core.c index 51692392633..f0a3347b644 100644 --- a/drivers/lguest/x86/core.c +++ b/drivers/lguest/x86/core.c @@ -700,7 +700,7 @@ void lguest_arch_setup_regs(struct lg_cpu *cpu, unsigned long start) * interrupts are enabled. We always leave interrupts enabled while * running the Guest. */ - regs->eflags = X86_EFLAGS_IF | X86_EFLAGS_FIXED; + regs->eflags = X86_EFLAGS_IF | X86_EFLAGS_BIT1; /* * The "Extended Instruction Pointer" register says where the Guest is diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c index c9b4ca9e069..a6e985fcceb 100644 --- a/drivers/md/dm-bufio.c +++ b/drivers/md/dm-bufio.c @@ -462,7 +462,6 @@ static void __relink_lru(struct dm_buffer *b, int dirty) c->n_buffers[dirty]++; b->list_mode = dirty; list_move(&b->lru_list, &c->lru[dirty]); - b->last_accessed = jiffies; } /*---------------------------------------------------------------- diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c index de737ba1d35..1af7255bbff 100644 --- a/drivers/md/dm-cache-metadata.c +++ b/drivers/md/dm-cache-metadata.c @@ -384,15 +384,6 @@ static int __open_metadata(struct dm_cache_metadata *cmd) disk_super = dm_block_data(sblock); - /* Verify the data block size hasn't changed */ - if (le32_to_cpu(disk_super->data_block_size) != cmd->data_block_size) { - DMERR("changing the data block size (from %u to %llu) is not supported", - le32_to_cpu(disk_super->data_block_size), - (unsigned long long)cmd->data_block_size); - r = -EINVAL; - goto bad; - } - r = __check_incompat_features(disk_super, cmd); if (r < 0) goto bad; @@ -520,9 +511,8 @@ static int __begin_transaction_flags(struct dm_cache_metadata *cmd, disk_super = dm_block_data(sblock); update_flags(disk_super, mutator); read_superblock_fields(cmd, disk_super); - dm_bm_unlock(sblock); - return dm_bm_flush(cmd->bm); + return dm_bm_flush_and_unlock(cmd->bm, sblock); } static int __begin_transaction(struct dm_cache_metadata *cmd) diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c index 677973641d2..516f9c922bb 100644 --- a/drivers/md/dm-cache-target.c +++ b/drivers/md/dm-cache-target.c @@ -861,13 +861,12 @@ static void issue_copy_real(struct dm_cache_migration *mg) int r; struct dm_io_region o_region, c_region; struct cache *cache = mg->cache; - sector_t cblock = from_cblock(mg->cblock); o_region.bdev = cache->origin_dev->bdev; o_region.count = cache->sectors_per_block; c_region.bdev = cache->cache_dev->bdev; - c_region.sector = cblock * cache->sectors_per_block; + c_region.sector = from_cblock(mg->cblock) * cache->sectors_per_block; c_region.count = cache->sectors_per_block; if (mg->writeback || mg->demote) { @@ -1954,8 +1953,6 @@ static int cache_create(struct cache_args *ca, struct cache **result) ti->num_discard_bios = 1; ti->discards_supported = 
true; ti->discard_zeroes_data_unsupported = true; - /* Discard bios must be split on a block boundary */ - ti->split_discard_bios = true; cache->features = ca->features; ti->per_bio_data_size = get_per_bio_data_size(cache); @@ -2177,18 +2174,20 @@ static int cache_map(struct dm_target *ti, struct bio *bio) bool discarded_block; struct dm_bio_prison_cell *cell; struct policy_result lookup_result; - struct per_bio_data *pb = init_per_bio_data(bio, pb_data_size); + struct per_bio_data *pb; - if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) { + if (from_oblock(block) > from_oblock(cache->origin_blocks)) { /* * This can only occur if the io goes to a partial block at * the end of the origin device. We don't cache these. * Just remap to the origin and carry on. */ - remap_to_origin(cache, bio); + remap_to_origin_clear_discard(cache, bio, block); return DM_MAPIO_REMAPPED; } + pb = init_per_bio_data(bio, pb_data_size); + if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) { defer_bio(cache, bio); return DM_MAPIO_SUBMITTED; diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 51d58890af1..0981b23f212 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -41,7 +41,6 @@ struct convert_context { unsigned int idx_out; sector_t cc_sector; atomic_t cc_pending; - struct ablkcipher_request *req; }; /* @@ -96,10 +95,6 @@ struct iv_benbi_private { * and encrypts / decrypts at the same time. */ enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID }; - -/* - * The fields in here must be read only after initialization. - */ struct crypt_config { struct dm_dev *dev; sector_t start; @@ -165,14 +160,6 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io); static u8 *iv_of_dmreq(struct crypt_config *cc, struct dm_crypt_request *dmreq); /* - * Use this to access cipher attributes that are the same for each CPU. 
- */ -static struct crypto_ablkcipher *any_tfm(struct crypt_config *cc) -{ - return cc->tfms[0]; -} - -/* * Different IV generation algorithms: * * plain: the initial vector is the 32-bit little-endian version of the sector @@ -494,15 +481,13 @@ static void kcryptd_async_done(struct crypto_async_request *async_req, static void crypt_alloc_req(struct crypt_config *cc, struct convert_context *ctx) { - unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1); - - if (!ctx->req) - ctx->req = mempool_alloc(cc->req_pool, GFP_NOIO); - - ablkcipher_request_set_tfm(ctx->req, cc->tfms[key_index]); - ablkcipher_request_set_callback(ctx->req, - CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP, - kcryptd_async_done, dmreq_of_req(cc, ctx->req)); + if (!cc->req) + cc->req = mempool_alloc(cc->req_pool, GFP_NOIO); + ablkcipher_request_set_tfm(cc->req, cc->tfm); + ablkcipher_request_set_callback(cc->req, CRYPTO_TFM_REQ_MAY_BACKLOG | + CRYPTO_TFM_REQ_MAY_SLEEP, + kcryptd_async_done, + dmreq_of_req(cc, cc->req)); } /* @@ -522,7 +507,7 @@ static int crypt_convert(struct crypt_config *cc, atomic_inc(&ctx->cc_pending); - r = crypt_convert_block(cc, ctx, ctx->req); + r = crypt_convert_block(cc, ctx, cc->req); switch (r) { /* async */ @@ -531,7 +516,7 @@ static int crypt_convert(struct crypt_config *cc, INIT_COMPLETION(ctx->restart); /* fall through*/ case -EINPROGRESS: - ctx->req = NULL; + cc->req = NULL; ctx->cc_sector++; continue; @@ -630,7 +615,6 @@ static struct dm_crypt_io *crypt_io_alloc(struct crypt_config *cc, io->sector = sector; io->error = 0; io->base_io = NULL; - io->ctx.req = NULL; atomic_set(&io->io_pending, 0); return io; @@ -656,8 +640,6 @@ static void crypt_dec_pending(struct dm_crypt_io *io) if (!atomic_dec_and_test(&io->io_pending)) return; - if (io->ctx.req) - mempool_free(io->ctx.req, cc->req_pool); mempool_free(io, cc->io_pool); if (likely(!base_io)) @@ -1226,7 +1208,6 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv) unsigned int key_size, opt_params; unsigned long long tmpll; int ret; - size_t iv_size_padding; struct dm_arg_set as; const char *opt_string; char dummy; @@ -1268,7 +1249,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv) ~(crypto_tfm_ctx_alignment() - 1); cc->req_pool = mempool_create_kmalloc_pool(MIN_IOS, cc->dmreq_start + - sizeof(struct dm_crypt_request) + iv_size_padding + cc->iv_size); + sizeof(struct dm_crypt_request) + cc->iv_size); if (!cc->req_pool) { ti->error = "Cannot allocate crypt request mempool"; goto bad; diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c index d1de1626a9d..ea49834377c 100644 --- a/drivers/md/dm-io.c +++ b/drivers/md/dm-io.c @@ -10,7 +10,6 @@ #include <linux/device-mapper.h> #include <linux/bio.h> -#include <linux/completion.h> #include <linux/mempool.h> #include <linux/module.h> #include <linux/sched.h> @@ -35,7 +34,7 @@ struct dm_io_client { struct io { unsigned long error_bits; atomic_t count; - struct completion *wait; + struct task_struct *sleeper; struct dm_io_client *client; io_notify_fn callback; void *context; @@ -123,8 +122,8 @@ static void dec_count(struct io *io, unsigned int region, int error) invalidate_kernel_vmap_range(io->vma_invalidate_address, io->vma_invalidate_size); - if (io->wait) - complete(io->wait); + if (io->sleeper) + wake_up_process(io->sleeper); else { unsigned long r = io->error_bits; @@ -387,7 +386,6 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, */ volatile char io_[sizeof(struct io) + __alignof__(struct io) - 1]; 
struct io *io = (struct io *)PTR_ALIGN(&io_, __alignof__(struct io)); - DECLARE_COMPLETION_ONSTACK(wait); if (num_regions > 1 && (rw & RW_MASK) != WRITE) { WARN_ON(1); @@ -396,7 +394,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, io->error_bits = 0; atomic_set(&io->count, 1); /* see dispatch_io() */ - io->wait = &wait; + io->sleeper = current; io->client = client; io->vma_invalidate_address = dp->vma_invalidate_address; @@ -404,7 +402,15 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, dispatch_io(rw, num_regions, where, dp, io, 1); - wait_for_completion_io(&wait); + while (1) { + set_current_state(TASK_UNINTERRUPTIBLE); + + if (!atomic_read(&io->count)) + break; + + io_schedule(); + } + set_current_state(TASK_RUNNING); if (error_bits) *error_bits = io->error_bits; @@ -427,7 +433,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions, io = mempool_alloc(client->pool, GFP_NOIO); io->error_bits = 0; atomic_set(&io->count, 1); /* see dispatch_io() */ - io->wait = NULL; + io->sleeper = NULL; io->client = client; io->callback = fn; io->context = context; diff --git a/drivers/md/dm-log-userspace-transfer.c b/drivers/md/dm-log-userspace-transfer.c index c69d0b78774..08d9a207259 100644 --- a/drivers/md/dm-log-userspace-transfer.c +++ b/drivers/md/dm-log-userspace-transfer.c @@ -272,7 +272,7 @@ int dm_ulog_tfr_init(void) r = cn_add_callback(&ulog_cn_id, "dmlogusr", cn_ulog_callback); if (r) { - kfree(prealloced_cn_msg); + cn_del_callback(&ulog_cn_id); return r; } diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c index 3b1503dc1f1..5f49d704f27 100644 --- a/drivers/md/dm-thin-metadata.c +++ b/drivers/md/dm-thin-metadata.c @@ -591,15 +591,6 @@ static int __open_metadata(struct dm_pool_metadata *pmd) disk_super = dm_block_data(sblock); - /* Verify the data block size hasn't changed */ - if (le32_to_cpu(disk_super->data_block_size) != pmd->data_block_size) { - DMERR("changing the data block size (from %u to %llu) is not supported", - le32_to_cpu(disk_super->data_block_size), - (unsigned long long)pmd->data_block_size); - r = -EINVAL; - goto bad_unlock_sblock; - } - r = __check_incompat_features(disk_super, pmd); if (r < 0) goto bad_unlock_sblock; diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c index 86a2a5e3b26..901aac27e52 100644 --- a/drivers/md/dm-thin.c +++ b/drivers/md/dm-thin.c @@ -1322,9 +1322,9 @@ static void process_deferred_bios(struct pool *pool) */ if (ensure_next_mapping(pool)) { spin_lock_irqsave(&pool->lock, flags); - bio_list_add(&pool->deferred_bios, bio); bio_list_merge(&pool->deferred_bios, &bios); spin_unlock_irqrestore(&pool->lock, flags); + break; } @@ -2647,8 +2647,7 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits) */ if (pt->adjusted_pf.discard_passdown) { data_limits = &bdev_get_queue(pt->data_dev->bdev)->limits; - limits->discard_granularity = max(data_limits->discard_granularity, - pool->sectors_per_block << SECTOR_SHIFT); + limits->discard_granularity = data_limits->discard_granularity; } else limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT; } diff --git a/drivers/md/md.c b/drivers/md/md.c index aaf77b07bb7..a2dda416c9c 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -7338,10 +7338,8 @@ void md_do_sync(struct md_thread *thread) /* just incase thread restarts... 
*/ if (test_bit(MD_RECOVERY_DONE, &mddev->recovery)) return; - if (mddev->ro) {/* never try to sync a read-only array */ - set_bit(MD_RECOVERY_INTR, &mddev->recovery); + if (mddev->ro) /* never try to sync a read-only array */ return; - } if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) { if (test_bit(MD_RECOVERY_CHECK, &mddev->recovery)) @@ -7447,19 +7445,6 @@ void md_do_sync(struct md_thread *thread) rdev->recovery_offset < j) j = rdev->recovery_offset; rcu_read_unlock(); - - /* If there is a bitmap, we need to make sure all - * writes that started before we added a spare - * complete before we start doing a recovery. - * Otherwise the write might complete and (via - * bitmap_endwrite) set a bit in the bitmap after the - * recovery has checked that bit and skipped that - * region. - */ - if (mddev->bitmap) { - mddev->pers->quiesce(mddev, 1); - mddev->pers->quiesce(mddev, 0); - } } printk(KERN_INFO "md: %s of RAID array %s\n", desc, mdname(mddev)); @@ -7803,7 +7788,6 @@ void md_check_recovery(struct mddev *mddev) /* There is no thread, but we need to call * ->spare_active and clear saved_raid_disk */ - set_bit(MD_RECOVERY_INTR, &mddev->recovery); md_reap_sync_thread(mddev); clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery); goto unlock; @@ -8497,8 +8481,7 @@ static int md_notify_reboot(struct notifier_block *this, if (mddev_trylock(mddev)) { if (mddev->pers) __md_stop_writes(mddev); - if (mddev->persistent) - mddev->safemode = 2; + mddev->safemode = 2; mddev_unlock(mddev); } need_delay = 1; diff --git a/drivers/md/persistent-data/dm-block-manager.c b/drivers/md/persistent-data/dm-block-manager.c index 6372d0bea53..81b513890e2 100644 --- a/drivers/md/persistent-data/dm-block-manager.c +++ b/drivers/md/persistent-data/dm-block-manager.c @@ -595,14 +595,25 @@ int dm_bm_unlock(struct dm_block *b) } EXPORT_SYMBOL_GPL(dm_bm_unlock); -int dm_bm_flush(struct dm_block_manager *bm) +int dm_bm_flush_and_unlock(struct dm_block_manager *bm, + struct dm_block *superblock) { + int r; + if (bm->read_only) return -EPERM; + r = dm_bufio_write_dirty_buffers(bm->bufio); + if (unlikely(r)) { + dm_bm_unlock(superblock); + return r; + } + + dm_bm_unlock(superblock); + return dm_bufio_write_dirty_buffers(bm->bufio); } -EXPORT_SYMBOL_GPL(dm_bm_flush); +EXPORT_SYMBOL_GPL(dm_bm_flush_and_unlock); void dm_bm_set_read_only(struct dm_block_manager *bm) { diff --git a/drivers/md/persistent-data/dm-block-manager.h b/drivers/md/persistent-data/dm-block-manager.h index f74c0462e5e..be5bff61be2 100644 --- a/drivers/md/persistent-data/dm-block-manager.h +++ b/drivers/md/persistent-data/dm-block-manager.h @@ -105,7 +105,8 @@ int dm_bm_unlock(struct dm_block *b); * * This method always blocks. */ -int dm_bm_flush(struct dm_block_manager *bm); +int dm_bm_flush_and_unlock(struct dm_block_manager *bm, + struct dm_block *superblock); /* * Switches the bm to a read only mode. 
Once read-only mode diff --git a/drivers/md/persistent-data/dm-transaction-manager.c b/drivers/md/persistent-data/dm-transaction-manager.c index 3bc30a0ae3d..81da1a26042 100644 --- a/drivers/md/persistent-data/dm-transaction-manager.c +++ b/drivers/md/persistent-data/dm-transaction-manager.c @@ -154,7 +154,7 @@ int dm_tm_pre_commit(struct dm_transaction_manager *tm) if (r < 0) return r; - return dm_bm_flush(tm->bm); + return 0; } EXPORT_SYMBOL_GPL(dm_tm_pre_commit); @@ -164,9 +164,8 @@ int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *root) return -EWOULDBLOCK; wipe_shadow_table(tm); - dm_bm_unlock(root); - return dm_bm_flush(tm->bm); + return dm_bm_flush_and_unlock(tm->bm, root); } EXPORT_SYMBOL_GPL(dm_tm_commit); diff --git a/drivers/md/persistent-data/dm-transaction-manager.h b/drivers/md/persistent-data/dm-transaction-manager.h index 2772ed2a781..b5b139076ca 100644 --- a/drivers/md/persistent-data/dm-transaction-manager.h +++ b/drivers/md/persistent-data/dm-transaction-manager.h @@ -38,17 +38,18 @@ struct dm_transaction_manager *dm_tm_create_non_blocking_clone(struct dm_transac /* * We use a 2-phase commit here. * - * i) Make all changes for the transaction *except* for the superblock. - * Then call dm_tm_pre_commit() to flush them to disk. + * i) In the first phase the block manager is told to start flushing, and + * the changes to the space map are written to disk. You should interrogate + * your particular space map to get detail of its root node etc. to be + * included in your superblock. * - * ii) Lock your superblock. Update. Then call dm_tm_commit() which will - * unlock the superblock and flush it. No other blocks should be updated - * during this period. Care should be taken to never unlock a partially - * updated superblock; perform any operations that could fail *before* you - * take the superblock lock. + * ii) @root will be committed last. You shouldn't use more than the + * first 512 bytes of @root if you wish the transaction to survive a power + * failure. You *must* have a write lock held on @root for both stage (i) + * and (ii). The commit will drop the write lock. */ int dm_tm_pre_commit(struct dm_transaction_manager *tm); -int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *superblock); +int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *root); /* * These methods are the only way to get hold of a writeable block. diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index e885dbf08c4..e73740b55ae 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -94,7 +94,6 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data) struct pool_info *pi = data; struct r1bio *r1_bio; struct bio *bio; - int need_pages; int i, j; r1_bio = r1bio_pool_alloc(gfp_flags, pi); @@ -117,15 +116,15 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data) * RESYNC_PAGES for each bio. 
*/ if (test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery)) - need_pages = pi->raid_disks; + j = pi->raid_disks; else - need_pages = 1; - for (j = 0; j < need_pages; j++) { + j = 1; + while(j--) { bio = r1_bio->bios[j]; bio->bi_vcnt = RESYNC_PAGES; if (bio_alloc_pages(bio, gfp_flags)) - goto out_free_pages; + goto out_free_bio; } /* If not user-requests, copy the page pointers to all bios */ if (!test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery)) { @@ -139,14 +138,6 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data) return r1_bio; -out_free_pages: - while (--j >= 0) { - struct bio_vec *bv; - - bio_for_each_segment_all(bv, r1_bio->bios[j], i) - __free_page(bv->bv_page); - } - out_free_bio: while (++j < pi->raid_disks) bio_put(r1_bio->bios[j]); @@ -1406,12 +1397,12 @@ static void error(struct mddev *mddev, struct md_rdev *rdev) mddev->degraded++; set_bit(Faulty, &rdev->flags); spin_unlock_irqrestore(&conf->device_lock, flags); + /* + * if recovery is running, make sure it aborts. + */ + set_bit(MD_RECOVERY_INTR, &mddev->recovery); } else set_bit(Faulty, &rdev->flags); - /* - * if recovery is running, make sure it aborts. - */ - set_bit(MD_RECOVERY_INTR, &mddev->recovery); set_bit(MD_CHANGE_DEVS, &mddev->flags); printk(KERN_ALERT "md/raid1:%s: Disk failure on %s, disabling device.\n" @@ -2051,7 +2042,7 @@ static void fix_read_error(struct r1conf *conf, int read_disk, d--; rdev = conf->mirrors[d].rdev; if (rdev && - !test_bit(Faulty, &rdev->flags)) + test_bit(In_sync, &rdev->flags)) r1_sync_page_io(rdev, sect, s, conf->tmppage, WRITE); } @@ -2063,7 +2054,7 @@ static void fix_read_error(struct r1conf *conf, int read_disk, d--; rdev = conf->mirrors[d].rdev; if (rdev && - !test_bit(Faulty, &rdev->flags)) { + test_bit(In_sync, &rdev->flags)) { if (r1_sync_page_io(rdev, sect, s, conf->tmppage, READ)) { atomic_add(s, &rdev->corrected_errors); diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index a1ea2a75391..d2f8cd332b4 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -1681,11 +1681,11 @@ static void error(struct mddev *mddev, struct md_rdev *rdev) spin_lock_irqsave(&conf->device_lock, flags); mddev->degraded++; spin_unlock_irqrestore(&conf->device_lock, flags); + /* + * if recovery is running, make sure it aborts. + */ + set_bit(MD_RECOVERY_INTR, &mddev->recovery); } - /* - * If recovery is running, make sure it aborts. 
- */ - set_bit(MD_RECOVERY_INTR, &mddev->recovery); set_bit(Blocked, &rdev->flags); set_bit(Faulty, &rdev->flags); set_bit(MD_CHANGE_DEVS, &mddev->flags); @@ -2948,7 +2948,6 @@ static sector_t sync_request(struct mddev *mddev, sector_t sector_nr, */ if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery)) { end_reshape(conf); - close_sync(conf); return 0; } @@ -4399,7 +4398,7 @@ read_more: read_bio->bi_private = r10_bio; read_bio->bi_end_io = end_sync_read; read_bio->bi_rw = READ; - read_bio->bi_flags &= (~0UL << BIO_RESET_BITS); + read_bio->bi_flags &= ~(BIO_POOL_MASK - 1); read_bio->bi_flags |= 1 << BIO_UPTODATE; read_bio->bi_vcnt = 0; read_bio->bi_size = 0; diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 2332b5ced0d..5e3c25d4562 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -60,10 +60,6 @@ #include "raid0.h" #include "bitmap.h" -static bool devices_handle_discard_safely = false; -module_param(devices_handle_discard_safely, bool, 0644); -MODULE_PARM_DESC(devices_handle_discard_safely, - "Set to Y if all devices in each array reliably return zeroes on reads from discarded regions"); /* * Stripe cache */ @@ -3565,8 +3561,6 @@ static void handle_stripe(struct stripe_head *sh) set_bit(R5_Wantwrite, &dev->flags); if (prexor) continue; - if (s.failed > 1) - continue; if (!test_bit(R5_Insync, &dev->flags) || ((i == sh->pd_idx || i == sh->qd_idx) && s.failed == 0)) @@ -5615,7 +5609,7 @@ static int run(struct mddev *mddev) mddev->queue->limits.discard_granularity = stripe; /* * unaligned part of discard request will be ignored, so can't - * guarantee discard_zeroes_data + * guarantee discard_zerors_data */ mddev->queue->limits.discard_zeroes_data = 0; @@ -5640,18 +5634,6 @@ static int run(struct mddev *mddev) !bdev_get_queue(rdev->bdev)-> limits.discard_zeroes_data) discard_supported = false; - /* Unfortunately, discard_zeroes_data is not currently - * a guarantee - just a hint. So we only allow DISCARD - * if the sysadmin has confirmed that only safe devices - * are in use by setting a module parameter. - */ - if (!devices_handle_discard_safely) { - if (discard_supported) { - pr_info("md/raid456: discard support disabled due to uncertainty.\n"); - pr_info("Set raid456.devices_handle_discard_safely=Y to override.\n"); - } - discard_supported = false; - } } if (discard_supported && diff --git a/drivers/media/dvb-frontends/ds3000.c b/drivers/media/dvb-frontends/ds3000.c index 22e8c2032f6..1e344b03327 100644 --- a/drivers/media/dvb-frontends/ds3000.c +++ b/drivers/media/dvb-frontends/ds3000.c @@ -864,13 +864,6 @@ struct dvb_frontend *ds3000_attach(const struct ds3000_config *config, memcpy(&state->frontend.ops, &ds3000_ops, sizeof(struct dvb_frontend_ops)); state->frontend.demodulator_priv = state; - - /* - * Some devices like T480 starts with voltage on. Be sure - * to turn voltage off during init, as this can otherwise - * interfere with Unicable SCR systems. 
- */ - ds3000_set_voltage(&state->frontend, SEC_VOLTAGE_OFF); return &state->frontend; error3: diff --git a/drivers/media/dvb-frontends/m88rs2000.c b/drivers/media/dvb-frontends/m88rs2000.c index c7a1c8eba47..02699c11101 100644 --- a/drivers/media/dvb-frontends/m88rs2000.c +++ b/drivers/media/dvb-frontends/m88rs2000.c @@ -712,22 +712,6 @@ static int m88rs2000_get_frontend(struct dvb_frontend *fe) return 0; } -static int m88rs2000_get_tune_settings(struct dvb_frontend *fe, - struct dvb_frontend_tune_settings *tune) -{ - struct dtv_frontend_properties *c = &fe->dtv_property_cache; - - if (c->symbol_rate > 3000000) - tune->min_delay_ms = 2000; - else - tune->min_delay_ms = 3000; - - tune->step_size = c->symbol_rate / 16000; - tune->max_drift = c->symbol_rate / 2000; - - return 0; -} - static int m88rs2000_i2c_gate_ctrl(struct dvb_frontend *fe, int enable) { struct m88rs2000_state *state = fe->demodulator_priv; @@ -759,7 +743,7 @@ static struct dvb_frontend_ops m88rs2000_ops = { .symbol_rate_tolerance = 500, /* ppm */ .caps = FE_CAN_FEC_1_2 | FE_CAN_FEC_2_3 | FE_CAN_FEC_3_4 | FE_CAN_FEC_5_6 | FE_CAN_FEC_7_8 | - FE_CAN_QPSK | FE_CAN_INVERSION_AUTO | + FE_CAN_QPSK | FE_CAN_FEC_AUTO }, @@ -779,7 +763,6 @@ static struct dvb_frontend_ops m88rs2000_ops = { .set_frontend = m88rs2000_set_frontend, .get_frontend = m88rs2000_get_frontend, - .get_tune_settings = m88rs2000_get_tune_settings, }; struct dvb_frontend *m88rs2000_attach(const struct m88rs2000_config *config, diff --git a/drivers/media/dvb-frontends/tda10071.c b/drivers/media/dvb-frontends/tda10071.c index def7812d7b2..36eb27d3fdf 100644 --- a/drivers/media/dvb-frontends/tda10071.c +++ b/drivers/media/dvb-frontends/tda10071.c @@ -667,7 +667,6 @@ static int tda10071_set_frontend(struct dvb_frontend *fe) struct dtv_frontend_properties *c = &fe->dtv_property_cache; int ret, i; u8 mode, rolloff, pilot, inversion, div; - fe_modulation_t modulation; dev_dbg(&priv->i2c->dev, "%s: delivery_system=%d modulation=%d " \ "frequency=%d symbol_rate=%d inversion=%d pilot=%d " \ @@ -702,13 +701,10 @@ static int tda10071_set_frontend(struct dvb_frontend *fe) switch (c->delivery_system) { case SYS_DVBS: - modulation = QPSK; rolloff = 0; pilot = 2; break; case SYS_DVBS2: - modulation = c->modulation; - switch (c->rolloff) { case ROLLOFF_20: rolloff = 2; @@ -753,7 +749,7 @@ static int tda10071_set_frontend(struct dvb_frontend *fe) for (i = 0, mode = 0xff; i < ARRAY_SIZE(TDA10071_MODCOD); i++) { if (c->delivery_system == TDA10071_MODCOD[i].delivery_system && - modulation == TDA10071_MODCOD[i].modulation && + c->modulation == TDA10071_MODCOD[i].modulation && c->fec_inner == TDA10071_MODCOD[i].fec) { mode = TDA10071_MODCOD[i].val; dev_dbg(&priv->i2c->dev, "%s: mode found=%02x\n", diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c index 3ead3a83f04..617ad3fff4a 100644 --- a/drivers/media/i2c/ov7670.c +++ b/drivers/media/i2c/ov7670.c @@ -1110,7 +1110,7 @@ static int ov7670_enum_framesizes(struct v4l2_subdev *sd, * windows that fall outside that. 
*/ for (i = 0; i < n_win_sizes; i++) { - struct ov7670_win_size *win = &info->devtype->win_sizes[i]; + struct ov7670_win_size *win = &info->devtype->win_sizes[index]; if (info->min_width && win->width < info->min_width) continue; if (info->min_height && win->height < info->min_height) diff --git a/drivers/media/i2c/tda7432.c b/drivers/media/i2c/tda7432.c index 09f4387dbc4..28b5121881f 100644 --- a/drivers/media/i2c/tda7432.c +++ b/drivers/media/i2c/tda7432.c @@ -293,7 +293,7 @@ static int tda7432_s_ctrl(struct v4l2_ctrl *ctrl) if (t->mute->val) { lf |= TDA7432_MUTE; lr |= TDA7432_MUTE; - rf |= TDA7432_MUTE; + lf |= TDA7432_MUTE; rr |= TDA7432_MUTE; } /* Mute & update balance*/ diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c index fdb5840f034..1957c0df08f 100644 --- a/drivers/media/media-device.c +++ b/drivers/media/media-device.c @@ -93,7 +93,6 @@ static long media_device_enum_entities(struct media_device *mdev, struct media_entity *ent; struct media_entity_desc u_ent; - memset(&u_ent, 0, sizeof(u_ent)); if (copy_from_user(&u_ent.id, &uent->id, sizeof(u_ent.id))) return -EFAULT; @@ -106,6 +105,8 @@ static long media_device_enum_entities(struct media_device *mdev, if (ent->name) { strncpy(u_ent.name, ent->name, sizeof(u_ent.name)); u_ent.name[sizeof(u_ent.name) - 1] = '\0'; + } else { + memset(u_ent.name, 0, sizeof(u_ent.name)); } u_ent.type = ent->type; u_ent.revision = ent->revision; diff --git a/drivers/media/pci/cx18/cx18-driver.c b/drivers/media/pci/cx18/cx18-driver.c index 018cb904533..13c9718a5ac 100644 --- a/drivers/media/pci/cx18/cx18-driver.c +++ b/drivers/media/pci/cx18/cx18-driver.c @@ -327,16 +327,13 @@ void cx18_read_eeprom(struct cx18 *cx, struct tveeprom *tv) struct i2c_client *c; u8 eedata[256]; - memset(tv, 0, sizeof(*tv)); - c = kzalloc(sizeof(*c), GFP_KERNEL); - if (!c) - return; strlcpy(c->name, "cx18 tveeprom tmp", sizeof(c->name)); c->adapter = &cx->i2c_adap[0]; c->addr = 0xa0 >> 1; + memset(tv, 0, sizeof(*tv)); if (tveeprom_read(c, eedata, sizeof(eedata))) goto ret; @@ -1092,7 +1089,6 @@ static int cx18_probe(struct pci_dev *pci_dev, setup.addr = ADDR_UNSET; setup.type = cx->options.tuner; setup.mode_mask = T_ANALOG_TV; /* matches TV tuners */ - setup.config = NULL; if (cx->options.radio > 0) setup.mode_mask |= T_RADIO; setup.tuner_callback = (setup.type == TUNER_XC2028) ? 
diff --git a/drivers/media/pci/ivtv/ivtv-alsa-pcm.c b/drivers/media/pci/ivtv/ivtv-alsa-pcm.c index 7a9b98bc208..e1863dbf4ed 100644 --- a/drivers/media/pci/ivtv/ivtv-alsa-pcm.c +++ b/drivers/media/pci/ivtv/ivtv-alsa-pcm.c @@ -159,12 +159,6 @@ static int snd_ivtv_pcm_capture_open(struct snd_pcm_substream *substream) /* Instruct the CX2341[56] to start sending packets */ snd_ivtv_lock(itvsc); - - if (ivtv_init_on_first_open(itv)) { - snd_ivtv_unlock(itvsc); - return -ENXIO; - } - s = &itv->streams[IVTV_ENC_STREAM_TYPE_PCM]; v4l2_fh_init(&item.fh, s->vdev); diff --git a/drivers/media/pci/saa7134/saa7134-cards.c b/drivers/media/pci/saa7134/saa7134-cards.c index e87a734637a..d45e7f6ff33 100644 --- a/drivers/media/pci/saa7134/saa7134-cards.c +++ b/drivers/media/pci/saa7134/saa7134-cards.c @@ -8045,8 +8045,8 @@ int saa7134_board_init2(struct saa7134_dev *dev) break; } /* switch() */ - /* initialize tuner (don't do this when resuming) */ - if (!dev->insuspend && TUNER_ABSENT != dev->tuner_type) { + /* initialize tuner */ + if (TUNER_ABSENT != dev->tuner_type) { int has_demod = (dev->tda9887_conf & TDA9887_PRESENT); /* Note: radio tuner address is always filled in, diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c index 3e8ef11f67a..1d7dbd5c0fb 100644 --- a/drivers/media/platform/omap3isp/isp.c +++ b/drivers/media/platform/omap3isp/isp.c @@ -2249,7 +2249,6 @@ static int isp_probe(struct platform_device *pdev) ret = iommu_attach_device(isp->domain, &pdev->dev); if (ret) { dev_err(&pdev->dev, "can't attach iommu device: %d\n", ret); - ret = -EPROBE_DEFER; goto free_domain; } @@ -2288,7 +2287,6 @@ detach_dev: iommu_detach_device(isp->domain, &pdev->dev); free_domain: iommu_domain_free(isp->domain); - isp->domain = NULL; error_isp: isp_xclk_cleanup(isp); omap3isp_put(isp); diff --git a/drivers/media/platform/omap3isp/isppreview.c b/drivers/media/platform/omap3isp/isppreview.c index e2e4610d555..cd8831aebde 100644 --- a/drivers/media/platform/omap3isp/isppreview.c +++ b/drivers/media/platform/omap3isp/isppreview.c @@ -1079,7 +1079,6 @@ static void preview_config_input_format(struct isp_prev_device *prev, */ static void preview_config_input_size(struct isp_prev_device *prev, u32 active) { - const struct v4l2_mbus_framefmt *format = &prev->formats[PREV_PAD_SINK]; struct isp_device *isp = to_isp_device(prev); unsigned int sph = prev->crop.left; unsigned int eph = prev->crop.left + prev->crop.width - 1; @@ -1087,14 +1086,6 @@ static void preview_config_input_size(struct isp_prev_device *prev, u32 active) unsigned int elv = prev->crop.top + prev->crop.height - 1; u32 features; - if (format->code != V4L2_MBUS_FMT_Y8_1X8 && - format->code != V4L2_MBUS_FMT_Y10_1X10) { - sph -= 2; - eph += 2; - slv -= 2; - elv += 2; - } - features = (prev->params.params[0].features & active) | (prev->params.params[1].features & ~active); diff --git a/drivers/media/tuners/fc2580.c b/drivers/media/tuners/fc2580.c index f0c9c42867d..3aecaf46509 100644 --- a/drivers/media/tuners/fc2580.c +++ b/drivers/media/tuners/fc2580.c @@ -195,7 +195,7 @@ static int fc2580_set_params(struct dvb_frontend *fe) f_ref = 2UL * priv->cfg->clock / r_val; n_val = div_u64_rem(f_vco, f_ref, &k_val); - k_val_reg = div_u64(1ULL * k_val * (1 << 20), f_ref); + k_val_reg = 1UL * k_val * (1 << 20) / f_ref; ret = fc2580_wr_reg(priv, 0x18, r18_val | ((k_val_reg >> 16) & 0xff)); if (ret < 0) @@ -348,8 +348,8 @@ static int fc2580_set_params(struct dvb_frontend *fe) if (ret < 0) goto err; - ret = fc2580_wr_reg(priv, 0x37, 
div_u64(1ULL * priv->cfg->clock * - fc2580_if_filter_lut[i].mul, 1000000000)); + ret = fc2580_wr_reg(priv, 0x37, 1UL * priv->cfg->clock * \ + fc2580_if_filter_lut[i].mul / 1000000000); if (ret < 0) goto err; diff --git a/drivers/media/tuners/fc2580_priv.h b/drivers/media/tuners/fc2580_priv.h index 646c9945213..be38a9e637e 100644 --- a/drivers/media/tuners/fc2580_priv.h +++ b/drivers/media/tuners/fc2580_priv.h @@ -22,7 +22,6 @@ #define FC2580_PRIV_H #include "fc2580.h" -#include <linux/math64.h> struct fc2580_reg_val { u8 reg; diff --git a/drivers/media/tuners/xc4000.c b/drivers/media/tuners/xc4000.c index e71decbfd0a..2018befabb5 100644 --- a/drivers/media/tuners/xc4000.c +++ b/drivers/media/tuners/xc4000.c @@ -93,7 +93,7 @@ struct xc4000_priv { struct firmware_description *firm; int firm_size; u32 if_khz; - u32 freq_hz, freq_offset; + u32 freq_hz; u32 bandwidth; u8 video_standard; u8 rf_mode; @@ -1157,14 +1157,14 @@ static int xc4000_set_params(struct dvb_frontend *fe) case SYS_ATSC: dprintk(1, "%s() VSB modulation\n", __func__); priv->rf_mode = XC_RF_MODE_AIR; - priv->freq_offset = 1750000; + priv->freq_hz = c->frequency - 1750000; priv->video_standard = XC4000_DTV6; type = DTV6; break; case SYS_DVBC_ANNEX_B: dprintk(1, "%s() QAM modulation\n", __func__); priv->rf_mode = XC_RF_MODE_CABLE; - priv->freq_offset = 1750000; + priv->freq_hz = c->frequency - 1750000; priv->video_standard = XC4000_DTV6; type = DTV6; break; @@ -1173,23 +1173,23 @@ static int xc4000_set_params(struct dvb_frontend *fe) dprintk(1, "%s() OFDM\n", __func__); if (bw == 0) { if (c->frequency < 400000000) { - priv->freq_offset = 2250000; + priv->freq_hz = c->frequency - 2250000; } else { - priv->freq_offset = 2750000; + priv->freq_hz = c->frequency - 2750000; } priv->video_standard = XC4000_DTV7_8; type = DTV78; } else if (bw <= 6000000) { priv->video_standard = XC4000_DTV6; - priv->freq_offset = 1750000; + priv->freq_hz = c->frequency - 1750000; type = DTV6; } else if (bw <= 7000000) { priv->video_standard = XC4000_DTV7; - priv->freq_offset = 2250000; + priv->freq_hz = c->frequency - 2250000; type = DTV7; } else { priv->video_standard = XC4000_DTV8; - priv->freq_offset = 2750000; + priv->freq_hz = c->frequency - 2750000; type = DTV8; } priv->rf_mode = XC_RF_MODE_AIR; @@ -1200,8 +1200,6 @@ static int xc4000_set_params(struct dvb_frontend *fe) goto fail; } - priv->freq_hz = c->frequency - priv->freq_offset; - dprintk(1, "%s() frequency=%d (compensated)\n", __func__, priv->freq_hz); @@ -1522,7 +1520,7 @@ static int xc4000_get_frequency(struct dvb_frontend *fe, u32 *freq) { struct xc4000_priv *priv = fe->tuner_priv; - *freq = priv->freq_hz + priv->freq_offset; + *freq = priv->freq_hz; if (debug) { mutex_lock(&priv->lock); diff --git a/drivers/media/tuners/xc5000.c b/drivers/media/tuners/xc5000.c index b2d9e9cb97f..5cd09a681b6 100644 --- a/drivers/media/tuners/xc5000.c +++ b/drivers/media/tuners/xc5000.c @@ -55,7 +55,7 @@ struct xc5000_priv { u32 if_khz; u16 xtal_khz; - u32 freq_hz, freq_offset; + u32 freq_hz; u32 bandwidth; u8 video_standard; u8 rf_mode; @@ -755,13 +755,13 @@ static int xc5000_set_params(struct dvb_frontend *fe) case SYS_ATSC: dprintk(1, "%s() VSB modulation\n", __func__); priv->rf_mode = XC_RF_MODE_AIR; - priv->freq_offset = 1750000; + priv->freq_hz = freq - 1750000; priv->video_standard = DTV6; break; case SYS_DVBC_ANNEX_B: dprintk(1, "%s() QAM modulation\n", __func__); priv->rf_mode = XC_RF_MODE_CABLE; - priv->freq_offset = 1750000; + priv->freq_hz = freq - 1750000; priv->video_standard = DTV6; break; 
case SYS_ISDBT: @@ -776,15 +776,15 @@ static int xc5000_set_params(struct dvb_frontend *fe) switch (bw) { case 6000000: priv->video_standard = DTV6; - priv->freq_offset = 1750000; + priv->freq_hz = freq - 1750000; break; case 7000000: priv->video_standard = DTV7; - priv->freq_offset = 2250000; + priv->freq_hz = freq - 2250000; break; case 8000000: priv->video_standard = DTV8; - priv->freq_offset = 2750000; + priv->freq_hz = freq - 2750000; break; default: printk(KERN_ERR "xc5000 bandwidth not set!\n"); @@ -798,15 +798,15 @@ static int xc5000_set_params(struct dvb_frontend *fe) priv->rf_mode = XC_RF_MODE_CABLE; if (bw <= 6000000) { priv->video_standard = DTV6; - priv->freq_offset = 1750000; + priv->freq_hz = freq - 1750000; b = 6; } else if (bw <= 7000000) { priv->video_standard = DTV7; - priv->freq_offset = 2250000; + priv->freq_hz = freq - 2250000; b = 7; } else { priv->video_standard = DTV7_8; - priv->freq_offset = 2750000; + priv->freq_hz = freq - 2750000; b = 8; } dprintk(1, "%s() Bandwidth %dMHz (%d)\n", __func__, @@ -817,8 +817,6 @@ static int xc5000_set_params(struct dvb_frontend *fe) return -EINVAL; } - priv->freq_hz = freq - priv->freq_offset; - dprintk(1, "%s() frequency=%d (compensated to %d)\n", __func__, freq, priv->freq_hz); @@ -1069,7 +1067,7 @@ static int xc5000_get_frequency(struct dvb_frontend *fe, u32 *freq) { struct xc5000_priv *priv = fe->tuner_priv; dprintk(1, "%s()\n", __func__); - *freq = priv->freq_hz + priv->freq_offset; + *freq = priv->freq_hz; return 0; } diff --git a/drivers/media/usb/au0828/au0828-video.c b/drivers/media/usb/au0828/au0828-video.c index 98e1b937b50..75ac9947cda 100644 --- a/drivers/media/usb/au0828/au0828-video.c +++ b/drivers/media/usb/au0828/au0828-video.c @@ -788,27 +788,11 @@ static int au0828_i2s_init(struct au0828_dev *dev) /* * Auvitek au0828 analog stream enable + * Please set interface0 to AS5 before enable the stream */ static int au0828_analog_stream_enable(struct au0828_dev *d) { - struct usb_interface *iface; - int ret; - dprintk(1, "au0828_analog_stream_enable called\n"); - - iface = usb_ifnum_to_if(d->usbdev, 0); - if (iface && iface->cur_altsetting->desc.bAlternateSetting != 5) { - dprintk(1, "Changing intf#0 to alt 5\n"); - /* set au0828 interface0 to AS5 here again */ - ret = usb_set_interface(d->usbdev, 0, 5); - if (ret < 0) { - printk(KERN_INFO "Au0828 can't set alt setting to 5!\n"); - return -EBUSY; - } - } - - /* FIXME: size should be calculated using d->width, d->height */ - au0828_writereg(d, AU0828_SENSORCTRL_VBI_103, 0x00); au0828_writereg(d, 0x106, 0x00); /* set x position */ @@ -1019,6 +1003,15 @@ static int au0828_v4l2_open(struct file *filp) return -ERESTARTSYS; } if (dev->users == 0) { + /* set au0828 interface0 to AS5 here again */ + ret = usb_set_interface(dev->usbdev, 0, 5); + if (ret < 0) { + mutex_unlock(&dev->lock); + printk(KERN_INFO "Au0828 can't set alternate to 5!\n"); + kfree(fh); + return -EBUSY; + } + au0828_analog_stream_enable(dev); au0828_analog_stream_reset(dev); @@ -1260,6 +1253,13 @@ static int au0828_set_format(struct au0828_dev *dev, unsigned int cmd, } } + /* set au0828 interface0 to AS5 here again */ + ret = usb_set_interface(dev->usbdev, 0, 5); + if (ret < 0) { + printk(KERN_INFO "Au0828 can't set alt setting to 5!\n"); + return -EBUSY; + } + au0828_analog_stream_enable(dev); return 0; diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c index a1c641e1836..20e345d9fe8 100644 --- a/drivers/media/usb/dvb-usb/cxusb.c +++ b/drivers/media/usb/dvb-usb/cxusb.c @@ 
-149,7 +149,6 @@ static int cxusb_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int num) { struct dvb_usb_device *d = i2c_get_adapdata(adap); - int ret; int i; if (mutex_lock_interruptible(&d->i2c_mutex) < 0) @@ -174,8 +173,7 @@ static int cxusb_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], if (1 + msg[i].len > sizeof(ibuf)) { warn("i2c rd: len=%d is too big!\n", msg[i].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = 0; obuf[1] = msg[i].len; @@ -195,14 +193,12 @@ static int cxusb_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], if (3 + msg[i].len > sizeof(obuf)) { warn("i2c wr: len=%d is too big!\n", msg[i].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } if (1 + msg[i + 1].len > sizeof(ibuf)) { warn("i2c rd: len=%d is too big!\n", msg[i + 1].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[i].len; obuf[1] = msg[i+1].len; @@ -227,8 +223,7 @@ static int cxusb_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], if (2 + msg[i].len > sizeof(obuf)) { warn("i2c wr: len=%d is too big!\n", msg[i].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[i].addr; obuf[1] = msg[i].len; @@ -242,14 +237,8 @@ static int cxusb_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], } } - if (i == num) - ret = num; - else - ret = -EREMOTEIO; - -unlock: mutex_unlock(&d->i2c_mutex); - return ret; + return i == num ? num : -EREMOTEIO; } static u32 cxusb_i2c_func(struct i2c_adapter *adapter) diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c index 4170a45d17e..71b22f5a05c 100644 --- a/drivers/media/usb/dvb-usb/dw2102.c +++ b/drivers/media/usb/dvb-usb/dw2102.c @@ -301,7 +301,6 @@ static int dw2102_serit_i2c_transfer(struct i2c_adapter *adap, static int dw2102_earda_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], int num) { struct dvb_usb_device *d = i2c_get_adapdata(adap); - int ret; if (!d) return -ENODEV; @@ -317,8 +316,7 @@ static int dw2102_earda_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg ms if (2 + msg[1].len > sizeof(ibuf)) { warn("i2c rd: len=%d is too big!\n", msg[1].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[0].addr << 1; @@ -342,8 +340,7 @@ static int dw2102_earda_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg ms if (2 + msg[0].len > sizeof(obuf)) { warn("i2c wr: len=%d is too big!\n", msg[1].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[0].addr << 1; @@ -360,8 +357,7 @@ static int dw2102_earda_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg ms if (2 + msg[0].len > sizeof(obuf)) { warn("i2c wr: len=%d is too big!\n", msg[1].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[0].addr << 1; @@ -390,17 +386,15 @@ static int dw2102_earda_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg ms break; } - ret = num; -unlock: mutex_unlock(&d->i2c_mutex); - return ret; + return num; } static int dw2104_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], int num) { struct dvb_usb_device *d = i2c_get_adapdata(adap); - int len, i, j, ret; + int len, i, j; if (!d) return -ENODEV; @@ -436,8 +430,7 @@ static int dw2104_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], i if (2 + msg[j].len > sizeof(ibuf)) { warn("i2c rd: len=%d is too big!\n", msg[j].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } dw210x_op_rw(d->udev, 0xc3, @@ -473,8 +466,7 @@ static int 
dw2104_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], i if (2 + msg[j].len > sizeof(obuf)) { warn("i2c wr: len=%d is too big!\n", msg[j].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[j].addr << 1; @@ -489,18 +481,15 @@ static int dw2104_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], i } } - ret = num; -unlock: mutex_unlock(&d->i2c_mutex); - return ret; + return num; } static int dw3101_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], int num) { struct dvb_usb_device *d = i2c_get_adapdata(adap); - int ret; int i; if (!d) @@ -517,8 +506,7 @@ static int dw3101_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], if (2 + msg[1].len > sizeof(ibuf)) { warn("i2c rd: len=%d is too big!\n", msg[1].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[0].addr << 1; obuf[1] = msg[0].len; @@ -542,8 +530,7 @@ static int dw3101_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], if (2 + msg[0].len > sizeof(obuf)) { warn("i2c wr: len=%d is too big!\n", msg[0].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[0].addr << 1; obuf[1] = msg[0].len; @@ -569,11 +556,9 @@ static int dw3101_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], msg[i].flags == 0 ? ">>>" : "<<<"); debug_dump(msg[i].buf, msg[i].len, deb_xfer); } - ret = num; -unlock: mutex_unlock(&d->i2c_mutex); - return ret; + return num; } static int s6x0_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], @@ -581,7 +566,7 @@ static int s6x0_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], { struct dvb_usb_device *d = i2c_get_adapdata(adap); struct usb_device *udev; - int len, i, j, ret; + int len, i, j; if (!d) return -ENODEV; @@ -633,8 +618,7 @@ static int s6x0_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], if (msg[j].len > sizeof(ibuf)) { warn("i2c rd: len=%d is too big!\n", msg[j].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } dw210x_op_rw(d->udev, 0x91, 0, 0, @@ -668,8 +652,7 @@ static int s6x0_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], if (2 + msg[j].len > sizeof(obuf)) { warn("i2c wr: len=%d is too big!\n", msg[j].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[j + 1].len; @@ -688,8 +671,7 @@ static int s6x0_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], if (2 + msg[j].len > sizeof(obuf)) { warn("i2c wr: len=%d is too big!\n", msg[j].len); - ret = -EOPNOTSUPP; - goto unlock; + return -EOPNOTSUPP; } obuf[0] = msg[j].len + 1; obuf[1] = (msg[j].addr << 1); @@ -703,11 +685,9 @@ static int s6x0_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], } } } - ret = num; -unlock: mutex_unlock(&d->i2c_mutex); - return ret; + return num; } static int su3000_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[], diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c index c4d669648dc..b22f8fed812 100644 --- a/drivers/media/usb/em28xx/em28xx-dvb.c +++ b/drivers/media/usb/em28xx/em28xx-dvb.c @@ -673,8 +673,7 @@ static void pctv_520e_init(struct em28xx *dev) static int em28xx_pctv_290e_set_lna(struct dvb_frontend *fe) { struct dtv_frontend_properties *c = &fe->dtv_property_cache; - struct em28xx_i2c_bus *i2c_bus = fe->dvb->priv; - struct em28xx *dev = i2c_bus->dev; + struct em28xx *dev = fe->dvb->priv; #ifdef CONFIG_GPIOLIB struct em28xx_dvb *dvb = dev->dvb; int ret; diff --git a/drivers/media/usb/em28xx/em28xx-video.c 
b/drivers/media/usb/em28xx/em28xx-video.c index a2737b4b090..32d60e5546b 100644 --- a/drivers/media/usb/em28xx/em28xx-video.c +++ b/drivers/media/usb/em28xx/em28xx-video.c @@ -696,16 +696,13 @@ static int em28xx_stop_streaming(struct vb2_queue *vq) } spin_lock_irqsave(&dev->slock, flags); - if (dev->usb_ctl.vid_buf != NULL) { - vb2_buffer_done(&dev->usb_ctl.vid_buf->vb, VB2_BUF_STATE_ERROR); - dev->usb_ctl.vid_buf = NULL; - } while (!list_empty(&vidq->active)) { struct em28xx_buffer *buf; buf = list_entry(vidq->active.next, struct em28xx_buffer, list); list_del(&buf->list); vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR); } + dev->usb_ctl.vid_buf = NULL; spin_unlock_irqrestore(&dev->slock, flags); return 0; @@ -727,16 +724,13 @@ int em28xx_stop_vbi_streaming(struct vb2_queue *vq) } spin_lock_irqsave(&dev->slock, flags); - if (dev->usb_ctl.vbi_buf != NULL) { - vb2_buffer_done(&dev->usb_ctl.vbi_buf->vb, VB2_BUF_STATE_ERROR); - dev->usb_ctl.vbi_buf = NULL; - } while (!list_empty(&vbiq->active)) { struct em28xx_buffer *buf; buf = list_entry(vbiq->active.next, struct em28xx_buffer, list); list_del(&buf->list); vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR); } + dev->usb_ctl.vbi_buf = NULL; spin_unlock_irqrestore(&dev->slock, flags); return 0; diff --git a/drivers/media/usb/gspca/pac7302.c b/drivers/media/usb/gspca/pac7302.c index 20d9c15a305..6008c8d546a 100644 --- a/drivers/media/usb/gspca/pac7302.c +++ b/drivers/media/usb/gspca/pac7302.c @@ -945,7 +945,6 @@ static const struct usb_device_id device_table[] = { {USB_DEVICE(0x093a, 0x2620)}, {USB_DEVICE(0x093a, 0x2621)}, {USB_DEVICE(0x093a, 0x2622), .driver_info = FL_VFLIP}, - {USB_DEVICE(0x093a, 0x2623), .driver_info = FL_VFLIP}, {USB_DEVICE(0x093a, 0x2624), .driver_info = FL_VFLIP}, {USB_DEVICE(0x093a, 0x2625)}, {USB_DEVICE(0x093a, 0x2626)}, diff --git a/drivers/media/usb/gspca/sn9c20x.c b/drivers/media/usb/gspca/sn9c20x.c index 8b59e5d37ba..ead9a1f5851 100644 --- a/drivers/media/usb/gspca/sn9c20x.c +++ b/drivers/media/usb/gspca/sn9c20x.c @@ -2394,7 +2394,6 @@ static const struct usb_device_id device_table[] = { {USB_DEVICE(0x045e, 0x00f4), SN9C20X(OV9650, 0x30, 0)}, {USB_DEVICE(0x145f, 0x013d), SN9C20X(OV7660, 0x21, 0)}, {USB_DEVICE(0x0458, 0x7029), SN9C20X(HV7131R, 0x11, 0)}, - {USB_DEVICE(0x0458, 0x7045), SN9C20X(MT9M112, 0x5d, LED_REVERSE)}, {USB_DEVICE(0x0458, 0x704a), SN9C20X(MT9M112, 0x5d, 0)}, {USB_DEVICE(0x0458, 0x704c), SN9C20X(MT9M112, 0x5d, 0)}, {USB_DEVICE(0xa168, 0x0610), SN9C20X(HV7131R, 0x11, 0)}, diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c index eed70a4d24e..774ba0e820b 100644 --- a/drivers/media/usb/hdpvr/hdpvr-video.c +++ b/drivers/media/usb/hdpvr/hdpvr-video.c @@ -81,7 +81,7 @@ static void hdpvr_read_bulk_callback(struct urb *urb) } /*=========================================================================*/ -/* buffer bits */ +/* bufffer bits */ /* function expects dev->io_mutex to be hold by caller */ int hdpvr_cancel_queue(struct hdpvr_device *dev) @@ -921,7 +921,7 @@ static int hdpvr_s_ctrl(struct v4l2_ctrl *ctrl) case V4L2_CID_MPEG_AUDIO_ENCODING: if (dev->flags & HDPVR_FLAG_AC3_CAP) { opt->audio_codec = ctrl->val; - return hdpvr_set_audio(dev, opt->audio_input + 1, + return hdpvr_set_audio(dev, opt->audio_input, opt->audio_codec); } return 0; @@ -1191,7 +1191,7 @@ int hdpvr_register_videodev(struct hdpvr_device *dev, struct device *parent, v4l2_ctrl_new_std_menu(hdl, &hdpvr_ctrl_ops, V4L2_CID_MPEG_AUDIO_ENCODING, ac3 ? 
V4L2_MPEG_AUDIO_ENCODING_AC3 : V4L2_MPEG_AUDIO_ENCODING_AAC, - 0x7, ac3 ? dev->options.audio_codec : V4L2_MPEG_AUDIO_ENCODING_AAC); + 0x7, V4L2_MPEG_AUDIO_ENCODING_AAC); v4l2_ctrl_new_std_menu(hdl, &hdpvr_ctrl_ops, V4L2_CID_MPEG_VIDEO_ENCODING, V4L2_MPEG_VIDEO_ENCODING_MPEG_4_AVC, 0x3, diff --git a/drivers/media/usb/stk1160/stk1160-core.c b/drivers/media/usb/stk1160/stk1160-core.c index 03504dcf3c5..34a26e0cfe7 100644 --- a/drivers/media/usb/stk1160/stk1160-core.c +++ b/drivers/media/usb/stk1160/stk1160-core.c @@ -67,25 +67,17 @@ int stk1160_read_reg(struct stk1160 *dev, u16 reg, u8 *value) { int ret; int pipe = usb_rcvctrlpipe(dev->udev, 0); - u8 *buf; *value = 0; - - buf = kmalloc(sizeof(u8), GFP_KERNEL); - if (!buf) - return -ENOMEM; ret = usb_control_msg(dev->udev, pipe, 0x00, USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, - 0x00, reg, buf, sizeof(u8), HZ); + 0x00, reg, value, sizeof(u8), HZ); if (ret < 0) { stk1160_err("read failed on reg 0x%x (%d)\n", reg, ret); - kfree(buf); return ret; } - *value = *buf; - kfree(buf); return 0; } diff --git a/drivers/media/usb/stk1160/stk1160.h b/drivers/media/usb/stk1160/stk1160.h index abdea484c99..05b05b160e1 100644 --- a/drivers/media/usb/stk1160/stk1160.h +++ b/drivers/media/usb/stk1160/stk1160.h @@ -143,6 +143,7 @@ struct stk1160 { int num_alt; struct stk1160_isoc_ctl isoc_ctl; + char urb_buf[255]; /* urb control msg buffer */ /* frame properties */ int width; /* current frame width */ diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c index c081812ac5c..3394c343201 100644 --- a/drivers/media/usb/uvc/uvc_video.c +++ b/drivers/media/usb/uvc/uvc_video.c @@ -361,14 +361,6 @@ static int uvc_commit_video(struct uvc_streaming *stream, * Clocks and timestamps */ -static inline void uvc_video_get_ts(struct timespec *ts) -{ - if (uvc_clock_param == CLOCK_MONOTONIC) - ktime_get_ts(ts); - else - ktime_get_real_ts(ts); -} - static void uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf, const __u8 *data, int len) @@ -428,7 +420,7 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf, stream->clock.last_sof = dev_sof; host_sof = usb_get_current_frame_number(stream->dev->udev); - uvc_video_get_ts(&ts); + ktime_get_ts(&ts); /* The UVC specification allows device implementations that can't obtain * the USB frame number to keep their own frame counters as long as they @@ -1018,7 +1010,10 @@ static int uvc_video_decode_start(struct uvc_streaming *stream, return -ENODATA; } - uvc_video_get_ts(&ts); + if (uvc_clock_param == CLOCK_MONOTONIC) + ktime_get_ts(&ts); + else + ktime_get_real_ts(&ts); buf->buf.v4l2_buf.sequence = stream->sequence; buf->buf.v4l2_buf.timestamp.tv_sec = ts.tv_sec; @@ -1851,25 +1846,7 @@ int uvc_video_enable(struct uvc_streaming *stream, int enable) if (!enable) { uvc_uninit_video(stream, 1); - if (stream->intf->num_altsetting > 1) { - usb_set_interface(stream->dev->udev, - stream->intfnum, 0); - } else { - /* UVC doesn't specify how to inform a bulk-based device - * when the video stream is stopped. Windows sends a - * CLEAR_FEATURE(HALT) request to the video streaming - * bulk endpoint, mimic the same behaviour. 
- */ - unsigned int epnum = stream->header.bEndpointAddress - & USB_ENDPOINT_NUMBER_MASK; - unsigned int dir = stream->header.bEndpointAddress - & USB_ENDPOINT_DIR_MASK; - unsigned int pipe; - - pipe = usb_sndbulkpipe(stream->dev->udev, epnum) | dir; - usb_clear_halt(stream->dev->udev, pipe); - } - + usb_set_interface(stream->dev->udev, stream->intfnum, 0); uvc_queue_enable(&stream->queue, 0); uvc_video_clock_cleanup(stream); return 0; diff --git a/drivers/media/v4l2-core/v4l2-common.c b/drivers/media/v4l2-core/v4l2-common.c index ec9a4fa3bc8..3fed63f4e02 100644 --- a/drivers/media/v4l2-core/v4l2-common.c +++ b/drivers/media/v4l2-core/v4l2-common.c @@ -485,13 +485,16 @@ static unsigned int clamp_align(unsigned int x, unsigned int min, /* Bits that must be zero to be aligned */ unsigned int mask = ~((1 << align) - 1); - /* Clamp to aligned min and max */ - x = clamp(x, (min + ~mask) & mask, max & mask); - /* Round to nearest aligned value */ if (align) x = (x + (1 << (align - 1))) & mask; + /* Clamp to aligned value of min and max */ + if (x < min) + x = (min + ~mask) & mask; + else if (x > max) + x = max & mask; + return x; } diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c index e2b0a0969eb..f1295519f28 100644 --- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c +++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c @@ -178,9 +178,6 @@ struct v4l2_create_buffers32 { static int __get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up) { - if (get_user(kp->type, &up->type)) - return -EFAULT; - switch (kp->type) { case V4L2_BUF_TYPE_VIDEO_CAPTURE: case V4L2_BUF_TYPE_VIDEO_OUTPUT: @@ -207,16 +204,17 @@ static int __get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __us static int get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up) { - if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_format32))) - return -EFAULT; + if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_format32)) || + get_user(kp->type, &up->type)) + return -EFAULT; return __get_v4l2_format32(kp, up); } static int get_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up) { if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_create_buffers32)) || - copy_from_user(kp, up, offsetof(struct v4l2_create_buffers32, format))) - return -EFAULT; + copy_from_user(kp, up, offsetof(struct v4l2_create_buffers32, format.fmt))) + return -EFAULT; return __get_v4l2_format32(&kp->format, &up->format); } @@ -789,8 +787,8 @@ static int put_v4l2_subdev_edid32(struct v4l2_subdev_edid *kp, struct v4l2_subde #define VIDIOC_DQBUF32 _IOWR('V', 17, struct v4l2_buffer32) #define VIDIOC_ENUMSTD32 _IOWR('V', 25, struct v4l2_standard32) #define VIDIOC_ENUMINPUT32 _IOWR('V', 26, struct v4l2_input32) -#define VIDIOC_SUBDEV_G_EDID32 _IOWR('V', 40, struct v4l2_subdev_edid32) -#define VIDIOC_SUBDEV_S_EDID32 _IOWR('V', 41, struct v4l2_subdev_edid32) +#define VIDIOC_SUBDEV_G_EDID32 _IOWR('V', 63, struct v4l2_subdev_edid32) +#define VIDIOC_SUBDEV_S_EDID32 _IOWR('V', 64, struct v4l2_subdev_edid32) #define VIDIOC_TRY_FMT32 _IOWR('V', 64, struct v4l2_format32) #define VIDIOC_G_EXT_CTRLS32 _IOWR('V', 71, struct v4l2_ext_controls32) #define VIDIOC_S_EXT_CTRLS32 _IOWR('V', 72, struct v4l2_ext_controls32) diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c index 5e47ba479e5..e3bdc3be91e 100644 --- a/drivers/media/v4l2-core/videobuf2-core.c +++ b/drivers/media/v4l2-core/videobuf2-core.c @@ 
-666,7 +666,6 @@ static int __reqbufs(struct vb2_queue *q, struct v4l2_requestbuffers *req) * to the userspace. */ req->count = allocated_buffers; - q->waiting_for_buffers = !V4L2_TYPE_IS_OUTPUT(q->type); return 0; } @@ -715,7 +714,6 @@ static int __create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create memset(q->plane_sizes, 0, sizeof(q->plane_sizes)); memset(q->alloc_ctx, 0, sizeof(q->alloc_ctx)); q->memory = create->memory; - q->waiting_for_buffers = !V4L2_TYPE_IS_OUTPUT(q->type); } num_buffers = min(create->count, VIDEO_MAX_FRAME - q->num_buffers); @@ -1357,7 +1355,6 @@ int vb2_qbuf(struct vb2_queue *q, struct v4l2_buffer *b) * dequeued in dqbuf. */ list_add_tail(&vb->queued_entry, &q->queued_list); - q->waiting_for_buffers = false; vb->state = VB2_BUF_STATE_QUEUED; /* @@ -1727,7 +1724,6 @@ int vb2_streamoff(struct vb2_queue *q, enum v4l2_buf_type type) * and videobuf, effectively returning control over them to userspace. */ __vb2_queue_cancel(q); - q->waiting_for_buffers = !V4L2_TYPE_IS_OUTPUT(q->type); dprintk(3, "Streamoff successful\n"); return 0; @@ -2013,16 +2009,9 @@ unsigned int vb2_poll(struct vb2_queue *q, struct file *file, poll_table *wait) } /* - * There is nothing to wait for if the queue isn't streaming. + * There is nothing to wait for if no buffers have already been queued. */ - if (!vb2_is_streaming(q)) - return res | POLLERR; - /* - * For compatibility with vb1: if QBUF hasn't been called yet, then - * return POLLERR as well. This only affects capture queues, output - * queues will always initialize waiting_for_buffers to false. - */ - if (q->waiting_for_buffers) + if (list_empty(&q->queued_list)) return res | POLLERR; if (list_empty(&q->done_list)) diff --git a/drivers/message/fusion/mptspi.c b/drivers/message/fusion/mptspi.c index 424f51d1e2c..5653e505f91 100644 --- a/drivers/message/fusion/mptspi.c +++ b/drivers/message/fusion/mptspi.c @@ -1422,11 +1422,6 @@ mptspi_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto out_mptspi_probe; } - /* VMWare emulation doesn't properly implement WRITE_SAME - */ - if (pdev->subsystem_vendor == 0x15AD) - sh->no_write_same = 1; - spin_lock_irqsave(&ioc->FreeQlock, flags); /* Attach the SCSI Host to the IOC structure diff --git a/drivers/mfd/88pm860x-core.c b/drivers/mfd/88pm860x-core.c index 30cf7eef2a8..31ca55548ef 100644 --- a/drivers/mfd/88pm860x-core.c +++ b/drivers/mfd/88pm860x-core.c @@ -1179,18 +1179,12 @@ static int pm860x_probe(struct i2c_client *client, chip->companion_addr = pdata->companion_addr; chip->companion = i2c_new_dummy(chip->client->adapter, chip->companion_addr); - if (!chip->companion) { - dev_err(&client->dev, - "Failed to allocate I2C companion device\n"); - return -ENODEV; - } chip->regmap_companion = regmap_init_i2c(chip->companion, &pm860x_regmap_config); if (IS_ERR(chip->regmap_companion)) { ret = PTR_ERR(chip->regmap_companion); dev_err(&chip->companion->dev, "Failed to allocate register map: %d\n", ret); - i2c_unregister_device(chip->companion); return ret; } i2c_set_clientdata(chip->companion, chip); diff --git a/drivers/mfd/max77686.c b/drivers/mfd/max77686.c index 1b6f45a1410..8290c238239 100644 --- a/drivers/mfd/max77686.c +++ b/drivers/mfd/max77686.c @@ -121,10 +121,6 @@ static int max77686_i2c_probe(struct i2c_client *i2c, dev_info(max77686->dev, "device found\n"); max77686->rtc = i2c_new_dummy(i2c->adapter, I2C_ADDR_RTC); - if (!max77686->rtc) { - dev_err(max77686->dev, "Failed to allocate I2C device for RTC\n"); - return -ENODEV; - } 
i2c_set_clientdata(max77686->rtc, max77686); max77686_irq_init(max77686); diff --git a/drivers/mfd/max77693.c b/drivers/mfd/max77693.c index 299970f9958..9e60fed5ff8 100644 --- a/drivers/mfd/max77693.c +++ b/drivers/mfd/max77693.c @@ -149,18 +149,9 @@ static int max77693_i2c_probe(struct i2c_client *i2c, dev_info(max77693->dev, "device ID: 0x%x\n", reg_data); max77693->muic = i2c_new_dummy(i2c->adapter, I2C_ADDR_MUIC); - if (!max77693->muic) { - dev_err(max77693->dev, "Failed to allocate I2C device for MUIC\n"); - return -ENODEV; - } i2c_set_clientdata(max77693->muic, max77693); max77693->haptic = i2c_new_dummy(i2c->adapter, I2C_ADDR_HAPTIC); - if (!max77693->haptic) { - dev_err(max77693->dev, "Failed to allocate I2C device for Haptic\n"); - ret = -ENODEV; - goto err_i2c_haptic; - } i2c_set_clientdata(max77693->haptic, max77693); /* @@ -196,9 +187,8 @@ err_mfd: max77693_irq_exit(max77693); err_irq: err_regmap_muic: - i2c_unregister_device(max77693->haptic); -err_i2c_haptic: i2c_unregister_device(max77693->muic); + i2c_unregister_device(max77693->haptic); return ret; } diff --git a/drivers/mfd/max8925-i2c.c b/drivers/mfd/max8925-i2c.c index c94d3337bdf..92bbebd3159 100644 --- a/drivers/mfd/max8925-i2c.c +++ b/drivers/mfd/max8925-i2c.c @@ -180,18 +180,9 @@ static int max8925_probe(struct i2c_client *client, mutex_init(&chip->io_lock); chip->rtc = i2c_new_dummy(chip->i2c->adapter, RTC_I2C_ADDR); - if (!chip->rtc) { - dev_err(chip->dev, "Failed to allocate I2C device for RTC\n"); - return -ENODEV; - } i2c_set_clientdata(chip->rtc, chip); chip->adc = i2c_new_dummy(chip->i2c->adapter, ADC_I2C_ADDR); - if (!chip->adc) { - dev_err(chip->dev, "Failed to allocate I2C device for ADC\n"); - i2c_unregister_device(chip->rtc); - return -ENODEV; - } i2c_set_clientdata(chip->adc, chip); device_init_wakeup(&client->dev, 1); diff --git a/drivers/mfd/max8997.c b/drivers/mfd/max8997.c index ea1defbcf2c..14714058f2d 100644 --- a/drivers/mfd/max8997.c +++ b/drivers/mfd/max8997.c @@ -218,26 +218,10 @@ static int max8997_i2c_probe(struct i2c_client *i2c, mutex_init(&max8997->iolock); max8997->rtc = i2c_new_dummy(i2c->adapter, I2C_ADDR_RTC); - if (!max8997->rtc) { - dev_err(max8997->dev, "Failed to allocate I2C device for RTC\n"); - return -ENODEV; - } i2c_set_clientdata(max8997->rtc, max8997); - max8997->haptic = i2c_new_dummy(i2c->adapter, I2C_ADDR_HAPTIC); - if (!max8997->haptic) { - dev_err(max8997->dev, "Failed to allocate I2C device for Haptic\n"); - ret = -ENODEV; - goto err_i2c_haptic; - } i2c_set_clientdata(max8997->haptic, max8997); - max8997->muic = i2c_new_dummy(i2c->adapter, I2C_ADDR_MUIC); - if (!max8997->muic) { - dev_err(max8997->dev, "Failed to allocate I2C device for MUIC\n"); - ret = -ENODEV; - goto err_i2c_muic; - } i2c_set_clientdata(max8997->muic, max8997); pm_runtime_set_active(max8997->dev); @@ -264,9 +248,7 @@ static int max8997_i2c_probe(struct i2c_client *i2c, err_mfd: mfd_remove_devices(max8997->dev); i2c_unregister_device(max8997->muic); -err_i2c_muic: i2c_unregister_device(max8997->haptic); -err_i2c_haptic: i2c_unregister_device(max8997->rtc); err: kfree(max8997); diff --git a/drivers/mfd/max8998.c b/drivers/mfd/max8998.c index 8381a76c69c..d7218cc9094 100644 --- a/drivers/mfd/max8998.c +++ b/drivers/mfd/max8998.c @@ -152,10 +152,6 @@ static int max8998_i2c_probe(struct i2c_client *i2c, mutex_init(&max8998->iolock); max8998->rtc = i2c_new_dummy(i2c->adapter, RTC_I2C_ADDR); - if (!max8998->rtc) { - dev_err(&i2c->dev, "Failed to allocate I2C device for RTC\n"); - return -ENODEV; - } 
i2c_set_clientdata(max8998->rtc, max8998); max8998_irq_init(max8998); diff --git a/drivers/mfd/omap-usb-host.c b/drivers/mfd/omap-usb-host.c index a36f3f282ae..759fae3ca7f 100644 --- a/drivers/mfd/omap-usb-host.c +++ b/drivers/mfd/omap-usb-host.c @@ -445,7 +445,7 @@ static unsigned omap_usbhs_rev1_hostconfig(struct usbhs_hcd_omap *omap, for (i = 0; i < omap->nports; i++) { if (is_ehci_phy_mode(pdata->port_mode[i])) { - reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_BYPASS; + reg &= OMAP_UHH_HOSTCONFIG_ULPI_BYPASS; break; } } diff --git a/drivers/mfd/rtsx_pcr.c b/drivers/mfd/rtsx_pcr.c index 7e28bd0de55..45f26be359e 100644 --- a/drivers/mfd/rtsx_pcr.c +++ b/drivers/mfd/rtsx_pcr.c @@ -1137,7 +1137,7 @@ static int rtsx_pci_probe(struct pci_dev *pcidev, pcr->msi_en = msi_en; if (pcr->msi_en) { ret = pci_enable_msi(pcidev); - if (ret) + if (ret < 0) pcr->msi_en = false; } diff --git a/drivers/mfd/sec-core.c b/drivers/mfd/sec-core.c index 81cfe8817fe..77ee26ef594 100644 --- a/drivers/mfd/sec-core.c +++ b/drivers/mfd/sec-core.c @@ -199,10 +199,6 @@ static int sec_pmic_probe(struct i2c_client *i2c, } sec_pmic->rtc = i2c_new_dummy(i2c->adapter, RTC_I2C_ADDR); - if (!sec_pmic->rtc) { - dev_err(&i2c->dev, "Failed to allocate I2C for RTC\n"); - return -ENODEV; - } i2c_set_clientdata(sec_pmic->rtc, sec_pmic); if (pdata && pdata->cfg_pmic_irq) diff --git a/drivers/mfd/tps65910.c b/drivers/mfd/tps65910.c index de87eafbeb0..d7927720483 100644 --- a/drivers/mfd/tps65910.c +++ b/drivers/mfd/tps65910.c @@ -254,10 +254,8 @@ static int tps65910_irq_init(struct tps65910 *tps65910, int irq, ret = regmap_add_irq_chip(tps65910->regmap, tps65910->chip_irq, IRQF_ONESHOT, pdata->irq_base, tps6591x_irqs_chip, &tps65910->irq_data); - if (ret < 0) { + if (ret < 0) dev_warn(tps65910->dev, "Failed to add irq_chip %d\n", ret); - tps65910->chip_irq = 0; - } return ret; } diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c index 07ed4b5b165..0bb2aa2c6fb 100644 --- a/drivers/misc/mei/client.c +++ b/drivers/misc/mei/client.c @@ -405,7 +405,6 @@ int mei_cl_disconnect(struct mei_cl *cl) dev_err(&dev->pdev->dev, "failed to disconnect.\n"); goto free; } - cl->timer_count = MEI_CONNECT_TIMEOUT; mdelay(10); /* Wait for hardware disconnection ready */ list_add_tail(&cb->list, &dev->ctrl_rd_list.list); } else { @@ -512,7 +511,6 @@ int mei_cl_connect(struct mei_cl *cl, struct file *file) cl->timer_count = MEI_CONNECT_TIMEOUT; list_add_tail(&cb->list, &dev->ctrl_rd_list.list); } else { - cl->state = MEI_FILE_INITIALIZING; list_add_tail(&cb->list, &dev->ctrl_wr_list.list); } diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h index cabc0438368..66f411a6e8e 100644 --- a/drivers/misc/mei/hw-me-regs.h +++ b/drivers/misc/mei/hw-me-regs.h @@ -115,11 +115,6 @@ #define MEI_DEV_ID_LPT_HR 0x8CBA /* Lynx Point H Refresh */ #define MEI_DEV_ID_WPT_LP 0x9CBA /* Wildcat Point LP */ - -/* Host Firmware Status Registers in PCI Config Space */ -#define PCI_CFG_HFS_1 0x40 -#define PCI_CFG_HFS_2 0x48 - /* * MEI HW Section */ diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c index 297cc10a26d..1bf3f8b5ce3 100644 --- a/drivers/misc/mei/hw-me.c +++ b/drivers/misc/mei/hw-me.c @@ -164,9 +164,6 @@ static void mei_me_hw_reset_release(struct mei_device *dev) hcsr |= H_IG; hcsr &= ~H_RST; mei_hcsr_set(hw, hcsr); - - /* complete this write before we set host ready on another CPU */ - mmiowb(); } /** * mei_me_hw_reset - resets fw via mei csr register. 
@@ -186,22 +183,9 @@ static void mei_me_hw_reset(struct mei_device *dev, bool intr_enable) else hcsr &= ~H_IE; - dev->recvd_hw_ready = false; mei_me_reg_write(hw, H_CSR, hcsr); - /* - * Host reads the H_CSR once to ensure that the - * posted write to H_CSR completes. - */ - hcsr = mei_hcsr_read(hw); - - if ((hcsr & H_RST) == 0) - dev_warn(&dev->pdev->dev, "H_RST is not set = 0x%08X", hcsr); - - if ((hcsr & H_RDY) == H_RDY) - dev_warn(&dev->pdev->dev, "H_RDY is not cleared 0x%08X", hcsr); - - if (intr_enable == false) + if (dev->dev_state == MEI_DEV_POWER_DOWN) mei_me_hw_reset_release(dev); dev_dbg(&dev->pdev->dev, "current HCSR = 0x%08x.\n", mei_hcsr_read(hw)); @@ -217,7 +201,6 @@ static void mei_me_hw_reset(struct mei_device *dev, bool intr_enable) static void mei_me_host_set_ready(struct mei_device *dev) { struct mei_me_hw *hw = to_me_hw(dev); - hw->host_hw_state = mei_hcsr_read(hw); hw->host_hw_state |= H_IE | H_IG | H_RDY; mei_hcsr_set(hw, hw->host_hw_state); } @@ -250,7 +233,10 @@ static bool mei_me_hw_is_ready(struct mei_device *dev) static int mei_me_hw_ready_wait(struct mei_device *dev) { int err; + if (mei_me_hw_is_ready(dev)) + return 0; + dev->recvd_hw_ready = false; mutex_unlock(&dev->device_lock); err = wait_event_interruptible_timeout(dev->wait_hw_ready, dev->recvd_hw_ready, @@ -510,15 +496,19 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id) /* check if we need to start the dev */ if (!mei_host_is_ready(dev)) { if (mei_hw_is_ready(dev)) { - mei_me_hw_reset_release(dev); dev_dbg(&dev->pdev->dev, "we need to start the dev.\n"); dev->recvd_hw_ready = true; wake_up_interruptible(&dev->wait_hw_ready); + + mutex_unlock(&dev->device_lock); + return IRQ_HANDLED; } else { - dev_dbg(&dev->pdev->dev, "Spurious Interrupt\n"); + dev_dbg(&dev->pdev->dev, "Reset Completed.\n"); + mei_me_hw_reset_release(dev); + mutex_unlock(&dev->device_lock); + return IRQ_HANDLED; } - goto end; } /* check slots available for reading */ slots = mei_count_full_read_slots(dev); diff --git a/drivers/misc/mei/nfc.c b/drivers/misc/mei/nfc.c index 4b7ea3fb143..994ca4aff1a 100644 --- a/drivers/misc/mei/nfc.c +++ b/drivers/misc/mei/nfc.c @@ -342,10 +342,9 @@ static int mei_nfc_send(struct mei_cl_device *cldev, u8 *buf, size_t length) ndev = (struct mei_nfc_dev *) cldev->priv_data; dev = ndev->cl->dev; - err = -ENOMEM; mei_buf = kzalloc(length + MEI_NFC_HEADER_SIZE, GFP_KERNEL); if (!mei_buf) - goto out; + return -ENOMEM; hdr = (struct mei_nfc_hci_hdr *) mei_buf; hdr->cmd = MEI_NFC_CMD_HCI_SEND; @@ -355,9 +354,12 @@ static int mei_nfc_send(struct mei_cl_device *cldev, u8 *buf, size_t length) hdr->data_size = length; memcpy(mei_buf + MEI_NFC_HEADER_SIZE, buf, length); + err = __mei_cl_send(ndev->cl, mei_buf, length + MEI_NFC_HEADER_SIZE); if (err < 0) - goto out; + return err; + + kfree(mei_buf); if (!wait_event_interruptible_timeout(ndev->send_wq, ndev->recv_req_id == ndev->req_id, HZ)) { @@ -366,8 +368,7 @@ static int mei_nfc_send(struct mei_cl_device *cldev, u8 *buf, size_t length) } else { ndev->req_id++; } -out: - kfree(mei_buf); + return err; } diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c index 3c9e257982e..371c65ae6be 100644 --- a/drivers/misc/mei/pci-me.c +++ b/drivers/misc/mei/pci-me.c @@ -105,31 +105,15 @@ static bool mei_me_quirk_probe(struct pci_dev *pdev, const struct pci_device_id *ent) { u32 reg; - /* Cougar Point || Patsburg */ - if (ent->device == MEI_DEV_ID_CPT_1 || - ent->device == MEI_DEV_ID_PBG_1) { - pci_read_config_dword(pdev, PCI_CFG_HFS_2, ®); - /* make 
sure that bit 9 (NM) is up and bit 10 (DM) is down */ - if ((reg & 0x600) == 0x200) - goto no_mei; + if (ent->device == MEI_DEV_ID_PBG_1) { + pci_read_config_dword(pdev, 0x48, ®); + /* make sure that bit 9 is up and bit 10 is down */ + if ((reg & 0x600) == 0x200) { + dev_info(&pdev->dev, "Device doesn't have valid ME Interface\n"); + return false; + } } - - /* Lynx Point */ - if (ent->device == MEI_DEV_ID_LPT_H || - ent->device == MEI_DEV_ID_LPT_W || - ent->device == MEI_DEV_ID_LPT_HR) { - /* Read ME FW Status check for SPS Firmware */ - pci_read_config_dword(pdev, PCI_CFG_HFS_1, ®); - /* if bits [19:16] = 15, running SPS Firmware */ - if ((reg & 0xf0000) == 0xf0000) - goto no_mei; - } - return true; - -no_mei: - dev_info(&pdev->dev, "Device doesn't have valid ME Interface\n"); - return false; } /** * mei_probe - Device Initialization Routine diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c index 4c65a5a4d8f..ad13f4240c4 100644 --- a/drivers/mmc/host/rtsx_pci_sdmmc.c +++ b/drivers/mmc/host/rtsx_pci_sdmmc.c @@ -247,9 +247,6 @@ static void sd_send_cmd_get_rsp(struct realtek_pci_sdmmc *host, case MMC_RSP_R1: rsp_type = SD_RSP_TYPE_R1; break; - case MMC_RSP_R1 & ~MMC_RSP_CRC: - rsp_type = SD_RSP_TYPE_R1 | SD_NO_CHECK_CRC7; - break; case MMC_RSP_R1B: rsp_type = SD_RSP_TYPE_R1b; break; @@ -341,13 +338,6 @@ static void sd_send_cmd_get_rsp(struct realtek_pci_sdmmc *host, } if (rsp_type == SD_RSP_TYPE_R2) { - /* - * The controller offloads the last byte {CRC-7, end bit 1'b1} - * of response type R2. Assign dummy CRC, 0, and end bit to the - * byte(ptr[16], goes into the LSB of resp[3] later). - */ - ptr[16] = 1; - for (i = 0; i < 4; i++) { cmd->resp[i] = get_unaligned_be32(ptr + 1 + i * 4); dev_dbg(sdmmc_dev(host), "cmd->resp[%d] = 0x%08x\n", diff --git a/drivers/mtd/ftl.c b/drivers/mtd/ftl.c index 71e4f6ccae2..19d637266fc 100644 --- a/drivers/mtd/ftl.c +++ b/drivers/mtd/ftl.c @@ -1075,6 +1075,7 @@ static void ftl_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) return; } + ftl_freepart(partition); kfree(partition); } diff --git a/drivers/mtd/nand/atmel_nand.c b/drivers/mtd/nand/atmel_nand.c index cc69e415df3..2d23d292943 100644 --- a/drivers/mtd/nand/atmel_nand.c +++ b/drivers/mtd/nand/atmel_nand.c @@ -1096,7 +1096,6 @@ static int __init atmel_pmecc_nand_init_params(struct platform_device *pdev, goto err_pmecc_data_alloc; } - nand_chip->options |= NAND_NO_SUBPAGE_WRITE; nand_chip->ecc.read_page = atmel_nand_pmecc_read_page; nand_chip->ecc.write_page = atmel_nand_pmecc_write_page; diff --git a/drivers/mtd/nand/fsl_elbc_nand.c b/drivers/mtd/nand/fsl_elbc_nand.c index c31d183820c..20657209a47 100644 --- a/drivers/mtd/nand/fsl_elbc_nand.c +++ b/drivers/mtd/nand/fsl_elbc_nand.c @@ -725,19 +725,6 @@ static int fsl_elbc_write_page(struct mtd_info *mtd, struct nand_chip *chip, return 0; } -/* ECC will be calculated automatically, and errors will be detected in - * waitfunc. 
- */ -static int fsl_elbc_write_subpage(struct mtd_info *mtd, struct nand_chip *chip, - uint32_t offset, uint32_t data_len, - const uint8_t *buf, int oob_required) -{ - fsl_elbc_write_buf(mtd, buf, mtd->writesize); - fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize); - - return 0; -} - static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv) { struct fsl_lbc_ctrl *ctrl = priv->ctrl; @@ -776,7 +763,6 @@ static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv) chip->ecc.read_page = fsl_elbc_read_page; chip->ecc.write_page = fsl_elbc_write_page; - chip->ecc.write_subpage = fsl_elbc_write_subpage; /* If CS Base Register selects full hardware ECC then use it */ if ((in_be32(&lbc->bank[priv->bank].br) & BR_DECC) == diff --git a/drivers/mtd/nand/nuc900_nand.c b/drivers/mtd/nand/nuc900_nand.c index 14203f3bb0c..cd6be2ed53a 100644 --- a/drivers/mtd/nand/nuc900_nand.c +++ b/drivers/mtd/nand/nuc900_nand.c @@ -225,7 +225,7 @@ static void nuc900_nand_enable(struct nuc900_nand *nand) val = __raw_readl(nand->reg + REG_FMICSR); if (!(val & NAND_EN)) - __raw_writel(val | NAND_EN, nand->reg + REG_FMICSR); + __raw_writel(val | NAND_EN, REG_FMICSR); val = __raw_readl(nand->reg + REG_SMCSR); diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c index e9b1797cdb5..81b80af5587 100644 --- a/drivers/mtd/nand/omap2.c +++ b/drivers/mtd/nand/omap2.c @@ -948,7 +948,7 @@ static int omap_calculate_ecc(struct mtd_info *mtd, const u_char *dat, u32 val; val = readl(info->reg.gpmc_ecc_config); - if (((val >> ECC_CONFIG_CS_SHIFT) & CS_MASK) != info->gpmc_cs) + if (((val >> ECC_CONFIG_CS_SHIFT) & ~CS_MASK) != info->gpmc_cs) return -EINVAL; /* read ecc result */ @@ -1463,7 +1463,7 @@ static int omap_elm_correct_data(struct mtd_info *mtd, u_char *data, /* Check if any error reported */ if (!is_error_reported) - return stat; + return 0; /* Decode BCH error using ELM module */ elm_decode_bch_error_page(info->elm_dev, ecc_vec, err_vec); diff --git a/drivers/mtd/sm_ftl.c b/drivers/mtd/sm_ftl.c index 4b55cd45287..f9d5615c572 100644 --- a/drivers/mtd/sm_ftl.c +++ b/drivers/mtd/sm_ftl.c @@ -59,12 +59,15 @@ struct attribute_group *sm_create_sysfs_attributes(struct sm_ftl *ftl) struct attribute_group *attr_group; struct attribute **attributes; struct sm_sysfs_attribute *vendor_attribute; - char *vendor; - vendor = kstrndup(ftl->cis_buffer + SM_CIS_VENDOR_OFFSET, - SM_SMALL_PAGE - SM_CIS_VENDOR_OFFSET, GFP_KERNEL); + int vendor_len = strnlen(ftl->cis_buffer + SM_CIS_VENDOR_OFFSET, + SM_SMALL_PAGE - SM_CIS_VENDOR_OFFSET); + + char *vendor = kmalloc(vendor_len, GFP_KERNEL); if (!vendor) goto error1; + memcpy(vendor, ftl->cis_buffer + SM_CIS_VENDOR_OFFSET, vendor_len); + vendor[vendor_len] = 0; /* Initialize sysfs attributes */ vendor_attribute = @@ -75,7 +78,7 @@ struct attribute_group *sm_create_sysfs_attributes(struct sm_ftl *ftl) sysfs_attr_init(&vendor_attribute->dev_attr.attr); vendor_attribute->data = vendor; - vendor_attribute->len = strlen(vendor); + vendor_attribute->len = vendor_len; vendor_attribute->dev_attr.attr.name = "vendor"; vendor_attribute->dev_attr.attr.mode = S_IRUGO; vendor_attribute->dev_attr.show = sm_attr_show; diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c index bf8108d65b7..0648c6996d4 100644 --- a/drivers/mtd/ubi/fastmap.c +++ b/drivers/mtd/ubi/fastmap.c @@ -330,7 +330,6 @@ static int process_pool_aeb(struct ubi_device *ubi, struct ubi_attach_info *ai, av = tmp_av; else { ubi_err("orphaned volume in fastmap pool!"); - kmem_cache_free(ai->aeb_slab_cache, new_aeb); return 
UBI_BAD_FASTMAP; } diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 2742d4d6aaf..b984d981499 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -139,7 +139,6 @@ config MACVLAN config MACVTAP tristate "MAC-VLAN based tap driver" depends on MACVLAN - depends on INET help This adds a specialized tap character device driver that is based on the MAC-VLAN network interface, called macvtap. A macvtap device @@ -210,7 +209,6 @@ config RIONET_RX_SIZE config TUN tristate "Universal TUN/TAP device driver support" - depends on INET select CRC32 ---help--- TUN/TAP provides packet reception and transmission for user space diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index b143ce91e08..8395b0992a8 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -4995,7 +4995,6 @@ static int __init bonding_init(void) out: return res; err: - bond_destroy_debugfs(); rtnl_link_unregister(&bond_link_ops); err_link: unregister_pernet_subsys(&bond_net_ops); diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c index 6d388cff845..f63169d6af2 100644 --- a/drivers/net/can/flexcan.c +++ b/drivers/net/can/flexcan.c @@ -862,7 +862,7 @@ static int flexcan_open(struct net_device *dev) /* start chip and queuing */ err = flexcan_chip_start(dev); if (err) - goto out_free_irq; + goto out_close; can_led_event(dev, CAN_LED_EVENT_OPEN); @@ -871,8 +871,6 @@ static int flexcan_open(struct net_device *dev) return 0; - out_free_irq: - free_irq(dev->irq, dev); out_close: close_candev(dev); out: diff --git a/drivers/net/can/sja1000/peak_pci.c b/drivers/net/can/sja1000/peak_pci.c index 7042f5faddd..6b6f0ad7509 100644 --- a/drivers/net/can/sja1000/peak_pci.c +++ b/drivers/net/can/sja1000/peak_pci.c @@ -551,7 +551,7 @@ static int peak_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) { struct sja1000_priv *priv; struct peak_pci_chan *chan; - struct net_device *dev, *prev_dev; + struct net_device *dev; void __iomem *cfg_base, *reg_base; u16 sub_sys_id, icr; int i, err, channels; @@ -687,13 +687,11 @@ failure_remove_channels: writew(0x0, cfg_base + PITA_ICR + 2); chan = NULL; - for (dev = pci_get_drvdata(pdev); dev; dev = prev_dev) { - priv = netdev_priv(dev); - chan = priv->priv; - prev_dev = chan->prev_dev; - + for (dev = pci_get_drvdata(pdev); dev; dev = chan->prev_dev) { unregister_sja1000dev(dev); free_sja1000dev(dev); + priv = netdev_priv(dev); + chan = priv->priv; } /* free any PCIeC resources too */ @@ -727,12 +725,10 @@ static void peak_pci_remove(struct pci_dev *pdev) /* Loop over all registered devices */ while (1) { - struct net_device *prev_dev = chan->prev_dev; - dev_info(&pdev->dev, "removing device %s\n", dev->name); unregister_sja1000dev(dev); free_sja1000dev(dev); - dev = prev_dev; + dev = chan->prev_dev; if (!dev) { /* do that only for first channel */ diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h index ec86177be1d..3dba2a70a00 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h @@ -312,7 +312,6 @@ struct sw_tx_bd { u8 flags; /* Set on the first BD descriptor when there is a split BD */ #define BNX2X_TSO_SPLIT_BD (1<<0) -#define BNX2X_HAS_SECOND_PBD (1<<1) }; struct sw_rx_page { diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c index 372a7557e1f..70be100feeb 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +++ 
b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c @@ -180,12 +180,6 @@ static u16 bnx2x_free_tx_pkt(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata, --nbd; bd_idx = TX_BD(NEXT_TX_IDX(bd_idx)); - if (tx_buf->flags & BNX2X_HAS_SECOND_PBD) { - /* Skip second parse bd... */ - --nbd; - bd_idx = TX_BD(NEXT_TX_IDX(bd_idx)); - } - /* TSO headers+data bds share a common mapping. See bnx2x_tx_split() */ if (tx_buf->flags & BNX2X_TSO_SPLIT_BD) { tx_data_bd = &txdata->tx_desc_ring[bd_idx].reg_bd; @@ -751,8 +745,7 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp, return; } - if (new_data) - bnx2x_frag_free(fp, new_data); + bnx2x_frag_free(fp, new_data); drop: /* drop the packet and keep the buffer in the bin */ DP(NETIF_MSG_RX_STATUS, @@ -3761,9 +3754,6 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev) /* set encapsulation flag in start BD */ SET_FLAG(tx_start_bd->general_data, ETH_TX_START_BD_TUNNEL_EXIST, 1); - - tx_buf->flags |= BNX2X_HAS_SECOND_PBD; - nbd++; } else if (xmit_type & XMIT_CSUM) { /* Set PBD in checksum offload case w/o encapsulation */ diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c index fd50781e996..32a9609cc98 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c @@ -1038,6 +1038,9 @@ static void bnx2x_set_one_vlan_mac_e1h(struct bnx2x *bp, ETH_VLAN_FILTER_CLASSIFY, config); } +#define list_next_entry(pos, member) \ + list_entry((pos)->member.next, typeof(*(pos)), member) + /** * bnx2x_vlan_mac_restore - reconfigure next MAC/VLAN/VLAN-MAC element * diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index 3de4069f020..68e9dc453e1 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -6687,7 +6687,8 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget) work_mask |= opaque_key; - if (desc->err_vlan & RXD_ERR_MASK) { + if ((desc->err_vlan & RXD_ERR_MASK) != 0 && + (desc->err_vlan != RXD_ERR_ODD_NIBBLE_RCVD_MII)) { drop_it: tg3_recycle_rx(tnapi, tpr, opaque_key, desc_idx, *post_ptr); @@ -6767,8 +6768,7 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget) skb->protocol = eth_type_trans(skb, tp->dev); if (len > (tp->dev->mtu + ETH_HLEN) && - skb->protocol != htons(ETH_P_8021Q) && - skb->protocol != htons(ETH_P_8021AD)) { + skb->protocol != htons(ETH_P_8021Q)) { dev_kfree_skb(skb); goto drop_it_no_recycle; } @@ -7760,6 +7760,8 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev) entry = tnapi->tx_prod; base_flags = 0; + if (skb->ip_summed == CHECKSUM_PARTIAL) + base_flags |= TXD_FLAG_TCPUDP_CSUM; mss = skb_shinfo(skb)->gso_size; if (mss) { @@ -7775,13 +7777,6 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev) hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb) - ETH_HLEN; - /* HW/FW can not correctly segment packets that have been - * vlan encapsulated. - */ - if (skb->protocol == htons(ETH_P_8021Q) || - skb->protocol == htons(ETH_P_8021AD)) - return tg3_tso_bug(tp, skb); - if (!skb_is_gso_v6(skb)) { iph->check = 0; iph->tot_len = htons(mss + hdr_len); @@ -7828,17 +7823,6 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev) base_flags |= tsflags << 12; } } - } else if (skb->ip_summed == CHECKSUM_PARTIAL) { - /* HW/FW can not correctly checksum packets that have been - * vlan encapsulated. 
- */ - if (skb->protocol == htons(ETH_P_8021Q) || - skb->protocol == htons(ETH_P_8021AD)) { - if (skb_checksum_help(skb)) - goto drop; - } else { - base_flags |= TXD_FLAG_TCPUDP_CSUM; - } } if (tg3_flag(tp, USE_JUMBO_BDFLAG) && @@ -12090,9 +12074,7 @@ static int tg3_set_ringparam(struct net_device *dev, struct ethtool_ringparam *e if (tg3_flag(tp, MAX_RXPEND_64) && tp->rx_pending > 63) tp->rx_pending = 63; - - if (tg3_flag(tp, JUMBO_RING_ENABLE)) - tp->rx_jumbo_pending = ering->rx_jumbo_pending; + tp->rx_jumbo_pending = ering->rx_jumbo_pending; for (i = 0; i < tp->irq_max; i++) tp->napi[i].tx_pending = ering->tx_pending; @@ -17327,6 +17309,8 @@ static int tg3_init_one(struct pci_dev *pdev, tg3_init_bufmgr_config(tp); + features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX; + /* 5700 B0 chips do not support checksumming correctly due * to hardware bugs. */ @@ -17358,8 +17342,7 @@ static int tg3_init_one(struct pci_dev *pdev, features |= NETIF_F_TSO_ECN; } - dev->features |= features | NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_RX; + dev->features |= features; dev->vlan_features |= features; /* diff --git a/drivers/net/ethernet/broadcom/tg3.h b/drivers/net/ethernet/broadcom/tg3.h index 046059c5671..ff6e30eeae3 100644 --- a/drivers/net/ethernet/broadcom/tg3.h +++ b/drivers/net/ethernet/broadcom/tg3.h @@ -2587,11 +2587,7 @@ struct tg3_rx_buffer_desc { #define RXD_ERR_TOO_SMALL 0x00400000 #define RXD_ERR_NO_RESOURCES 0x00800000 #define RXD_ERR_HUGE_FRAME 0x01000000 - -#define RXD_ERR_MASK (RXD_ERR_BAD_CRC | RXD_ERR_COLLISION | \ - RXD_ERR_LINK_LOST | RXD_ERR_PHY_DECODE | \ - RXD_ERR_MAC_ABRT | RXD_ERR_TOO_SMALL | \ - RXD_ERR_NO_RESOURCES | RXD_ERR_HUGE_FRAME) +#define RXD_ERR_MASK 0xffff0000 u32 reserved; u32 opaque; diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index d81a7dbfeef..7371626c56a 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -2663,7 +2663,7 @@ static int be_open(struct net_device *netdev) for_all_evt_queues(adapter, eqo, i) { napi_enable(&eqo->napi); - be_eq_notify(adapter, eqo->q.id, true, true, 0); + be_eq_notify(adapter, eqo->q.id, true, false, 0); } adapter->flags |= BE_FLAGS_NAPI_ENABLED; diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c index 040ecf2027c..70fd5596884 100644 --- a/drivers/net/ethernet/ibm/ibmveth.c +++ b/drivers/net/ethernet/ibm/ibmveth.c @@ -293,18 +293,6 @@ failure: atomic_add(buffers_added, &(pool->available)); } -/* - * The final 8 bytes of the buffer list is a counter of frames dropped - * because there was not a buffer in the buffer list capable of holding - * the frame. 
- */ -static void ibmveth_update_rx_no_buffer(struct ibmveth_adapter *adapter) -{ - __be64 *p = adapter->buffer_list_addr + 4096 - 8; - - adapter->rx_no_buffer = be64_to_cpup(p); -} - /* replenish routine */ static void ibmveth_replenish_task(struct ibmveth_adapter *adapter) { @@ -320,7 +308,8 @@ static void ibmveth_replenish_task(struct ibmveth_adapter *adapter) ibmveth_replenish_buffer_pool(adapter, pool); } - ibmveth_update_rx_no_buffer(adapter); + adapter->rx_no_buffer = *(u64 *)(((char*)adapter->buffer_list_addr) + + 4096 - 8); } /* empty and free ana buffer pool - also used to do cleanup in error paths */ @@ -700,7 +689,8 @@ static int ibmveth_close(struct net_device *netdev) free_irq(netdev->irq, netdev); - ibmveth_update_rx_no_buffer(adapter); + adapter->rx_no_buffer = *(u64 *)(((char *)adapter->buffer_list_addr) + + 4096 - 8); ibmveth_cleanup(adapter); diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c index 69d3f59f872..d2bea3f07c7 100644 --- a/drivers/net/ethernet/intel/e100.c +++ b/drivers/net/ethernet/intel/e100.c @@ -3039,7 +3039,7 @@ static void __e100_shutdown(struct pci_dev *pdev, bool *enable_wake) *enable_wake = false; } - pci_clear_master(pdev); + pci_disable_device(pdev); } static int __e100_power_off(struct pci_dev *pdev, bool wake) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 4d3c8122e2a..64cbe0dfe04 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -7229,8 +7229,6 @@ static int igb_sriov_reinit(struct pci_dev *dev) if (netif_running(netdev)) igb_close(netdev); - else - igb_reset(adapter); igb_clear_interrupt_scheme(adapter); diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index f8821ce2780..254f255204f 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -99,56 +99,16 @@ #define MVNETA_CPU_RXQ_ACCESS_ALL_MASK 0x000000ff #define MVNETA_CPU_TXQ_ACCESS_ALL_MASK 0x0000ff00 #define MVNETA_RXQ_TIME_COAL_REG(q) (0x2580 + ((q) << 2)) - -/* Exception Interrupt Port/Queue Cause register */ - #define MVNETA_INTR_NEW_CAUSE 0x25a0 -#define MVNETA_INTR_NEW_MASK 0x25a4 - -/* bits 0..7 = TXQ SENT, one bit per queue. - * bits 8..15 = RXQ OCCUP, one bit per queue. - * bits 16..23 = RXQ FREE, one bit per queue. - * bit 29 = OLD_REG_SUM, see old reg ? 
- * bit 30 = TX_ERR_SUM, one bit for 4 ports - * bit 31 = MISC_SUM, one bit for 4 ports - */ -#define MVNETA_TX_INTR_MASK(nr_txqs) (((1 << nr_txqs) - 1) << 0) -#define MVNETA_TX_INTR_MASK_ALL (0xff << 0) #define MVNETA_RX_INTR_MASK(nr_rxqs) (((1 << nr_rxqs) - 1) << 8) -#define MVNETA_RX_INTR_MASK_ALL (0xff << 8) - +#define MVNETA_INTR_NEW_MASK 0x25a4 #define MVNETA_INTR_OLD_CAUSE 0x25a8 #define MVNETA_INTR_OLD_MASK 0x25ac - -/* Data Path Port/Queue Cause Register */ #define MVNETA_INTR_MISC_CAUSE 0x25b0 #define MVNETA_INTR_MISC_MASK 0x25b4 - -#define MVNETA_CAUSE_PHY_STATUS_CHANGE BIT(0) -#define MVNETA_CAUSE_LINK_CHANGE BIT(1) -#define MVNETA_CAUSE_PTP BIT(4) - -#define MVNETA_CAUSE_INTERNAL_ADDR_ERR BIT(7) -#define MVNETA_CAUSE_RX_OVERRUN BIT(8) -#define MVNETA_CAUSE_RX_CRC_ERROR BIT(9) -#define MVNETA_CAUSE_RX_LARGE_PKT BIT(10) -#define MVNETA_CAUSE_TX_UNDERUN BIT(11) -#define MVNETA_CAUSE_PRBS_ERR BIT(12) -#define MVNETA_CAUSE_PSC_SYNC_CHANGE BIT(13) -#define MVNETA_CAUSE_SERDES_SYNC_ERR BIT(14) - -#define MVNETA_CAUSE_BMU_ALLOC_ERR_SHIFT 16 -#define MVNETA_CAUSE_BMU_ALLOC_ERR_ALL_MASK (0xF << MVNETA_CAUSE_BMU_ALLOC_ERR_SHIFT) -#define MVNETA_CAUSE_BMU_ALLOC_ERR_MASK(pool) (1 << (MVNETA_CAUSE_BMU_ALLOC_ERR_SHIFT + (pool))) - -#define MVNETA_CAUSE_TXQ_ERROR_SHIFT 24 -#define MVNETA_CAUSE_TXQ_ERROR_ALL_MASK (0xFF << MVNETA_CAUSE_TXQ_ERROR_SHIFT) -#define MVNETA_CAUSE_TXQ_ERROR_MASK(q) (1 << (MVNETA_CAUSE_TXQ_ERROR_SHIFT + (q))) - #define MVNETA_INTR_ENABLE 0x25b8 #define MVNETA_TXQ_INTR_ENABLE_ALL_MASK 0x0000ff00 -#define MVNETA_RXQ_INTR_ENABLE_ALL_MASK 0xff000000 // note: neta says it's 0x000000FF - +#define MVNETA_RXQ_INTR_ENABLE_ALL_MASK 0xff000000 #define MVNETA_RXQ_CMD 0x2680 #define MVNETA_RXQ_DISABLE_SHIFT 8 #define MVNETA_RXQ_ENABLE_MASK 0x000000ff @@ -159,7 +119,7 @@ #define MVNETA_GMAC_MAX_RX_SIZE_MASK 0x7ffc #define MVNETA_GMAC0_PORT_ENABLE BIT(0) #define MVNETA_GMAC_CTRL_2 0x2c08 -#define MVNETA_GMAC2_PCS_ENABLE BIT(3) +#define MVNETA_GMAC2_PSC_ENABLE BIT(3) #define MVNETA_GMAC2_PORT_RGMII BIT(4) #define MVNETA_GMAC2_PORT_RESET BIT(6) #define MVNETA_GMAC_STATUS 0x2c10 @@ -214,6 +174,9 @@ #define MVNETA_RX_COAL_PKTS 32 #define MVNETA_RX_COAL_USEC 100 +/* Timer */ +#define MVNETA_TX_DONE_TIMER_PERIOD 10 + /* Napi polling weight */ #define MVNETA_RX_POLL_WEIGHT 64 @@ -256,12 +219,10 @@ #define MVNETA_RX_BUF_SIZE(pkt_size) ((pkt_size) + NET_SKB_PAD) -struct mvneta_pcpu_stats { +struct mvneta_stats { struct u64_stats_sync syncp; - u64 rx_packets; - u64 rx_bytes; - u64 tx_packets; - u64 tx_bytes; + u64 packets; + u64 bytes; }; struct mvneta_port { @@ -269,11 +230,16 @@ struct mvneta_port { void __iomem *base; struct mvneta_rx_queue *rxqs; struct mvneta_tx_queue *txqs; + struct timer_list tx_done_timer; struct net_device *dev; u32 cause_rx_tx; struct napi_struct napi; + /* Flags */ + unsigned long flags; +#define MVNETA_F_TX_DONE_TIMER_BIT 0 + /* Napi weight */ int weight; @@ -282,7 +248,8 @@ struct mvneta_port { u8 mcast_count[256]; u16 tx_ring_size; u16 rx_ring_size; - struct mvneta_pcpu_stats *stats; + struct mvneta_stats tx_stats; + struct mvneta_stats rx_stats; struct mii_bus *mii_bus; struct phy_device *phy_dev; @@ -461,29 +428,21 @@ struct rtnl_link_stats64 *mvneta_get_stats64(struct net_device *dev, { struct mvneta_port *pp = netdev_priv(dev); unsigned int start; - int cpu; - for_each_possible_cpu(cpu) { - struct mvneta_pcpu_stats *cpu_stats; - u64 rx_packets; - u64 rx_bytes; - u64 tx_packets; - u64 tx_bytes; + memset(stats, 0, sizeof(struct rtnl_link_stats64)); - cpu_stats 
= per_cpu_ptr(pp->stats, cpu); - do { - start = u64_stats_fetch_begin_bh(&cpu_stats->syncp); - rx_packets = cpu_stats->rx_packets; - rx_bytes = cpu_stats->rx_bytes; - tx_packets = cpu_stats->tx_packets; - tx_bytes = cpu_stats->tx_bytes; - } while (u64_stats_fetch_retry_bh(&cpu_stats->syncp, start)); + do { + start = u64_stats_fetch_begin_bh(&pp->rx_stats.syncp); + stats->rx_packets = pp->rx_stats.packets; + stats->rx_bytes = pp->rx_stats.bytes; + } while (u64_stats_fetch_retry_bh(&pp->rx_stats.syncp, start)); - stats->rx_packets += rx_packets; - stats->rx_bytes += rx_bytes; - stats->tx_packets += tx_packets; - stats->tx_bytes += tx_bytes; - } + + do { + start = u64_stats_fetch_begin_bh(&pp->tx_stats.syncp); + stats->tx_packets = pp->tx_stats.packets; + stats->tx_bytes = pp->tx_stats.bytes; + } while (u64_stats_fetch_retry_bh(&pp->tx_stats.syncp, start)); stats->rx_errors = dev->stats.rx_errors; stats->rx_dropped = dev->stats.rx_dropped; @@ -696,7 +655,7 @@ static void mvneta_port_sgmii_config(struct mvneta_port *pp) u32 val; val = mvreg_read(pp, MVNETA_GMAC_CTRL_2); - val |= MVNETA_GMAC2_PCS_ENABLE; + val |= MVNETA_GMAC2_PSC_ENABLE; mvreg_write(pp, MVNETA_GMAC_CTRL_2, val); } @@ -1104,6 +1063,17 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp, txq->done_pkts_coal = value; } +/* Trigger tx done timer in MVNETA_TX_DONE_TIMER_PERIOD msecs */ +static void mvneta_add_tx_done_timer(struct mvneta_port *pp) +{ + if (test_and_set_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags) == 0) { + pp->tx_done_timer.expires = jiffies + + msecs_to_jiffies(MVNETA_TX_DONE_TIMER_PERIOD); + add_timer(&pp->tx_done_timer); + } +} + + /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */ static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc, u32 phys_addr, u32 cookie) @@ -1175,7 +1145,7 @@ static u32 mvneta_txq_desc_csum(int l3_offs, int l3_proto, command = l3_offs << MVNETA_TX_L3_OFF_SHIFT; command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT; - if (l3_proto == htons(ETH_P_IP)) + if (l3_proto == swab16(ETH_P_IP)) command |= MVNETA_TXD_IP_CSUM; else command |= MVNETA_TX_L3_IP6; @@ -1384,8 +1354,6 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo, { struct net_device *dev = pp->dev; int rx_done, rx_filled; - u32 rcvd_pkts = 0; - u32 rcvd_bytes = 0; /* Get number of received packets */ rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq); @@ -1423,8 +1391,10 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo, rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE); - rcvd_pkts++; - rcvd_bytes += rx_bytes; + u64_stats_update_begin(&pp->rx_stats.syncp); + pp->rx_stats.packets++; + pp->rx_stats.bytes += rx_bytes; + u64_stats_update_end(&pp->rx_stats.syncp); /* Linux processing */ skb_reserve(skb, MVNETA_MH_SIZE); @@ -1445,15 +1415,6 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo, } } - if (rcvd_pkts) { - struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats); - - u64_stats_update_begin(&stats->syncp); - stats->rx_packets += rcvd_pkts; - stats->rx_bytes += rcvd_bytes; - u64_stats_update_end(&stats->syncp); - } - /* Update rxq management counters */ mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_filled); @@ -1584,17 +1545,25 @@ static int mvneta_tx(struct sk_buff *skb, struct net_device *dev) out: if (frags > 0) { - struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats); + u64_stats_update_begin(&pp->tx_stats.syncp); + pp->tx_stats.packets++; + pp->tx_stats.bytes += skb->len; + u64_stats_update_end(&pp->tx_stats.syncp); - 
u64_stats_update_begin(&stats->syncp); - stats->tx_packets++; - stats->tx_bytes += skb->len; - u64_stats_update_end(&stats->syncp); } else { dev->stats.tx_dropped++; dev_kfree_skb_any(skb); } + if (txq->count >= MVNETA_TXDONE_COAL_PKTS) + mvneta_txq_done(pp, txq); + + /* If after calling mvneta_txq_done, count equals + * frags, we need to set the timer + */ + if (txq->count == frags && frags > 0) + mvneta_add_tx_done_timer(pp); + return NETDEV_TX_OK; } @@ -1870,22 +1839,14 @@ static int mvneta_poll(struct napi_struct *napi, int budget) /* Read cause register */ cause_rx_tx = mvreg_read(pp, MVNETA_INTR_NEW_CAUSE) & - (MVNETA_RX_INTR_MASK(rxq_number) | MVNETA_TX_INTR_MASK(txq_number)); - - /* Release Tx descriptors */ - if (cause_rx_tx & MVNETA_TX_INTR_MASK_ALL) { - int tx_todo = 0; - - mvneta_tx_done_gbe(pp, (cause_rx_tx & MVNETA_TX_INTR_MASK_ALL), &tx_todo); - cause_rx_tx &= ~MVNETA_TX_INTR_MASK_ALL; - } + MVNETA_RX_INTR_MASK(rxq_number); /* For the case where the last mvneta_poll did not process all * RX packets */ cause_rx_tx |= pp->cause_rx_tx; if (rxq_number > 1) { - while ((cause_rx_tx & MVNETA_RX_INTR_MASK_ALL) && (budget > 0)) { + while ((cause_rx_tx != 0) && (budget > 0)) { int count; struct mvneta_rx_queue *rxq; /* get rx queue number from cause_rx_tx */ @@ -1917,7 +1878,7 @@ static int mvneta_poll(struct napi_struct *napi, int budget) napi_complete(napi); local_irq_save(flags); mvreg_write(pp, MVNETA_INTR_NEW_MASK, - MVNETA_RX_INTR_MASK(rxq_number) | MVNETA_TX_INTR_MASK(txq_number)); + MVNETA_RX_INTR_MASK(rxq_number)); local_irq_restore(flags); } @@ -1925,6 +1886,26 @@ static int mvneta_poll(struct napi_struct *napi, int budget) return rx_done; } +/* tx done timer callback */ +static void mvneta_tx_done_timer_callback(unsigned long data) +{ + struct net_device *dev = (struct net_device *)data; + struct mvneta_port *pp = netdev_priv(dev); + int tx_done = 0, tx_todo = 0; + + if (!netif_running(dev)) + return ; + + clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); + + tx_done = mvneta_tx_done_gbe(pp, + (((1 << txq_number) - 1) & + MVNETA_CAUSE_TXQ_SENT_DESC_ALL_MASK), + &tx_todo); + if (tx_todo > 0) + mvneta_add_tx_done_timer(pp); +} + /* Handle rxq fill: allocates rxq skbs; called when initializing a port */ static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq, int num) @@ -2174,7 +2155,7 @@ static void mvneta_start_dev(struct mvneta_port *pp) /* Unmask interrupts */ mvreg_write(pp, MVNETA_INTR_NEW_MASK, - MVNETA_RX_INTR_MASK(rxq_number) | MVNETA_TX_INTR_MASK(txq_number)); + MVNETA_RX_INTR_MASK(rxq_number)); phy_start(pp->phy_dev); netif_tx_start_all_queues(pp->dev); @@ -2207,6 +2188,16 @@ static void mvneta_stop_dev(struct mvneta_port *pp) mvneta_rx_reset(pp); } +/* tx timeout callback - display a message and stop/start the network device */ +static void mvneta_tx_timeout(struct net_device *dev) +{ + struct mvneta_port *pp = netdev_priv(dev); + + netdev_info(dev, "tx timeout\n"); + mvneta_stop_dev(pp); + mvneta_start_dev(pp); +} + /* Return positive if MTU is valid */ static int mvneta_check_mtu_valid(struct net_device *dev, int mtu) { @@ -2315,7 +2306,7 @@ static void mvneta_adjust_link(struct net_device *ndev) if (phydev->speed == SPEED_1000) val |= MVNETA_GMAC_CONFIG_GMII_SPEED; - else if (phydev->speed == SPEED_100) + else val |= MVNETA_GMAC_CONFIG_MII_SPEED; mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val); @@ -2435,6 +2426,8 @@ static int mvneta_stop(struct net_device *dev) free_irq(dev->irq, pp); mvneta_cleanup_rxqs(pp); mvneta_cleanup_txqs(pp); + 
del_timer(&pp->tx_done_timer); + clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); return 0; } @@ -2555,6 +2548,7 @@ static const struct net_device_ops mvneta_netdev_ops = { .ndo_set_rx_mode = mvneta_set_rx_mode, .ndo_set_mac_address = mvneta_set_mac_addr, .ndo_change_mtu = mvneta_change_mtu, + .ndo_tx_timeout = mvneta_tx_timeout, .ndo_get_stats64 = mvneta_get_stats64, }; @@ -2735,6 +2729,10 @@ static int mvneta_probe(struct platform_device *pdev) pp = netdev_priv(dev); + pp->tx_done_timer.function = mvneta_tx_done_timer_callback; + init_timer(&pp->tx_done_timer); + clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); + pp->weight = MVNETA_RX_POLL_WEIGHT; pp->phy_node = phy_node; pp->phy_interface = phy_mode; @@ -2753,12 +2751,7 @@ static int mvneta_probe(struct platform_device *pdev) clk_prepare_enable(pp->clk); - /* Alloc per-cpu stats */ - pp->stats = alloc_percpu(struct mvneta_pcpu_stats); - if (!pp->stats) { - err = -ENOMEM; - goto err_clk; - } + pp->tx_done_timer.data = (unsigned long)dev; pp->tx_ring_size = MVNETA_MAX_TXD; pp->rx_ring_size = MVNETA_MAX_RXD; @@ -2769,7 +2762,7 @@ static int mvneta_probe(struct platform_device *pdev) err = mvneta_init(pp, phy_addr); if (err < 0) { dev_err(&pdev->dev, "can't init eth hal\n"); - goto err_free_stats; + goto err_clk; } mvneta_port_power_up(pp, phy_mode); @@ -2798,8 +2791,6 @@ static int mvneta_probe(struct platform_device *pdev) err_deinit: mvneta_deinit(pp); -err_free_stats: - free_percpu(pp->stats); err_clk: clk_disable_unprepare(pp->clk); err_unmap: @@ -2820,7 +2811,6 @@ static int mvneta_remove(struct platform_device *pdev) unregister_netdev(dev); mvneta_deinit(pp); clk_disable_unprepare(pp->clk); - free_percpu(pp->stats); iounmap(pp->base); irq_dispose_mapping(dev->irq); free_netdev(dev); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c index 58c18d3a488..1e6c594d6d0 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c @@ -55,6 +55,7 @@ int mlx4_en_create_cq(struct mlx4_en_priv *priv, cq->ring = ring; cq->is_tx = mode; + spin_lock_init(&cq->lock); err = mlx4_alloc_hwq_res(mdev->dev, &cq->wqres, cq->buf_size, 2 * PAGE_SIZE); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index 063f3f4d486..89c47ea84b5 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -1190,11 +1190,15 @@ static void mlx4_en_netpoll(struct net_device *dev) { struct mlx4_en_priv *priv = netdev_priv(dev); struct mlx4_en_cq *cq; + unsigned long flags; int i; for (i = 0; i < priv->rx_ring_num; i++) { cq = &priv->rx_cq[i]; - napi_schedule(&cq->napi); + spin_lock_irqsave(&cq->lock, flags); + napi_synchronize(&cq->napi); + mlx4_en_process_rx_cq(dev, cq, 0); + spin_unlock_irqrestore(&cq->lock, flags); } } #endif diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c index 3fb2643d05b..1b195fc7f41 100644 --- a/drivers/net/ethernet/mellanox/mlx4/main.c +++ b/drivers/net/ethernet/mellanox/mlx4/main.c @@ -2129,8 +2129,13 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data) /* Allow large DMA segments, up to the firmware limit of 1 GB */ dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024); - dev = pci_get_drvdata(pdev); - priv = mlx4_priv(dev); + priv = kzalloc(sizeof(*priv), GFP_KERNEL); + if (!priv) { + err = -ENOMEM; + goto err_release_regions; + } + + dev = &priv->dev; dev->pdev = pdev; 
INIT_LIST_HEAD(&priv->ctx_list); spin_lock_init(&priv->ctx_lock); @@ -2295,7 +2300,8 @@ slave_start: mlx4_sense_init(dev); mlx4_start_sense(dev); - priv->removed = 0; + priv->pci_dev_data = pci_dev_data; + pci_set_drvdata(pdev, dev); return 0; @@ -2361,110 +2367,84 @@ err_disable_pdev: static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id) { - struct mlx4_priv *priv; - struct mlx4_dev *dev; - printk_once(KERN_INFO "%s", mlx4_version); - priv = kzalloc(sizeof(*priv), GFP_KERNEL); - if (!priv) - return -ENOMEM; - - dev = &priv->dev; - pci_set_drvdata(pdev, dev); - priv->pci_dev_data = id->driver_data; - return __mlx4_init_one(pdev, id->driver_data); } -static void __mlx4_remove_one(struct pci_dev *pdev) +static void mlx4_remove_one(struct pci_dev *pdev) { struct mlx4_dev *dev = pci_get_drvdata(pdev); struct mlx4_priv *priv = mlx4_priv(dev); - int pci_dev_data; int p; - if (priv->removed) - return; - - pci_dev_data = priv->pci_dev_data; - - /* in SRIOV it is not allowed to unload the pf's - * driver while there are alive vf's */ - if (mlx4_is_master(dev)) { - if (mlx4_how_many_lives_vf(dev)) - printk(KERN_ERR "Removing PF when there are assigned VF's !!!\n"); - } - mlx4_stop_sense(dev); - mlx4_unregister_device(dev); + if (dev) { + /* in SRIOV it is not allowed to unload the pf's + * driver while there are alive vf's */ + if (mlx4_is_master(dev)) { + if (mlx4_how_many_lives_vf(dev)) + printk(KERN_ERR "Removing PF when there are assigned VF's !!!\n"); + } + mlx4_stop_sense(dev); + mlx4_unregister_device(dev); - for (p = 1; p <= dev->caps.num_ports; p++) { - mlx4_cleanup_port_info(&priv->port[p]); - mlx4_CLOSE_PORT(dev, p); - } + for (p = 1; p <= dev->caps.num_ports; p++) { + mlx4_cleanup_port_info(&priv->port[p]); + mlx4_CLOSE_PORT(dev, p); + } - if (mlx4_is_master(dev)) - mlx4_free_resource_tracker(dev, - RES_TR_FREE_SLAVES_ONLY); + if (mlx4_is_master(dev)) + mlx4_free_resource_tracker(dev, + RES_TR_FREE_SLAVES_ONLY); + + mlx4_cleanup_counters_table(dev); + mlx4_cleanup_mcg_table(dev); + mlx4_cleanup_qp_table(dev); + mlx4_cleanup_srq_table(dev); + mlx4_cleanup_cq_table(dev); + mlx4_cmd_use_polling(dev); + mlx4_cleanup_eq_table(dev); + mlx4_cleanup_mr_table(dev); + mlx4_cleanup_xrcd_table(dev); + mlx4_cleanup_pd_table(dev); - mlx4_cleanup_counters_table(dev); - mlx4_cleanup_qp_table(dev); - mlx4_cleanup_srq_table(dev); - mlx4_cleanup_cq_table(dev); - mlx4_cmd_use_polling(dev); - mlx4_cleanup_eq_table(dev); - mlx4_cleanup_mcg_table(dev); - mlx4_cleanup_mr_table(dev); - mlx4_cleanup_xrcd_table(dev); - mlx4_cleanup_pd_table(dev); + if (mlx4_is_master(dev)) + mlx4_free_resource_tracker(dev, + RES_TR_FREE_STRUCTS_ONLY); + + iounmap(priv->kar); + mlx4_uar_free(dev, &priv->driver_uar); + mlx4_cleanup_uar_table(dev); + if (!mlx4_is_slave(dev)) + mlx4_clear_steering(dev); + mlx4_free_eq_table(dev); + if (mlx4_is_master(dev)) + mlx4_multi_func_cleanup(dev); + mlx4_close_hca(dev); + if (mlx4_is_slave(dev)) + mlx4_multi_func_cleanup(dev); + mlx4_cmd_cleanup(dev); + + if (dev->flags & MLX4_FLAG_MSI_X) + pci_disable_msix(pdev); + if (dev->flags & MLX4_FLAG_SRIOV) { + mlx4_warn(dev, "Disabling SR-IOV\n"); + pci_disable_sriov(pdev); + } - if (mlx4_is_master(dev)) - mlx4_free_resource_tracker(dev, - RES_TR_FREE_STRUCTS_ONLY); + if (!mlx4_is_slave(dev)) + mlx4_free_ownership(dev); - iounmap(priv->kar); - mlx4_uar_free(dev, &priv->driver_uar); - mlx4_cleanup_uar_table(dev); - if (!mlx4_is_slave(dev)) - mlx4_clear_steering(dev); - mlx4_free_eq_table(dev); - if (mlx4_is_master(dev)) - 
mlx4_multi_func_cleanup(dev); - mlx4_close_hca(dev); - if (mlx4_is_slave(dev)) - mlx4_multi_func_cleanup(dev); - mlx4_cmd_cleanup(dev); + kfree(dev->caps.qp0_tunnel); + kfree(dev->caps.qp0_proxy); + kfree(dev->caps.qp1_tunnel); + kfree(dev->caps.qp1_proxy); - if (dev->flags & MLX4_FLAG_MSI_X) - pci_disable_msix(pdev); - if (dev->flags & MLX4_FLAG_SRIOV) { - mlx4_warn(dev, "Disabling SR-IOV\n"); - pci_disable_sriov(pdev); + kfree(priv); + pci_release_regions(pdev); + pci_disable_device(pdev); + pci_set_drvdata(pdev, NULL); } - - if (!mlx4_is_slave(dev)) - mlx4_free_ownership(dev); - - kfree(dev->caps.qp0_tunnel); - kfree(dev->caps.qp0_proxy); - kfree(dev->caps.qp1_tunnel); - kfree(dev->caps.qp1_proxy); - - pci_release_regions(pdev); - pci_disable_device(pdev); - memset(priv, 0, sizeof(*priv)); - priv->pci_dev_data = pci_dev_data; - priv->removed = 1; -} - -static void mlx4_remove_one(struct pci_dev *pdev) -{ - struct mlx4_dev *dev = pci_get_drvdata(pdev); - struct mlx4_priv *priv = mlx4_priv(dev); - - __mlx4_remove_one(pdev); - kfree(priv); - pci_set_drvdata(pdev, NULL); } int mlx4_restart_one(struct pci_dev *pdev) @@ -2474,7 +2454,7 @@ int mlx4_restart_one(struct pci_dev *pdev) int pci_dev_data; pci_dev_data = priv->pci_dev_data; - __mlx4_remove_one(pdev); + mlx4_remove_one(pdev); return __mlx4_init_one(pdev, pci_dev_data); } @@ -2529,7 +2509,7 @@ MODULE_DEVICE_TABLE(pci, mlx4_pci_table); static pci_ers_result_t mlx4_pci_err_detected(struct pci_dev *pdev, pci_channel_state_t state) { - __mlx4_remove_one(pdev); + mlx4_remove_one(pdev); return state == pci_channel_io_perm_failure ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET; @@ -2537,11 +2517,7 @@ static pci_ers_result_t mlx4_pci_err_detected(struct pci_dev *pdev, static pci_ers_result_t mlx4_pci_slot_reset(struct pci_dev *pdev) { - struct mlx4_dev *dev = pci_get_drvdata(pdev); - struct mlx4_priv *priv = mlx4_priv(dev); - int ret; - - ret = __mlx4_init_one(pdev, priv->pci_dev_data); + int ret = __mlx4_init_one(pdev, 0); return ret ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; } diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h index da4f0002fd2..df15bb6631c 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h @@ -743,7 +743,6 @@ struct mlx4_priv { spinlock_t ctx_lock; int pci_dev_data; - int removed; struct list_head pgdir_list; struct mutex pgdir_mutex; diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h index 117315da57c..12a7b2bec65 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h @@ -298,6 +298,7 @@ struct mlx4_en_cq { struct mlx4_cq mcq; struct mlx4_hwq_resources wqres; int ring; + spinlock_t lock; struct net_device *dev; struct napi_struct napi; int size; diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c index 4fb93c5b556..7be9788ed0f 100644 --- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c +++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c @@ -856,10 +856,6 @@ static int myri10ge_dma_test(struct myri10ge_priv *mgp, int test_type) return -ENOMEM; dmatest_bus = pci_map_page(mgp->pdev, dmatest_page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL); - if (unlikely(pci_dma_mapping_error(mgp->pdev, dmatest_bus))) { - __free_page(dmatest_page); - return -ENOMEM; - } /* Run a small DMA test. 
* The magic multipliers to the length tell the firmware @@ -1195,7 +1191,6 @@ myri10ge_alloc_rx_pages(struct myri10ge_priv *mgp, struct myri10ge_rx_buf *rx, int bytes, int watchdog) { struct page *page; - dma_addr_t bus; int idx; #if MYRI10GE_ALLOC_SIZE > 4096 int end_offset; @@ -1220,21 +1215,11 @@ myri10ge_alloc_rx_pages(struct myri10ge_priv *mgp, struct myri10ge_rx_buf *rx, rx->watchdog_needed = 1; return; } - - bus = pci_map_page(mgp->pdev, page, 0, - MYRI10GE_ALLOC_SIZE, - PCI_DMA_FROMDEVICE); - if (unlikely(pci_dma_mapping_error(mgp->pdev, bus))) { - __free_pages(page, MYRI10GE_ALLOC_ORDER); - if (rx->fill_cnt - rx->cnt < 16) - rx->watchdog_needed = 1; - return; - } - rx->page = page; rx->page_offset = 0; - rx->bus = bus; - + rx->bus = pci_map_page(mgp->pdev, page, 0, + MYRI10GE_ALLOC_SIZE, + PCI_DMA_FROMDEVICE); } rx->info[idx].page = rx->page; rx->info[idx].page_offset = rx->page_offset; @@ -2591,35 +2576,6 @@ myri10ge_submit_req(struct myri10ge_tx_buf *tx, struct mcp_kreq_ether_send *src, mb(); } -static void myri10ge_unmap_tx_dma(struct myri10ge_priv *mgp, - struct myri10ge_tx_buf *tx, int idx) -{ - unsigned int len; - int last_idx; - - /* Free any DMA resources we've alloced and clear out the skb slot */ - last_idx = (idx + 1) & tx->mask; - idx = tx->req & tx->mask; - do { - len = dma_unmap_len(&tx->info[idx], len); - if (len) { - if (tx->info[idx].skb != NULL) - pci_unmap_single(mgp->pdev, - dma_unmap_addr(&tx->info[idx], - bus), len, - PCI_DMA_TODEVICE); - else - pci_unmap_page(mgp->pdev, - dma_unmap_addr(&tx->info[idx], - bus), len, - PCI_DMA_TODEVICE); - dma_unmap_len_set(&tx->info[idx], len, 0); - tx->info[idx].skb = NULL; - } - idx = (idx + 1) & tx->mask; - } while (idx != last_idx); -} - /* * Transmit a packet. We need to split the packet so that a single * segment does not cross myri10ge->tx_boundary, so this makes segment @@ -2643,7 +2599,7 @@ static netdev_tx_t myri10ge_xmit(struct sk_buff *skb, u32 low; __be32 high_swapped; unsigned int len; - int idx, avail, frag_cnt, frag_idx, count, mss, max_segments; + int idx, last_idx, avail, frag_cnt, frag_idx, count, mss, max_segments; u16 pseudo_hdr_offset, cksum_offset, queue; int cum_len, seglen, boundary, rdma_count; u8 flags, odd_flag; @@ -2740,12 +2696,9 @@ again: /* map the skb for DMA */ len = skb_headlen(skb); - bus = pci_map_single(mgp->pdev, skb->data, len, PCI_DMA_TODEVICE); - if (unlikely(pci_dma_mapping_error(mgp->pdev, bus))) - goto drop; - idx = tx->req & tx->mask; tx->info[idx].skb = skb; + bus = pci_map_single(mgp->pdev, skb->data, len, PCI_DMA_TODEVICE); dma_unmap_addr_set(&tx->info[idx], bus, bus); dma_unmap_len_set(&tx->info[idx], len, len); @@ -2844,16 +2797,12 @@ again: break; /* map next fragment for DMA */ + idx = (count + tx->req) & tx->mask; frag = &skb_shinfo(skb)->frags[frag_idx]; frag_idx++; len = skb_frag_size(frag); bus = skb_frag_dma_map(&mgp->pdev->dev, frag, 0, len, DMA_TO_DEVICE); - if (unlikely(pci_dma_mapping_error(mgp->pdev, bus))) { - myri10ge_unmap_tx_dma(mgp, tx, idx); - goto drop; - } - idx = (count + tx->req) & tx->mask; dma_unmap_addr_set(&tx->info[idx], bus, bus); dma_unmap_len_set(&tx->info[idx], len, len); } @@ -2884,8 +2833,31 @@ again: return NETDEV_TX_OK; abort_linearize: - myri10ge_unmap_tx_dma(mgp, tx, idx); + /* Free any DMA resources we've alloced and clear out the skb + * slot so as to not trip up assertions, and to avoid a + * double-free if linearizing fails */ + last_idx = (idx + 1) & tx->mask; + idx = tx->req & tx->mask; + tx->info[idx].skb = NULL; + do { + len = 
dma_unmap_len(&tx->info[idx], len); + if (len) { + if (tx->info[idx].skb != NULL) + pci_unmap_single(mgp->pdev, + dma_unmap_addr(&tx->info[idx], + bus), len, + PCI_DMA_TODEVICE); + else + pci_unmap_page(mgp->pdev, + dma_unmap_addr(&tx->info[idx], + bus), len, + PCI_DMA_TODEVICE); + dma_unmap_len_set(&tx->info[idx], len, 0); + tx->info[idx].skb = NULL; + } + idx = (idx + 1) & tx->mask; + } while (idx != last_idx); if (skb_is_gso(skb)) { netdev_err(mgp->dev, "TSO but wanted to linearize?!?!?\n"); goto drop; diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c index 540ad16d780..9a95abf2ded 100644 --- a/drivers/net/ethernet/sfc/ptp.c +++ b/drivers/net/ethernet/sfc/ptp.c @@ -1319,13 +1319,6 @@ void efx_ptp_event(struct efx_nic *efx, efx_qword_t *ev) struct efx_ptp_data *ptp = efx->ptp_data; int code = EFX_QWORD_FIELD(*ev, MCDI_EVENT_CODE); - if (!ptp) { - if (net_ratelimit()) - netif_warn(efx, drv, efx->net_dev, - "Received PTP event but PTP not set up\n"); - return; - } - if (!ptp->enabled) return; diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c index 398faff8be7..3df56840a3b 100644 --- a/drivers/net/ethernet/sun/sunvnet.c +++ b/drivers/net/ethernet/sun/sunvnet.c @@ -1083,24 +1083,6 @@ static struct vnet *vnet_find_or_create(const u64 *local_mac) return vp; } -static void vnet_cleanup(void) -{ - struct vnet *vp; - struct net_device *dev; - - mutex_lock(&vnet_list_mutex); - while (!list_empty(&vnet_list)) { - vp = list_first_entry(&vnet_list, struct vnet, list); - list_del(&vp->list); - dev = vp->dev; - /* vio_unregister_driver() should have cleaned up port_list */ - BUG_ON(!list_empty(&vp->port_list)); - unregister_netdev(dev); - free_netdev(dev); - } - mutex_unlock(&vnet_list_mutex); -} - static const char *local_mac_prop = "local-mac-address"; static struct vnet *vnet_find_parent(struct mdesc_handle *hp, @@ -1258,6 +1240,7 @@ static int vnet_port_remove(struct vio_dev *vdev) kfree(port); + unregister_netdev(vp->dev); } return 0; } @@ -1285,7 +1268,6 @@ static int __init vnet_init(void) static void __exit vnet_exit(void) { vio_unregister_driver(&vnet_port_driver); - vnet_cleanup(); } module_init(vnet_init); diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index b1ab3a4956a..d1a769f35f9 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -1547,10 +1547,6 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data, mdio_node = of_find_node_by_phandle(be32_to_cpup(parp)); phyid = be32_to_cpup(parp+1); mdio = of_find_device_by_node(mdio_node); - if (!mdio) { - pr_err("Missing mdio platform device\n"); - return -EINVAL; - } snprintf(slave_data->phy_id, sizeof(slave_data->phy_id), PHY_ID_FMT, mdio->name, phyid); diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index 59e9c56e5b8..aea78fc2e48 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -138,7 +138,6 @@ static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net) struct hv_netvsc_packet *packet; int ret; unsigned int i, num_pages, npg_data; - u32 skb_length = skb->len; /* Add multipages for skb->data and additional 2 for RNDIS */ npg_data = (((unsigned long)skb->data + skb_headlen(skb) - 1) @@ -209,7 +208,7 @@ static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net) ret = rndis_filter_send(net_device_ctx->device_ctx, packet); if (ret == 0) { - net->stats.tx_bytes += skb_length; + net->stats.tx_bytes += skb->len; 
net->stats.tx_packets++; } else { kfree(packet); diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c index 9be91cb4f4a..06eba6e480c 100644 --- a/drivers/net/macvlan.c +++ b/drivers/net/macvlan.c @@ -261,9 +261,11 @@ static int macvlan_queue_xmit(struct sk_buff *skb, struct net_device *dev) const struct macvlan_dev *vlan = netdev_priv(dev); const struct macvlan_port *port = vlan->port; const struct macvlan_dev *dest; + __u8 ip_summed = skb->ip_summed; if (vlan->mode == MACVLAN_MODE_BRIDGE) { const struct ethhdr *eth = (void *)skb->data; + skb->ip_summed = CHECKSUM_UNNECESSARY; /* send to other bridge ports directly */ if (is_multicast_ether_addr(eth->h_dest)) { @@ -281,6 +283,7 @@ static int macvlan_queue_xmit(struct sk_buff *skb, struct net_device *dev) } xmit_world: + skb->ip_summed = ip_summed; skb->dev = vlan->lowerdev; return dev_queue_xmit(skb); } @@ -420,10 +423,8 @@ static void macvlan_change_rx_flags(struct net_device *dev, int change) struct macvlan_dev *vlan = netdev_priv(dev); struct net_device *lowerdev = vlan->lowerdev; - if (dev->flags & IFF_UP) { - if (change & IFF_ALLMULTI) - dev_set_allmulti(lowerdev, dev->flags & IFF_ALLMULTI ? 1 : -1); - } + if (change & IFF_ALLMULTI) + dev_set_allmulti(lowerdev, dev->flags & IFF_ALLMULTI ? 1 : -1); } static void macvlan_set_mac_lists(struct net_device *dev) @@ -500,7 +501,6 @@ static int macvlan_init(struct net_device *dev) (lowerdev->state & MACVLAN_STATE_MASK); dev->features = lowerdev->features & MACVLAN_FEATURES; dev->features |= NETIF_F_LLTX; - dev->vlan_features = lowerdev->vlan_features & MACVLAN_FEATURES; dev->gso_max_size = lowerdev->gso_max_size; dev->iflink = lowerdev->ifindex; dev->hard_header_len = lowerdev->hard_header_len; @@ -962,6 +962,7 @@ static int macvlan_device_event(struct notifier_block *unused, list_for_each_entry_safe(vlan, next, &port->vlans, list) vlan->dev->rtnl_link_ops->dellink(vlan->dev, &list_kill); unregister_netdevice_many(&list_kill); + list_del(&list_kill); break; case NETDEV_PRE_TYPE_CHANGE: /* Forbid underlaying device to change its type. 
*/ diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c index 5a1897d86e9..72ff14b811c 100644 --- a/drivers/net/ppp/ppp_generic.c +++ b/drivers/net/ppp/ppp_generic.c @@ -601,7 +601,7 @@ static long ppp_ioctl(struct file *file, unsigned int cmd, unsigned long arg) if (file == ppp->owner) ppp_shutdown_interface(ppp); } - if (atomic_long_read(&file->f_count) < 2) { + if (atomic_long_read(&file->f_count) <= 2) { ppp_release(NULL, file); err = 0; } else diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c index becfa3ef7fd..6839fb07a4c 100644 --- a/drivers/net/ppp/pppoe.c +++ b/drivers/net/ppp/pppoe.c @@ -675,7 +675,7 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr, po->chan.hdrlen = (sizeof(struct pppoe_hdr) + dev->hard_header_len); - po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr) - 2; + po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr); po->chan.private = sk; po->chan.ops = &pppoe_chan_ops; diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c index 8161c3f066a..7f10588fe66 100644 --- a/drivers/net/ppp/pptp.c +++ b/drivers/net/ppp/pptp.c @@ -281,7 +281,7 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb) nf_reset(skb); skb->ip_summed = CHECKSUM_NONE; - ip_select_ident(skb, NULL); + ip_select_ident(skb, &rt->dst, NULL); ip_send_check(iph); ip_local_out(skb); diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c index 12222290c80..fe3fd77821b 100644 --- a/drivers/net/team/team.c +++ b/drivers/net/team/team.c @@ -1542,7 +1542,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu) * to traverse list in reverse under rcu_read_lock */ mutex_lock(&team->lock); - team->port_mtu_change_allowed = true; list_for_each_entry(port, &team->port_list, list) { err = dev_set_mtu(port->dev, new_mtu); if (err) { @@ -1551,7 +1550,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu) goto unwind; } } - team->port_mtu_change_allowed = false; mutex_unlock(&team->lock); dev->mtu = new_mtu; @@ -1561,7 +1559,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu) unwind: list_for_each_entry_continue_reverse(port, &team->port_list, list) dev_set_mtu(port->dev, dev->mtu); - team->port_mtu_change_allowed = false; mutex_unlock(&team->lock); return err; @@ -2681,9 +2678,7 @@ static int team_device_event(struct notifier_block *unused, break; case NETDEV_CHANGEMTU: /* Forbid to change mtu of underlaying device */ - if (!port->team->port_mtu_change_allowed) - return NOTIFY_BAD; - break; + return NOTIFY_BAD; case NETDEV_PRE_TYPE_CHANGE: /* Forbid to change type of underlaying device */ return NOTIFY_BAD; diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c index 3b449c4ecf7..d33c3ae2fce 100644 --- a/drivers/net/usb/ax88179_178a.c +++ b/drivers/net/usb/ax88179_178a.c @@ -695,7 +695,6 @@ static int ax88179_set_mac_addr(struct net_device *net, void *p) { struct usbnet *dev = netdev_priv(net); struct sockaddr *addr = p; - int ret; if (netif_running(net)) return -EBUSY; @@ -705,12 +704,8 @@ static int ax88179_set_mac_addr(struct net_device *net, void *p) memcpy(net->dev_addr, addr->sa_data, ETH_ALEN); /* Set the MAC address */ - ret = ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_NODE_ID, ETH_ALEN, + return ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_NODE_ID, ETH_ALEN, ETH_ALEN, net->dev_addr); - if (ret < 0) - return ret; - - return 0; } static const struct net_device_ops ax88179_netdev_ops = { diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c index 
7cabe458390..25ba7eca9a1 100644 --- a/drivers/net/usb/cdc_mbim.c +++ b/drivers/net/usb/cdc_mbim.c @@ -120,16 +120,6 @@ static void cdc_mbim_unbind(struct usbnet *dev, struct usb_interface *intf) cdc_ncm_unbind(dev, intf); } -/* verify that the ethernet protocol is IPv4 or IPv6 */ -static bool is_ip_proto(__be16 proto) -{ - switch (proto) { - case htons(ETH_P_IP): - case htons(ETH_P_IPV6): - return true; - } - return false; -} static struct sk_buff *cdc_mbim_tx_fixup(struct usbnet *dev, struct sk_buff *skb, gfp_t flags) { @@ -138,7 +128,6 @@ static struct sk_buff *cdc_mbim_tx_fixup(struct usbnet *dev, struct sk_buff *skb struct cdc_ncm_ctx *ctx = info->ctx; __le32 sign = cpu_to_le32(USB_CDC_MBIM_NDP16_IPS_SIGN); u16 tci = 0; - bool is_ip; u8 *c; if (!ctx) @@ -148,32 +137,25 @@ static struct sk_buff *cdc_mbim_tx_fixup(struct usbnet *dev, struct sk_buff *skb if (skb->len <= ETH_HLEN) goto error; - /* Some applications using e.g. packet sockets will - * bypass the VLAN acceleration and create tagged - * ethernet frames directly. We primarily look for - * the accelerated out-of-band tag, but fall back if - * required - */ - skb_reset_mac_header(skb); - if (vlan_get_tag(skb, &tci) < 0 && skb->len > VLAN_ETH_HLEN && - __vlan_get_tag(skb, &tci) == 0) { - is_ip = is_ip_proto(vlan_eth_hdr(skb)->h_vlan_encapsulated_proto); - skb_pull(skb, VLAN_ETH_HLEN); - } else { - is_ip = is_ip_proto(eth_hdr(skb)->h_proto); - skb_pull(skb, ETH_HLEN); - } - /* mapping VLANs to MBIM sessions: * no tag => IPS session <0> * 1 - 255 => IPS session <vlanid> * 256 - 511 => DSS session <vlanid - 256> * 512 - 4095 => unsupported, drop */ + vlan_get_tag(skb, &tci); + switch (tci & 0x0f00) { case 0x0000: /* VLAN ID 0 - 255 */ - if (!is_ip) + /* verify that datagram is IPv4 or IPv6 */ + skb_reset_mac_header(skb); + switch (eth_hdr(skb)->h_proto) { + case htons(ETH_P_IP): + case htons(ETH_P_IPV6): + break; + default: goto error; + } c = (u8 *)&sign; c[3] = tci; break; @@ -187,6 +169,7 @@ static struct sk_buff *cdc_mbim_tx_fixup(struct usbnet *dev, struct sk_buff *skb "unsupported tci=0x%04x\n", tci); goto error; } + skb_pull(skb, ETH_HLEN); } spin_lock_bh(&ctx->mtx); diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index 6c584f8a226..37d9785974f 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -647,29 +647,11 @@ static const struct usb_device_id products[] = { {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, - {QMI_FIXED_INTF(0x0846, 0x68a2, 8)}, {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ - {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ - {QMI_FIXED_INTF(0x16d8, 0x6007, 0)}, /* CMOTech CHE-628S */ - {QMI_FIXED_INTF(0x16d8, 0x6008, 0)}, /* CMOTech CMU-301 */ - {QMI_FIXED_INTF(0x16d8, 0x6280, 0)}, /* CMOTech CHU-628 */ - {QMI_FIXED_INTF(0x16d8, 0x7001, 0)}, /* CMOTech CHU-720S */ - {QMI_FIXED_INTF(0x16d8, 0x7002, 0)}, /* CMOTech 7002 */ - {QMI_FIXED_INTF(0x16d8, 0x7003, 4)}, /* CMOTech CHU-629K */ - {QMI_FIXED_INTF(0x16d8, 0x7004, 3)}, /* CMOTech 7004 */ - {QMI_FIXED_INTF(0x16d8, 0x7006, 5)}, /* CMOTech CGU-629 */ - {QMI_FIXED_INTF(0x16d8, 0x700a, 4)}, /* CMOTech CHU-629S */ - {QMI_FIXED_INTF(0x16d8, 0x7211, 0)}, /* CMOTech CHU-720I */ - {QMI_FIXED_INTF(0x16d8, 0x7212, 0)}, /* CMOTech 7212 */ - {QMI_FIXED_INTF(0x16d8, 0x7213, 0)}, /* CMOTech 7213 */ - {QMI_FIXED_INTF(0x16d8, 0x7251, 1)}, /* CMOTech 7251 */ - {QMI_FIXED_INTF(0x16d8, 0x7252, 1)}, 
/* CMOTech 7252 */ - {QMI_FIXED_INTF(0x16d8, 0x7253, 1)}, /* CMOTech 7253 */ {QMI_FIXED_INTF(0x19d2, 0x0002, 1)}, {QMI_FIXED_INTF(0x19d2, 0x0012, 1)}, {QMI_FIXED_INTF(0x19d2, 0x0017, 3)}, - {QMI_FIXED_INTF(0x19d2, 0x0019, 3)}, /* ONDA MT689DC */ {QMI_FIXED_INTF(0x19d2, 0x0021, 4)}, {QMI_FIXED_INTF(0x19d2, 0x0025, 1)}, {QMI_FIXED_INTF(0x19d2, 0x0031, 4)}, @@ -716,46 +698,22 @@ static const struct usb_device_id products[] = { {QMI_FIXED_INTF(0x19d2, 0x1255, 3)}, {QMI_FIXED_INTF(0x19d2, 0x1255, 4)}, {QMI_FIXED_INTF(0x19d2, 0x1256, 4)}, - {QMI_FIXED_INTF(0x19d2, 0x1270, 5)}, /* ZTE MF667 */ {QMI_FIXED_INTF(0x19d2, 0x1401, 2)}, {QMI_FIXED_INTF(0x19d2, 0x1402, 2)}, /* ZTE MF60 */ {QMI_FIXED_INTF(0x19d2, 0x1424, 2)}, {QMI_FIXED_INTF(0x19d2, 0x1425, 2)}, {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ - {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */ {QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */ - {QMI_FIXED_INTF(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC73xx */ - {QMI_FIXED_INTF(0x1199, 0x68c0, 10)}, /* Sierra Wireless MC73xx */ - {QMI_FIXED_INTF(0x1199, 0x68c0, 11)}, /* Sierra Wireless MC73xx */ {QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */ - {QMI_FIXED_INTF(0x1199, 0x901f, 8)}, /* Sierra Wireless EM7355 */ - {QMI_FIXED_INTF(0x1199, 0x9041, 8)}, /* Sierra Wireless MC7305/MC7355 */ {QMI_FIXED_INTF(0x1199, 0x9051, 8)}, /* Netgear AirCard 340U */ - {QMI_FIXED_INTF(0x1199, 0x9057, 8)}, {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ - {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ - {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ - {QMI_FIXED_INTF(0x1bc7, 0x1201, 2)}, /* Telit LE920 */ - {QMI_FIXED_INTF(0x0b3c, 0xc000, 4)}, /* Olivetti Olicard 100 */ - {QMI_FIXED_INTF(0x0b3c, 0xc001, 4)}, /* Olivetti Olicard 120 */ - {QMI_FIXED_INTF(0x0b3c, 0xc002, 4)}, /* Olivetti Olicard 140 */ - {QMI_FIXED_INTF(0x0b3c, 0xc004, 6)}, /* Olivetti Olicard 155 */ - {QMI_FIXED_INTF(0x0b3c, 0xc005, 6)}, /* Olivetti Olicard 200 */ - {QMI_FIXED_INTF(0x0b3c, 0xc00a, 6)}, /* Olivetti Olicard 160 */ - {QMI_FIXED_INTF(0x0b3c, 0xc00b, 4)}, /* Olivetti Olicard 500 */ - {QMI_FIXED_INTF(0x1e2d, 0x0060, 4)}, /* Cinterion PLxx */ - {QMI_FIXED_INTF(0x1e2d, 0x0053, 4)}, /* Cinterion PHxx,PXxx */ - {QMI_FIXED_INTF(0x413c, 0x81a2, 8)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */ - {QMI_FIXED_INTF(0x413c, 0x81a3, 8)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */ - {QMI_FIXED_INTF(0x413c, 0x81a4, 8)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */ - {QMI_FIXED_INTF(0x413c, 0x81a8, 8)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card */ - {QMI_FIXED_INTF(0x413c, 0x81a9, 8)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card */ + {QMI_FIXED_INTF(0x1e2d, 0x12d1, 4)}, /* Cinterion PLxx */ /* 4. 
Gobi 1000 devices */ {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */ @@ -789,7 +747,6 @@ static const struct usb_device_id products[] = { {QMI_GOBI_DEVICE(0x05c6, 0x9265)}, /* Asus Gobi 2000 Modem device (VR305) */ {QMI_GOBI_DEVICE(0x05c6, 0x9235)}, /* Top Global Gobi 2000 Modem device (VR306) */ {QMI_GOBI_DEVICE(0x05c6, 0x9275)}, /* iRex Technologies Gobi 2000 Modem device (VR307) */ - {QMI_GOBI_DEVICE(0x0af0, 0x8120)}, /* Option GTM681W */ {QMI_GOBI_DEVICE(0x1199, 0x68a5)}, /* Sierra Wireless Modem */ {QMI_GOBI_DEVICE(0x1199, 0x68a9)}, /* Sierra Wireless Modem */ {QMI_GOBI_DEVICE(0x1199, 0x9001)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */ @@ -803,6 +760,7 @@ static const struct usb_device_id products[] = { {QMI_GOBI_DEVICE(0x1199, 0x9009)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */ {QMI_GOBI_DEVICE(0x1199, 0x900a)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */ {QMI_GOBI_DEVICE(0x1199, 0x9011)}, /* Sierra Wireless Gobi 2000 Modem device (MC8305) */ + {QMI_FIXED_INTF(0x1199, 0x9011, 5)}, /* alternate interface number!? */ {QMI_GOBI_DEVICE(0x16d8, 0x8002)}, /* CMDTech Gobi 2000 Modem device (VU922) */ {QMI_GOBI_DEVICE(0x05c6, 0x9205)}, /* Gobi 2000 Modem device */ {QMI_GOBI_DEVICE(0x1199, 0x9013)}, /* Sierra Wireless Gobi 3000 Modem device (MC8355) */ diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c index 0e746f01025..42f15952025 100644 --- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -727,12 +727,14 @@ EXPORT_SYMBOL_GPL(usbnet_unlink_rx_urbs); // precondition: never called in_interrupt static void usbnet_terminate_urbs(struct usbnet *dev) { + DECLARE_WAIT_QUEUE_HEAD_ONSTACK(unlink_wakeup); DECLARE_WAITQUEUE(wait, current); int temp; /* ensure there are no more active urbs */ - add_wait_queue(&dev->wait, &wait); + add_wait_queue(&unlink_wakeup, &wait); set_current_state(TASK_UNINTERRUPTIBLE); + dev->wait = &unlink_wakeup; temp = unlink_urbs(dev, &dev->txq) + unlink_urbs(dev, &dev->rxq); @@ -746,14 +748,15 @@ static void usbnet_terminate_urbs(struct usbnet *dev) "waited for %d urb completions\n", temp); } set_current_state(TASK_RUNNING); - remove_wait_queue(&dev->wait, &wait); + dev->wait = NULL; + remove_wait_queue(&unlink_wakeup, &wait); } int usbnet_stop (struct net_device *net) { struct usbnet *dev = netdev_priv(net); struct driver_info *info = dev->driver_info; - int retval, pm; + int retval; clear_bit(EVENT_DEV_OPEN, &dev->flags); netif_stop_queue (net); @@ -763,8 +766,6 @@ int usbnet_stop (struct net_device *net) net->stats.rx_packets, net->stats.tx_packets, net->stats.rx_errors, net->stats.tx_errors); - /* to not race resume */ - pm = usb_autopm_get_interface(dev->intf); /* allow minidriver to stop correctly (wireless devices to turn off * radio etc) */ if (info->stop) { @@ -791,9 +792,6 @@ int usbnet_stop (struct net_device *net) dev->flags = 0; del_timer_sync (&dev->delay); tasklet_kill (&dev->bh); - if (!pm) - usb_autopm_put_interface(dev->intf); - if (info->manage_power && !test_and_clear_bit(EVENT_NO_RUNTIME_PM, &dev->flags)) info->manage_power(dev, 0); @@ -1362,12 +1360,11 @@ static void usbnet_bh (unsigned long param) /* restart RX again after disabling due to high error rate */ clear_bit(EVENT_RX_KILL, &dev->flags); - /* waiting for all pending urbs to complete? - * only then can we forgo submitting anew - */ - if (waitqueue_active(&dev->wait)) { - if (dev->txq.qlen + dev->rxq.qlen + dev->done.qlen == 0) - wake_up_all(&dev->wait); + // waiting for all pending urbs to complete? 
+ if (dev->wait) { + if ((dev->txq.qlen + dev->rxq.qlen + dev->done.qlen) == 0) { + wake_up (dev->wait); + } // or are we maybe short a few urbs? } else if (netif_running (dev->net) && @@ -1505,7 +1502,6 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod) dev->driver_name = name; dev->msg_enable = netif_msg_init (msg_level, NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK); - init_waitqueue_head(&dev->wait); skb_queue_head_init (&dev->rxq); skb_queue_head_init (&dev->txq); skb_queue_head_init (&dev->done); @@ -1701,10 +1697,9 @@ int usbnet_resume (struct usb_interface *intf) spin_unlock_irq(&dev->txq.lock); if (test_bit(EVENT_DEV_OPEN, &dev->flags)) { - /* handle remote wakeup ASAP - * we cannot race against stop - */ - if (netif_device_present(dev->net) && + /* handle remote wakeup ASAP */ + if (!dev->wait && + netif_device_present(dev->net) && !timer_pending(&dev->delay) && !test_bit(EVENT_RX_HALT, &dev->flags)) rx_alloc_submit(dev, GFP_NOIO); diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 2835bfe151b..a0c05e07fee 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1597,8 +1597,7 @@ static int virtnet_probe(struct virtio_device *vdev) /* If we can receive ANY GSO packets, we must allocate large ones. */ if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) || - virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN) || - virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO)) + virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN)) vi->big_packets = true; if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index d0815855d87..55a62cae2cb 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1741,20 +1741,11 @@ vmxnet3_netpoll(struct net_device *netdev) { struct vmxnet3_adapter *adapter = netdev_priv(netdev); - switch (adapter->intr.type) { -#ifdef CONFIG_PCI_MSI - case VMXNET3_IT_MSIX: { - int i; - for (i = 0; i < adapter->num_rx_queues; i++) - vmxnet3_msix_rx(0, &adapter->rx_queue[i]); - break; - } -#endif - case VMXNET3_IT_MSI: - default: - vmxnet3_intr(0, adapter->netdev); - break; - } + if (adapter->intr.mask_mode == VMXNET3_IMM_ACTIVE) + vmxnet3_disable_all_intrs(adapter); + + vmxnet3_do_poll(adapter, adapter->rx_queue[0].rx_ring[0].size); + vmxnet3_enable_all_intrs(adapter); } #endif /* CONFIG_NET_POLL_CONTROLLER */ diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c index a1dc186c6f6..054489fdf54 100644 --- a/drivers/net/vxlan.c +++ b/drivers/net/vxlan.c @@ -845,9 +845,6 @@ static int arp_reduce(struct net_device *dev, struct sk_buff *skb) neigh_release(n); - if (reply == NULL) - goto out; - skb_reset_mac_header(reply); __skb_pull(reply, skb_network_offset(reply)); reply->ip_summed = CHECKSUM_UNNECESSARY; @@ -1093,7 +1090,7 @@ static netdev_tx_t vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, iph->daddr = dst; iph->saddr = fl4.saddr; iph->ttl = ttl ? 
: ip4_dst_hoplimit(&rt->dst); - __ip_select_ident(iph, skb_shinfo(skb)->gso_segs ?: 1); + __ip_select_ident(iph, &rt->dst, (skb_shinfo(skb)->gso_segs ?: 1) - 1); nf_reset(skb); @@ -1314,7 +1311,7 @@ static void vxlan_setup(struct net_device *dev) eth_hw_addr_random(dev); ether_setup(dev); - dev->needed_headroom = ETH_HLEN + VXLAN_HEADROOM; + dev->hard_header_len = ETH_HLEN + VXLAN_HEADROOM; dev->netdev_ops = &vxlan_netdev_ops; dev->destructor = vxlan_free; @@ -1454,7 +1451,7 @@ static int vxlan_newlink(struct net *net, struct net_device *dev, dev->mtu = lowerdev->mtu - VXLAN_HEADROOM; /* update header length based on lower device */ - dev->needed_headroom = lowerdev->hard_header_len + + dev->hard_header_len = lowerdev->hard_header_len + VXLAN_HEADROOM; } diff --git a/drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h b/drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h index 4ae3cf7283e..999ab08c34e 100644 --- a/drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h +++ b/drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h @@ -56,7 +56,7 @@ static const u32 ar9462_2p0_baseband_postamble[][5] = { {0x00009e14, 0x37b95d5e, 0x37b9605e, 0x3236605e, 0x32365a5e}, {0x00009e18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x00009e1c, 0x0001cf9c, 0x0001cf9c, 0x00021f9c, 0x00021f9c}, - {0x00009e20, 0x000003a5, 0x000003a5, 0x000003a5, 0x000003a5}, + {0x00009e20, 0x000003b5, 0x000003b5, 0x000003ce, 0x000003ce}, {0x00009e2c, 0x0000001c, 0x0000001c, 0x00000021, 0x00000021}, {0x00009e3c, 0xcf946220, 0xcf946220, 0xcfd5c782, 0xcfd5c282}, {0x00009e44, 0x62321e27, 0x62321e27, 0xfe291e27, 0xfe291e27}, @@ -95,7 +95,7 @@ static const u32 ar9462_2p0_baseband_postamble[][5] = { {0x0000ae04, 0x001c0000, 0x001c0000, 0x001c0000, 0x00100000}, {0x0000ae18, 0x00000000, 0x00000000, 0x00000000, 0x00000000}, {0x0000ae1c, 0x0000019c, 0x0000019c, 0x0000019c, 0x0000019c}, - {0x0000ae20, 0x000001a6, 0x000001a6, 0x000001aa, 0x000001aa}, + {0x0000ae20, 0x000001b5, 0x000001b5, 0x000001ce, 0x000001ce}, {0x0000b284, 0x00000000, 0x00000000, 0x00000550, 0x00000550}, }; diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c index 0c9b2f1c693..e752f5d4995 100644 --- a/drivers/net/wireless/ath/ath9k/xmit.c +++ b/drivers/net/wireless/ath/ath9k/xmit.c @@ -1255,16 +1255,14 @@ void ath_tx_aggr_sleep(struct ieee80211_sta *sta, struct ath_softc *sc, for (tidno = 0, tid = &an->tid[tidno]; tidno < IEEE80211_NUM_TIDS; tidno++, tid++) { + if (!tid->sched) + continue; + ac = tid->ac; txq = ac->txq; ath_txq_lock(sc, txq); - if (!tid->sched) { - ath_txq_unlock(sc, txq); - continue; - } - buffered = !skb_queue_empty(&tid->buf_q); tid->sched = false; diff --git a/drivers/net/wireless/ath/carl9170/carl9170.h b/drivers/net/wireless/ath/carl9170/carl9170.h index 95a334f0719..9dce106cd6d 100644 --- a/drivers/net/wireless/ath/carl9170/carl9170.h +++ b/drivers/net/wireless/ath/carl9170/carl9170.h @@ -253,7 +253,6 @@ struct ar9170 { atomic_t rx_work_urbs; atomic_t rx_pool_urbs; kernel_ulong_t features; - bool usb_ep_cmd_is_bulk; /* firmware settings */ struct completion fw_load_wait; diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c index 83d20c8b2ad..307bc0ddff9 100644 --- a/drivers/net/wireless/ath/carl9170/usb.c +++ b/drivers/net/wireless/ath/carl9170/usb.c @@ -621,16 +621,9 @@ int __carl9170_exec_cmd(struct ar9170 *ar, struct carl9170_cmd *cmd, goto err_free; } - if (ar->usb_ep_cmd_is_bulk) - usb_fill_bulk_urb(urb, ar->udev, - usb_sndbulkpipe(ar->udev, 
AR9170_USB_EP_CMD), - cmd, cmd->hdr.len + 4, - carl9170_usb_cmd_complete, ar); - else - usb_fill_int_urb(urb, ar->udev, - usb_sndintpipe(ar->udev, AR9170_USB_EP_CMD), - cmd, cmd->hdr.len + 4, - carl9170_usb_cmd_complete, ar, 1); + usb_fill_int_urb(urb, ar->udev, usb_sndintpipe(ar->udev, + AR9170_USB_EP_CMD), cmd, cmd->hdr.len + 4, + carl9170_usb_cmd_complete, ar, 1); if (free_buf) urb->transfer_flags |= URB_FREE_BUFFER; @@ -1039,10 +1032,9 @@ static void carl9170_usb_firmware_step2(const struct firmware *fw, static int carl9170_usb_probe(struct usb_interface *intf, const struct usb_device_id *id) { - struct usb_endpoint_descriptor *ep; struct ar9170 *ar; struct usb_device *udev; - int i, err; + int err; err = usb_reset_device(interface_to_usbdev(intf)); if (err) @@ -1058,21 +1050,6 @@ static int carl9170_usb_probe(struct usb_interface *intf, ar->intf = intf; ar->features = id->driver_info; - /* We need to remember the type of endpoint 4 because it differs - * between high- and full-speed configuration. The high-speed - * configuration specifies it as interrupt and the full-speed - * configuration as bulk endpoint. This information is required - * later when sending urbs to that endpoint. - */ - for (i = 0; i < intf->cur_altsetting->desc.bNumEndpoints; ++i) { - ep = &intf->cur_altsetting->endpoint[i].desc; - - if (usb_endpoint_num(ep) == AR9170_USB_EP_CMD && - usb_endpoint_dir_out(ep) && - usb_endpoint_type(ep) == USB_ENDPOINT_XFER_BULK) - ar->usb_ep_cmd_is_bulk = true; - } - usb_set_intfdata(intf, ar); SET_IEEE80211_DEV(ar->hw, &intf->dev); diff --git a/drivers/net/wireless/b43/phy_n.c b/drivers/net/wireless/b43/phy_n.c index 80ecca3e146..7c970d3ae35 100644 --- a/drivers/net/wireless/b43/phy_n.c +++ b/drivers/net/wireless/b43/phy_n.c @@ -5175,22 +5175,22 @@ static void b43_nphy_channel_setup(struct b43_wldev *dev, int ch = new_channel->hw_value; u16 old_band_5ghz; - u16 tmp16; + u32 tmp32; old_band_5ghz = b43_phy_read(dev, B43_NPHY_BANDCTL) & B43_NPHY_BANDCTL_5GHZ; if (new_channel->band == IEEE80211_BAND_5GHZ && !old_band_5ghz) { - tmp16 = b43_read16(dev, B43_MMIO_PSM_PHY_HDR); - b43_write16(dev, B43_MMIO_PSM_PHY_HDR, tmp16 | 4); + tmp32 = b43_read32(dev, B43_MMIO_PSM_PHY_HDR); + b43_write32(dev, B43_MMIO_PSM_PHY_HDR, tmp32 | 4); b43_phy_set(dev, B43_PHY_B_BBCFG, 0xC000); - b43_write16(dev, B43_MMIO_PSM_PHY_HDR, tmp16); + b43_write32(dev, B43_MMIO_PSM_PHY_HDR, tmp32); b43_phy_set(dev, B43_NPHY_BANDCTL, B43_NPHY_BANDCTL_5GHZ); } else if (new_channel->band == IEEE80211_BAND_2GHZ && old_band_5ghz) { b43_phy_mask(dev, B43_NPHY_BANDCTL, ~B43_NPHY_BANDCTL_5GHZ); - tmp16 = b43_read16(dev, B43_MMIO_PSM_PHY_HDR); - b43_write16(dev, B43_MMIO_PSM_PHY_HDR, tmp16 | 4); + tmp32 = b43_read32(dev, B43_MMIO_PSM_PHY_HDR); + b43_write32(dev, B43_MMIO_PSM_PHY_HDR, tmp32 | 4); b43_phy_mask(dev, B43_PHY_B_BBCFG, 0x3FFF); - b43_write16(dev, B43_MMIO_PSM_PHY_HDR, tmp16); + b43_write32(dev, B43_MMIO_PSM_PHY_HDR, tmp32); } b43_chantab_phy_upload(dev, e); diff --git a/drivers/net/wireless/b43/xmit.c b/drivers/net/wireless/b43/xmit.c index ebcce00ce06..e85d34b7603 100644 --- a/drivers/net/wireless/b43/xmit.c +++ b/drivers/net/wireless/b43/xmit.c @@ -810,13 +810,9 @@ void b43_rx(struct b43_wldev *dev, struct sk_buff *skb, const void *_rxhdr) break; case B43_PHYTYPE_G: status.band = IEEE80211_BAND_2GHZ; - /* Somewhere between 478.104 and 508.1084 firmware for G-PHY - * has been modified to be compatible with N-PHY and others. 
- */ - if (dev->fw.rev >= 508) - status.freq = ieee80211_channel_to_frequency(chanid, status.band); - else - status.freq = chanid + 2400; + /* chanid is the radio channel cookie value as used + * to tune the radio. */ + status.freq = chanid + 2400; break; case B43_PHYTYPE_N: case B43_PHYTYPE_LP: diff --git a/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c index 8e8543cfe48..3a6544710c8 100644 --- a/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c +++ b/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c @@ -426,12 +426,6 @@ static int brcms_ops_start(struct ieee80211_hw *hw) bool blocked; int err; - if (!wl->ucode.bcm43xx_bomminor) { - err = brcms_request_fw(wl, wl->wlc->hw->d11core); - if (err) - return -ENOENT; - } - ieee80211_wake_queues(hw); spin_lock_bh(&wl->lock); blocked = brcms_rfkill_set_hw_state(wl); @@ -439,6 +433,14 @@ static int brcms_ops_start(struct ieee80211_hw *hw) if (!blocked) wiphy_rfkill_stop_polling(wl->pub->ieee_hw->wiphy); + if (!wl->ucode.bcm43xx_bomminor) { + err = brcms_request_fw(wl, wl->wlc->hw->d11core); + if (err) { + brcms_remove(wl->wlc->hw->d11core); + return -ENOENT; + } + } + spin_lock_bh(&wl->lock); /* avoid acknowledging frames before a non-monitor device is added */ wl->mute_tx = true; diff --git a/drivers/net/wireless/iwlwifi/dvm/mac80211.c b/drivers/net/wireless/iwlwifi/dvm/mac80211.c index e9d09f19f85..e04f3da1ccb 100644 --- a/drivers/net/wireless/iwlwifi/dvm/mac80211.c +++ b/drivers/net/wireless/iwlwifi/dvm/mac80211.c @@ -739,24 +739,6 @@ static int iwlagn_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, return ret; } -static inline bool iwl_enable_rx_ampdu(const struct iwl_cfg *cfg) -{ - if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_RXAGG) - return false; - return true; -} - -static inline bool iwl_enable_tx_ampdu(const struct iwl_cfg *cfg) -{ - if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_TXAGG) - return false; - if (iwlwifi_mod_params.disable_11n & IWL_ENABLE_HT_TXAGG) - return true; - - /* disabled by default */ - return false; -} - static int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif, enum ieee80211_ampdu_mlme_action action, @@ -778,7 +760,7 @@ static int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, switch (action) { case IEEE80211_AMPDU_RX_START: - if (!iwl_enable_rx_ampdu(priv->cfg)) + if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_RXAGG) break; IWL_DEBUG_HT(priv, "start Rx\n"); ret = iwl_sta_rx_agg_start(priv, sta, tid, *ssn); @@ -790,7 +772,7 @@ static int iwlagn_mac_ampdu_action(struct ieee80211_hw *hw, case IEEE80211_AMPDU_TX_START: if (!priv->trans->ops->txq_enable) break; - if (!iwl_enable_tx_ampdu(priv->cfg)) + if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_TXAGG) break; IWL_DEBUG_HT(priv, "start Tx\n"); ret = iwlagn_tx_agg_start(priv, vif, sta, tid, ssn); diff --git a/drivers/net/wireless/iwlwifi/dvm/main.c b/drivers/net/wireless/iwlwifi/dvm/main.c index c2b8e49d00d..a8afc7bee54 100644 --- a/drivers/net/wireless/iwlwifi/dvm/main.c +++ b/drivers/net/wireless/iwlwifi/dvm/main.c @@ -252,17 +252,13 @@ static void iwl_bg_bt_runtime_config(struct work_struct *work) struct iwl_priv *priv = container_of(work, struct iwl_priv, bt_runtime_config); - mutex_lock(&priv->mutex); if (test_bit(STATUS_EXIT_PENDING, &priv->status)) - goto out; + return; /* dont send host command if rf-kill is on */ if (!iwl_is_ready_rf(priv)) - goto out; - + return; iwlagn_send_advance_bt_config(priv); -out: - 
mutex_unlock(&priv->mutex); } static void iwl_bg_bt_full_concurrency(struct work_struct *work) diff --git a/drivers/net/wireless/iwlwifi/dvm/sta.c b/drivers/net/wireless/iwlwifi/dvm/sta.c index e800002d615..c3c13ce96eb 100644 --- a/drivers/net/wireless/iwlwifi/dvm/sta.c +++ b/drivers/net/wireless/iwlwifi/dvm/sta.c @@ -590,7 +590,6 @@ void iwl_deactivate_station(struct iwl_priv *priv, const u8 sta_id, sizeof(priv->tid_data[sta_id][tid])); priv->stations[sta_id].used &= ~IWL_STA_DRIVER_ACTIVE; - priv->stations[sta_id].used &= ~IWL_STA_UCODE_INPROGRESS; priv->num_stations--; diff --git a/drivers/net/wireless/iwlwifi/dvm/tx.c b/drivers/net/wireless/iwlwifi/dvm/tx.c index 2b5dbff9ead..20e65d3cc3b 100644 --- a/drivers/net/wireless/iwlwifi/dvm/tx.c +++ b/drivers/net/wireless/iwlwifi/dvm/tx.c @@ -1322,6 +1322,8 @@ int iwlagn_rx_reply_compressed_ba(struct iwl_priv *priv, struct iwl_compressed_ba_resp *ba_resp = (void *)pkt->data; struct iwl_ht_agg *agg; struct sk_buff_head reclaimed_skbs; + struct ieee80211_tx_info *info; + struct ieee80211_hdr *hdr; struct sk_buff *skb; int sta_id; int tid; @@ -1408,28 +1410,22 @@ int iwlagn_rx_reply_compressed_ba(struct iwl_priv *priv, freed = 0; skb_queue_walk(&reclaimed_skbs, skb) { - struct ieee80211_hdr *hdr = (void *)skb->data; - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + hdr = (struct ieee80211_hdr *)skb->data; if (ieee80211_is_data_qos(hdr->frame_control)) freed++; else WARN_ON_ONCE(1); + info = IEEE80211_SKB_CB(skb); iwl_trans_free_tx_cmd(priv->trans, info->driver_data[1]); - memset(&info->status, 0, sizeof(info->status)); - /* Packet was transmitted successfully, failures come as single - * frames because before failing a frame the firmware transmits - * it without aggregation at least once. - */ - info->flags |= IEEE80211_TX_STAT_ACK; - if (freed == 1) { /* this is the first skb we deliver in this batch */ /* put the rate scaling data there */ info = IEEE80211_SKB_CB(skb); memset(&info->status, 0, sizeof(info->status)); + info->flags |= IEEE80211_TX_STAT_ACK; info->flags |= IEEE80211_TX_STAT_AMPDU; info->status.ampdu_ack_len = ba_resp->txed_2_done; info->status.ampdu_len = ba_resp->txed; diff --git a/drivers/net/wireless/iwlwifi/iwl-drv.c b/drivers/net/wireless/iwlwifi/iwl-drv.c index 96050e6c3d5..40fed1f511e 100644 --- a/drivers/net/wireless/iwlwifi/iwl-drv.c +++ b/drivers/net/wireless/iwlwifi/iwl-drv.c @@ -1211,7 +1211,7 @@ module_param_named(swcrypto, iwlwifi_mod_params.sw_crypto, int, S_IRUGO); MODULE_PARM_DESC(swcrypto, "using crypto in software (default 0 [hardware])"); module_param_named(11n_disable, iwlwifi_mod_params.disable_11n, uint, S_IRUGO); MODULE_PARM_DESC(11n_disable, - "disable 11n functionality, bitmap: 1: full, 2: disable agg TX, 4: disable agg RX, 8 enable agg TX"); + "disable 11n functionality, bitmap: 1: full, 2: agg TX, 4: agg RX"); module_param_named(amsdu_size_8K, iwlwifi_mod_params.amsdu_size_8K, int, S_IRUGO); MODULE_PARM_DESC(amsdu_size_8K, "enable 8K amsdu size (default 0)"); diff --git a/drivers/net/wireless/iwlwifi/iwl-modparams.h b/drivers/net/wireless/iwlwifi/iwl-modparams.h index e99bc55046e..d6f6c37c09f 100644 --- a/drivers/net/wireless/iwlwifi/iwl-modparams.h +++ b/drivers/net/wireless/iwlwifi/iwl-modparams.h @@ -79,12 +79,9 @@ enum iwl_power_level { IWL_POWER_NUM }; -enum iwl_disable_11n { - IWL_DISABLE_HT_ALL = BIT(0), - IWL_DISABLE_HT_TXAGG = BIT(1), - IWL_DISABLE_HT_RXAGG = BIT(2), - IWL_ENABLE_HT_TXAGG = BIT(3), -}; +#define IWL_DISABLE_HT_ALL BIT(0) +#define IWL_DISABLE_HT_TXAGG BIT(1) +#define 
IWL_DISABLE_HT_RXAGG BIT(2) /** * struct iwl_mod_params @@ -93,7 +90,7 @@ enum iwl_disable_11n { * * @sw_crypto: using hardware encryption, default = 0 * @disable_11n: disable 11n capabilities, default = 0, - * use IWL_[DIS,EN]ABLE_HT_* constants + * use IWL_DISABLE_HT_* constants * @amsdu_size_8K: enable 8K amsdu size, default = 0 * @restart_fw: restart firmware, default = 1 * @plcp_check: enable plcp health check, default = true diff --git a/drivers/net/wireless/iwlwifi/mvm/bt-coex.c b/drivers/net/wireless/iwlwifi/mvm/bt-coex.c index 9649f511bd5..810bfa5f6de 100644 --- a/drivers/net/wireless/iwlwifi/mvm/bt-coex.c +++ b/drivers/net/wireless/iwlwifi/mvm/bt-coex.c @@ -523,11 +523,8 @@ void iwl_mvm_bt_rssi_event(struct iwl_mvm *mvm, struct ieee80211_vif *vif, mutex_lock(&mvm->mutex); - /* - * Rssi update while not associated - can happen since the statistics - * are handled asynchronously - */ - if (mvmvif->ap_sta_id == IWL_MVM_STATION_COUNT) + /* Rssi update while not associated ?! */ + if (WARN_ON_ONCE(mvmvif->ap_sta_id == IWL_MVM_STATION_COUNT)) goto out_unlock; /* No open connection - reports should be disabled */ diff --git a/drivers/net/wireless/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/iwlwifi/mvm/mac80211.c index 88b9c096469..f7545e06ce2 100644 --- a/drivers/net/wireless/iwlwifi/mvm/mac80211.c +++ b/drivers/net/wireless/iwlwifi/mvm/mac80211.c @@ -278,24 +278,6 @@ static void iwl_mvm_mac_tx(struct ieee80211_hw *hw, ieee80211_free_txskb(hw, skb); } -static inline bool iwl_enable_rx_ampdu(const struct iwl_cfg *cfg) -{ - if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_RXAGG) - return false; - return true; -} - -static inline bool iwl_enable_tx_ampdu(const struct iwl_cfg *cfg) -{ - if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_TXAGG) - return false; - if (iwlwifi_mod_params.disable_11n & IWL_ENABLE_HT_TXAGG) - return true; - - /* enabled by default */ - return true; -} - static int iwl_mvm_mac_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif, enum ieee80211_ampdu_mlme_action action, @@ -315,7 +297,7 @@ static int iwl_mvm_mac_ampdu_action(struct ieee80211_hw *hw, switch (action) { case IEEE80211_AMPDU_RX_START: - if (!iwl_enable_rx_ampdu(mvm->cfg)) { + if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_RXAGG) { ret = -EINVAL; break; } @@ -325,7 +307,7 @@ static int iwl_mvm_mac_ampdu_action(struct ieee80211_hw *hw, ret = iwl_mvm_sta_rx_agg(mvm, sta, tid, 0, false); break; case IEEE80211_AMPDU_TX_START: - if (!iwl_enable_tx_ampdu(mvm->cfg)) { + if (iwlwifi_mod_params.disable_11n & IWL_DISABLE_HT_TXAGG) { ret = -EINVAL; break; } diff --git a/drivers/net/wireless/iwlwifi/mvm/tx.c b/drivers/net/wireless/iwlwifi/mvm/tx.c index 4ec8385e430..a2e6112e91e 100644 --- a/drivers/net/wireless/iwlwifi/mvm/tx.c +++ b/drivers/net/wireless/iwlwifi/mvm/tx.c @@ -819,12 +819,16 @@ int iwl_mvm_rx_ba_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb, struct iwl_mvm_ba_notif *ba_notif = (void *)pkt->data; struct sk_buff_head reclaimed_skbs; struct iwl_mvm_tid_data *tid_data; + struct ieee80211_tx_info *info; struct ieee80211_sta *sta; struct iwl_mvm_sta *mvmsta; + struct ieee80211_hdr *hdr; struct sk_buff *skb; int sta_id, tid, freed; + /* "flow" corresponds to Tx queue */ u16 scd_flow = le16_to_cpu(ba_notif->scd_flow); + /* "ssn" is start of block-ack Tx window, corresponds to index * (in Tx queue's circular buffer) of first TFD/frame in window */ u16 ba_resp_scd_ssn = le16_to_cpu(ba_notif->scd_ssn); @@ -881,26 +885,22 @@ int iwl_mvm_rx_ba_notif(struct iwl_mvm *mvm, struct 
iwl_rx_cmd_buffer *rxb, freed = 0; skb_queue_walk(&reclaimed_skbs, skb) { - struct ieee80211_hdr *hdr = (void *)skb->data; - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + hdr = (struct ieee80211_hdr *)skb->data; if (ieee80211_is_data_qos(hdr->frame_control)) freed++; else WARN_ON_ONCE(1); + info = IEEE80211_SKB_CB(skb); iwl_trans_free_tx_cmd(mvm->trans, info->driver_data[1]); - memset(&info->status, 0, sizeof(info->status)); - /* Packet was transmitted successfully, failures come as single - * frames because before failing a frame the firmware transmits - * it without aggregation at least once. - */ - info->flags |= IEEE80211_TX_STAT_ACK; - if (freed == 1) { /* this is the first skb we deliver in this batch */ /* put the rate scaling data there */ + info = IEEE80211_SKB_CB(skb); + memset(&info->status, 0, sizeof(info->status)); + info->flags |= IEEE80211_TX_STAT_ACK; info->flags |= IEEE80211_TX_STAT_AMPDU; info->status.ampdu_ack_len = ba_notif->txed_2_done; info->status.ampdu_len = ba_notif->txed; diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c index bb020ad3f76..b53e5c3f403 100644 --- a/drivers/net/wireless/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/iwlwifi/pcie/drv.c @@ -269,8 +269,6 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { {IWL_PCI_DEVICE(0x08B1, 0x4070, iwl7260_2ac_cfg)}, {IWL_PCI_DEVICE(0x08B1, 0x4072, iwl7260_2ac_cfg)}, {IWL_PCI_DEVICE(0x08B1, 0x4170, iwl7260_2ac_cfg)}, - {IWL_PCI_DEVICE(0x08B1, 0x4C60, iwl7260_2ac_cfg)}, - {IWL_PCI_DEVICE(0x08B1, 0x4C70, iwl7260_2ac_cfg)}, {IWL_PCI_DEVICE(0x08B1, 0x4060, iwl7260_2n_cfg)}, {IWL_PCI_DEVICE(0x08B1, 0x406A, iwl7260_2n_cfg)}, {IWL_PCI_DEVICE(0x08B1, 0x4160, iwl7260_2n_cfg)}, @@ -308,8 +306,6 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { {IWL_PCI_DEVICE(0x08B1, 0xC770, iwl7260_2ac_cfg)}, {IWL_PCI_DEVICE(0x08B1, 0xC760, iwl7260_2n_cfg)}, {IWL_PCI_DEVICE(0x08B2, 0xC270, iwl7260_2ac_cfg)}, - {IWL_PCI_DEVICE(0x08B1, 0xCC70, iwl7260_2ac_cfg)}, - {IWL_PCI_DEVICE(0x08B1, 0xCC60, iwl7260_2ac_cfg)}, {IWL_PCI_DEVICE(0x08B2, 0xC272, iwl7260_2ac_cfg)}, {IWL_PCI_DEVICE(0x08B2, 0xC260, iwl7260_2n_cfg)}, {IWL_PCI_DEVICE(0x08B2, 0xC26A, iwl7260_n_cfg)}, diff --git a/drivers/net/wireless/iwlwifi/pcie/trans.c b/drivers/net/wireless/iwlwifi/pcie/trans.c index ff04135d37a..4088dd5e924 100644 --- a/drivers/net/wireless/iwlwifi/pcie/trans.c +++ b/drivers/net/wireless/iwlwifi/pcie/trans.c @@ -339,7 +339,6 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans) { int ret; int t = 0; - int iter; IWL_DEBUG_INFO(trans, "iwl_trans_prepare_card_hw enter\n"); @@ -348,23 +347,18 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans) if (ret >= 0) return 0; - for (iter = 0; iter < 10; iter++) { - /* If HW is not ready, prepare the conditions to check again */ - iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, - CSR_HW_IF_CONFIG_REG_PREPARE); - - do { - ret = iwl_pcie_set_hw_ready(trans); - if (ret >= 0) - return 0; + /* If HW is not ready, prepare the conditions to check again */ + iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG, + CSR_HW_IF_CONFIG_REG_PREPARE); - usleep_range(200, 1000); - t += 200; - } while (t < 150000); - msleep(25); - } + do { + ret = iwl_pcie_set_hw_ready(trans); + if (ret >= 0) + return 0; - IWL_DEBUG_INFO(trans, "got NIC after %d iterations\n", iter); + usleep_range(200, 1000); + t += 200; + } while (t < 150000); return ret; } diff --git a/drivers/net/wireless/mwifiex/11ac.c b/drivers/net/wireless/mwifiex/11ac.c index 5d9a8084665..5e0eec4d71c 100644 --- 
a/drivers/net/wireless/mwifiex/11ac.c +++ b/drivers/net/wireless/mwifiex/11ac.c @@ -189,7 +189,8 @@ int mwifiex_cmd_append_11ac_tlv(struct mwifiex_private *priv, vht_cap->header.len = cpu_to_le16(sizeof(struct ieee80211_vht_cap)); memcpy((u8 *)vht_cap + sizeof(struct mwifiex_ie_types_header), - (u8 *)bss_desc->bcn_vht_cap, + (u8 *)bss_desc->bcn_vht_cap + + sizeof(struct ieee_types_header), le16_to_cpu(vht_cap->header.len)); mwifiex_fill_vht_cap_tlv(priv, vht_cap, bss_desc->bss_band); diff --git a/drivers/net/wireless/mwifiex/11n.c b/drivers/net/wireless/mwifiex/11n.c index 2658c8cda44..41e9d25a2d8 100644 --- a/drivers/net/wireless/mwifiex/11n.c +++ b/drivers/net/wireless/mwifiex/11n.c @@ -307,7 +307,8 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv, ht_cap->header.len = cpu_to_le16(sizeof(struct ieee80211_ht_cap)); memcpy((u8 *) ht_cap + sizeof(struct mwifiex_ie_types_header), - (u8 *)bss_desc->bcn_ht_cap, + (u8 *) bss_desc->bcn_ht_cap + + sizeof(struct ieee_types_header), le16_to_cpu(ht_cap->header.len)); mwifiex_fill_cap_info(priv, radio_type, ht_cap); diff --git a/drivers/net/wireless/mwifiex/main.c b/drivers/net/wireless/mwifiex/main.c index 83c61964d08..fc3fe8ddcf6 100644 --- a/drivers/net/wireless/mwifiex/main.c +++ b/drivers/net/wireless/mwifiex/main.c @@ -501,7 +501,6 @@ mwifiex_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) } tx_info = MWIFIEX_SKB_TXCB(skb); - memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; diff --git a/drivers/net/wireless/mwifiex/pcie.c b/drivers/net/wireless/mwifiex/pcie.c index 801c709656f..20c9c4c7b0b 100644 --- a/drivers/net/wireless/mwifiex/pcie.c +++ b/drivers/net/wireless/mwifiex/pcie.c @@ -1195,12 +1195,6 @@ static int mwifiex_pcie_process_recv_data(struct mwifiex_adapter *adapter) rd_index = card->rxbd_rdptr & reg->rx_mask; skb_data = card->rx_buf_list[rd_index]; - /* If skb allocation was failed earlier for Rx packet, - * rx_buf_list[rd_index] would have been left with a NULL. - */ - if (!skb_data) - return -ENOMEM; - MWIFIEX_SKB_PACB(skb_data, &buf_pa); pci_unmap_single(card->dev, buf_pa, MWIFIEX_RX_DATA_BUF_SIZE, PCI_DMA_FROMDEVICE); @@ -1515,14 +1509,6 @@ static int mwifiex_pcie_process_cmd_complete(struct mwifiex_adapter *adapter) if (adapter->ps_state == PS_STATE_SLEEP_CFM) { mwifiex_process_sleep_confirm_resp(adapter, skb->data, skb->len); - mwifiex_pcie_enable_host_int(adapter); - if (mwifiex_write_reg(adapter, - PCIE_CPU_INT_EVENT, - CPU_INTR_SLEEP_CFM_DONE)) { - dev_warn(adapter->dev, - "Write register failed\n"); - return -1; - } while (reg->sleep_cookie && (count++ < 10) && mwifiex_pcie_ok_to_access_hw(adapter)) usleep_range(50, 60); @@ -1993,9 +1979,23 @@ static void mwifiex_interrupt_status(struct mwifiex_adapter *adapter) adapter->int_status |= pcie_ireg; spin_unlock_irqrestore(&adapter->int_lock, flags); - if (!adapter->pps_uapsd_mode && - adapter->ps_state == PS_STATE_SLEEP && - mwifiex_pcie_ok_to_access_hw(adapter)) { + if (pcie_ireg & HOST_INTR_CMD_DONE) { + if ((adapter->ps_state == PS_STATE_SLEEP_CFM) || + (adapter->ps_state == PS_STATE_SLEEP)) { + mwifiex_pcie_enable_host_int(adapter); + if (mwifiex_write_reg(adapter, + PCIE_CPU_INT_EVENT, + CPU_INTR_SLEEP_CFM_DONE) + ) { + dev_warn(adapter->dev, + "Write register failed\n"); + return; + + } + } + } else if (!adapter->pps_uapsd_mode && + adapter->ps_state == PS_STATE_SLEEP && + mwifiex_pcie_ok_to_access_hw(adapter)) { /* Potentially for PCIe we could get other * interrupts like shared. 
Don't change power * state until cookie is set */ diff --git a/drivers/net/wireless/mwifiex/scan.c b/drivers/net/wireless/mwifiex/scan.c index 470347a0a72..50b2fe53219 100644 --- a/drivers/net/wireless/mwifiex/scan.c +++ b/drivers/net/wireless/mwifiex/scan.c @@ -2040,12 +2040,12 @@ mwifiex_save_curr_bcn(struct mwifiex_private *priv) curr_bss->ht_info_offset); if (curr_bss->bcn_vht_cap) - curr_bss->bcn_vht_cap = (void *)(curr_bss->beacon_buf + - curr_bss->vht_cap_offset); + curr_bss->bcn_ht_cap = (void *)(curr_bss->beacon_buf + + curr_bss->vht_cap_offset); if (curr_bss->bcn_vht_oper) - curr_bss->bcn_vht_oper = (void *)(curr_bss->beacon_buf + - curr_bss->vht_info_offset); + curr_bss->bcn_ht_oper = (void *)(curr_bss->beacon_buf + + curr_bss->vht_info_offset); if (curr_bss->bcn_bss_co_2040) curr_bss->bcn_bss_co_2040 = diff --git a/drivers/net/wireless/mwifiex/usb.c b/drivers/net/wireless/mwifiex/usb.c index 923e348dda7..b7adf3d4646 100644 --- a/drivers/net/wireless/mwifiex/usb.c +++ b/drivers/net/wireless/mwifiex/usb.c @@ -511,6 +511,13 @@ static int mwifiex_usb_resume(struct usb_interface *intf) MWIFIEX_BSS_ROLE_ANY), MWIFIEX_ASYNC_CMD); +#ifdef CONFIG_PM + /* Resume handler may be called due to remote wakeup, + * force to exit suspend anyway + */ + usb_disable_autosuspend(card->udev); +#endif /* CONFIG_PM */ + return 0; } @@ -569,6 +576,7 @@ static struct usb_driver mwifiex_usb_driver = { .id_table = mwifiex_usb_table, .suspend = mwifiex_usb_suspend, .resume = mwifiex_usb_resume, + .supports_autosuspend = 1, }; static int mwifiex_usb_tx_init(struct mwifiex_adapter *adapter) diff --git a/drivers/net/wireless/mwifiex/wmm.c b/drivers/net/wireless/mwifiex/wmm.c index 80f72f6b6d5..ae31e8df44d 100644 --- a/drivers/net/wireless/mwifiex/wmm.c +++ b/drivers/net/wireless/mwifiex/wmm.c @@ -556,8 +556,7 @@ mwifiex_clean_txrx(struct mwifiex_private *priv) mwifiex_wmm_delete_all_ralist(priv); memcpy(tos_to_tid, ac_to_tid, sizeof(tos_to_tid)); - if (priv->adapter->if_ops.clean_pcie_ring && - !priv->adapter->surprise_removed) + if (priv->adapter->if_ops.clean_pcie_ring) priv->adapter->if_ops.clean_pcie_ring(priv->adapter); spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags); } diff --git a/drivers/net/wireless/p54/txrx.c b/drivers/net/wireless/p54/txrx.c index 1de59b0f8fa..f95de0d1621 100644 --- a/drivers/net/wireless/p54/txrx.c +++ b/drivers/net/wireless/p54/txrx.c @@ -587,7 +587,7 @@ static void p54_rx_stats(struct p54_common *priv, struct sk_buff *skb) chan = priv->curchan; if (chan) { struct survey_info *survey = &priv->survey[chan->hw_value]; - survey->noise = clamp(priv->noise, -128, 127); + survey->noise = clamp_t(s8, priv->noise, -128, 127); survey->channel_time = priv->survey_raw.active; survey->channel_time_tx = priv->survey_raw.tx; survey->channel_time_busy = priv->survey_raw.tx + diff --git a/drivers/net/wireless/rt2x00/rt2500pci.c b/drivers/net/wireless/rt2x00/rt2500pci.c index d582febbfba..77e45b223d1 100644 --- a/drivers/net/wireless/rt2x00/rt2500pci.c +++ b/drivers/net/wireless/rt2x00/rt2500pci.c @@ -1684,13 +1684,8 @@ static int rt2500pci_init_eeprom(struct rt2x00_dev *rt2x00dev) /* * Detect if this device has an hardware controlled radio. */ - if (rt2x00_get_field16(eeprom, EEPROM_ANTENNA_HARDWARE_RADIO)) { + if (rt2x00_get_field16(eeprom, EEPROM_ANTENNA_HARDWARE_RADIO)) __set_bit(CAPABILITY_HW_BUTTON, &rt2x00dev->cap_flags); - /* - * On this device RFKILL initialized during probe does not work. 
- */ - __set_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags); - } /* * Check if the BBP tuning should be enabled. diff --git a/drivers/net/wireless/rt2x00/rt2800.h b/drivers/net/wireless/rt2x00/rt2800.h index a629313dd98..a7630d5ec89 100644 --- a/drivers/net/wireless/rt2x00/rt2800.h +++ b/drivers/net/wireless/rt2x00/rt2800.h @@ -1920,7 +1920,7 @@ struct mac_iveiv_entry { * 2 - drop tx power by 12dBm, * 3 - increase tx power by 6dBm */ -#define BBP1_TX_POWER_CTRL FIELD8(0x03) +#define BBP1_TX_POWER_CTRL FIELD8(0x07) #define BBP1_TX_ANTENNA FIELD8(0x18) /* diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c index 400b8679796..9ef0711a5cc 100644 --- a/drivers/net/wireless/rt2x00/rt2800usb.c +++ b/drivers/net/wireless/rt2x00/rt2800usb.c @@ -1091,7 +1091,6 @@ static struct usb_device_id rt2800usb_device_table[] = { /* Ovislink */ { USB_DEVICE(0x1b75, 0x3071) }, { USB_DEVICE(0x1b75, 0x3072) }, - { USB_DEVICE(0x1b75, 0xa200) }, /* Para */ { USB_DEVICE(0x20b8, 0x8888) }, /* Pegatron */ diff --git a/drivers/net/wireless/rt2x00/rt2x00.h b/drivers/net/wireless/rt2x00/rt2x00.h index 1e716ff0f19..7510723a8c3 100644 --- a/drivers/net/wireless/rt2x00/rt2x00.h +++ b/drivers/net/wireless/rt2x00/rt2x00.h @@ -708,7 +708,6 @@ enum rt2x00_capability_flags { REQUIRE_SW_SEQNO, REQUIRE_HT_TX_DESC, REQUIRE_PS_AUTOWAKE, - REQUIRE_DELAYED_RFKILL, /* * Capabilities diff --git a/drivers/net/wireless/rt2x00/rt2x00dev.c b/drivers/net/wireless/rt2x00/rt2x00dev.c index e22942bc2bb..a2889d1cfe3 100644 --- a/drivers/net/wireless/rt2x00/rt2x00dev.c +++ b/drivers/net/wireless/rt2x00/rt2x00dev.c @@ -1128,10 +1128,9 @@ static void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev) return; /* - * Stop rfkill polling. + * Unregister extra components. */ - if (test_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags)) - rt2x00rfkill_unregister(rt2x00dev); + rt2x00rfkill_unregister(rt2x00dev); /* * Allow the HW to uninitialize. @@ -1169,12 +1168,6 @@ static int rt2x00lib_initialize(struct rt2x00_dev *rt2x00dev) set_bit(DEVICE_STATE_INITIALIZED, &rt2x00dev->flags); - /* - * Start rfkill polling. - */ - if (test_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags)) - rt2x00rfkill_register(rt2x00dev); - return 0; } @@ -1370,12 +1363,7 @@ int rt2x00lib_probe_dev(struct rt2x00_dev *rt2x00dev) rt2x00link_register(rt2x00dev); rt2x00leds_register(rt2x00dev); rt2x00debug_register(rt2x00dev); - - /* - * Start rfkill polling. - */ - if (!test_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags)) - rt2x00rfkill_register(rt2x00dev); + rt2x00rfkill_register(rt2x00dev); return 0; @@ -1391,12 +1379,6 @@ void rt2x00lib_remove_dev(struct rt2x00_dev *rt2x00dev) clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags); /* - * Stop rfkill polling. - */ - if (!test_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags)) - rt2x00rfkill_unregister(rt2x00dev); - - /* * Disable radio. 
*/ rt2x00lib_disable_radio(rt2x00dev); diff --git a/drivers/net/wireless/rt2x00/rt2x00mac.c b/drivers/net/wireless/rt2x00/rt2x00mac.c index c03748dafd4..f8cff1f0b6b 100644 --- a/drivers/net/wireless/rt2x00/rt2x00mac.c +++ b/drivers/net/wireless/rt2x00/rt2x00mac.c @@ -489,8 +489,6 @@ int rt2x00mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, crypto.cipher = rt2x00crypto_key_to_cipher(key); if (crypto.cipher == CIPHER_NONE) return -EOPNOTSUPP; - if (crypto.cipher == CIPHER_TKIP && rt2x00_is_usb(rt2x00dev)) - return -EOPNOTSUPP; crypto.cmd = cmd; @@ -625,18 +623,20 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw, bss_conf->bssid); /* + * Update the beacon. This is only required on USB devices. PCI + * devices fetch beacons periodically. + */ + if (changes & BSS_CHANGED_BEACON && rt2x00_is_usb(rt2x00dev)) + rt2x00queue_update_beacon(rt2x00dev, vif); + + /* * Start/stop beaconing. */ if (changes & BSS_CHANGED_BEACON_ENABLED) { if (!bss_conf->enable_beacon && intf->enable_beacon) { + rt2x00queue_clear_beacon(rt2x00dev, vif); rt2x00dev->intf_beaconing--; intf->enable_beacon = false; - /* - * Clear beacon in the H/W for this vif. This is needed - * to disable beaconing on this particular interface - * and keep it running on other interfaces. - */ - rt2x00queue_clear_beacon(rt2x00dev, vif); if (rt2x00dev->intf_beaconing == 0) { /* @@ -647,15 +647,11 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw, rt2x00queue_stop_queue(rt2x00dev->bcn); mutex_unlock(&intf->beacon_skb_mutex); } + + } else if (bss_conf->enable_beacon && !intf->enable_beacon) { rt2x00dev->intf_beaconing++; intf->enable_beacon = true; - /* - * Upload beacon to the H/W. This is only required on - * USB devices. PCI devices fetch beacons periodically. - */ - if (rt2x00_is_usb(rt2x00dev)) - rt2x00queue_update_beacon(rt2x00dev, vif); if (rt2x00dev->intf_beaconing == 1) { /* diff --git a/drivers/net/wireless/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/rtlwifi/rtl8188ee/hw.c index f923d8c9a29..e06971be7df 100644 --- a/drivers/net/wireless/rtlwifi/rtl8188ee/hw.c +++ b/drivers/net/wireless/rtlwifi/rtl8188ee/hw.c @@ -1025,20 +1025,9 @@ int rtl88ee_hw_init(struct ieee80211_hw *hw) bool rtstatus = true; int err = 0; u8 tmp_u1b, u1byte; - unsigned long flags; RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Rtl8188EE hw init\n"); rtlpriv->rtlhal.being_init_adapter = true; - /* As this function can take a very long time (up to 350 ms) - * and can be called with irqs disabled, reenable the irqs - * to let the other devices continue being serviced. - * - * It is safe doing so since our own interrupts will only be enabled - * in a subsequent step. - */ - local_save_flags(flags); - local_irq_enable(); - rtlpriv->intf_ops->disable_aspm(hw); tmp_u1b = rtl_read_byte(rtlpriv, REG_SYS_CLKR+1); @@ -1054,7 +1043,7 @@ int rtl88ee_hw_init(struct ieee80211_hw *hw) if (rtstatus != true) { RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Init MAC failed\n"); err = 1; - goto exit; + return err; } err = rtl88e_download_fw(hw, false); @@ -1062,7 +1051,8 @@ int rtl88ee_hw_init(struct ieee80211_hw *hw) RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING, "Failed to download FW. 
Init HW without FW now..\n"); err = 1; - goto exit; + rtlhal->fw_ready = false; + return err; } else { rtlhal->fw_ready = true; } @@ -1145,12 +1135,10 @@ int rtl88ee_hw_init(struct ieee80211_hw *hw) } rtl_write_byte(rtlpriv, REG_NAV_CTRL+2, ((30000+127)/128)); rtl88e_dm_init(hw); -exit: - local_irq_restore(flags); rtlpriv->rtlhal.being_init_adapter = false; RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "end of Rtl8188EE hw init %x\n", err); - return err; + return 0; } static enum version_8188e _rtl88ee_read_chip_version(struct ieee80211_hw *hw) diff --git a/drivers/net/wireless/rtlwifi/rtl8188ee/trx.c b/drivers/net/wireless/rtlwifi/rtl8188ee/trx.c index ea4d014a288..a8871d66d56 100644 --- a/drivers/net/wireless/rtlwifi/rtl8188ee/trx.c +++ b/drivers/net/wireless/rtlwifi/rtl8188ee/trx.c @@ -293,7 +293,7 @@ static void _rtl88ee_translate_rx_signal_stuff(struct ieee80211_hw *hw, u8 *psaddr; __le16 fc; u16 type, ufc; - bool match_bssid, packet_toself, packet_beacon = false, addr; + bool match_bssid, packet_toself, packet_beacon, addr; tmp_buf = skb->data + pstatus->rx_drvinfo_size + pstatus->rx_bufshift; diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c index c3f2b55501a..189ba124a8c 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c @@ -985,30 +985,19 @@ int rtl92cu_hw_init(struct ieee80211_hw *hw) struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw)); int err = 0; static bool iqk_initialized; - unsigned long flags; - - /* As this function can take a very long time (up to 350 ms) - * and can be called with irqs disabled, reenable the irqs - * to let the other devices continue being serviced. - * - * It is safe doing so since our own interrupts will only be enabled - * in a subsequent step. - */ - local_save_flags(flags); - local_irq_enable(); rtlhal->hw_type = HARDWARE_TYPE_RTL8192CU; err = _rtl92cu_init_mac(hw); if (err) { RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "init mac failed!\n"); - goto exit; + return err; } err = rtl92c_download_fw(hw); if (err) { RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING, "Failed to download FW. 
Init HW without FW now..\n"); err = 1; - goto exit; + return err; } rtlhal->last_hmeboxnum = 0; /* h2c */ _rtl92cu_phy_param_tab_init(hw); @@ -1045,8 +1034,6 @@ int rtl92cu_hw_init(struct ieee80211_hw *hw) _InitPABias(hw); _update_mac_setting(hw); rtl92c_dm_init(hw); -exit: - local_irq_restore(flags); return err; } diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c index e7a2af3ad05..8188dcb512f 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c @@ -316,7 +316,6 @@ static struct usb_device_id rtl8192c_usb_ids[] = { {RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/ {RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/ {RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/ - {RTL_USB_DEVICE(0x0df6, 0x0070, rtl92cu_hal_cfg)}, /*Sitecom - 150N */ {RTL_USB_DEVICE(0x0df6, 0x0077, rtl92cu_hal_cfg)}, /*Sitecom-WLA2100V2*/ {RTL_USB_DEVICE(0x0eb0, 0x9071, rtl92cu_hal_cfg)}, /*NO Brand - Etop*/ {RTL_USB_DEVICE(0x4856, 0x0091, rtl92cu_hal_cfg)}, /*NetweeN - Feixun*/ diff --git a/drivers/net/wireless/rtlwifi/rtl8192se/hw.c b/drivers/net/wireless/rtlwifi/rtl8192se/hw.c index c471400fe8f..4f461786a7e 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192se/hw.c +++ b/drivers/net/wireless/rtlwifi/rtl8192se/hw.c @@ -955,7 +955,7 @@ int rtl92se_hw_init(struct ieee80211_hw *hw) struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw)); struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw)); u8 tmp_byte = 0; - unsigned long flags; + bool rtstatus = true; u8 tmp_u1b; int err = false; @@ -967,16 +967,6 @@ int rtl92se_hw_init(struct ieee80211_hw *hw) rtlpci->being_init_adapter = true; - /* As this function can take a very long time (up to 350 ms) - * and can be called with irqs disabled, reenable the irqs - * to let the other devices continue being serviced. - * - * It is safe doing so since our own interrupts will only be enabled - * in a subsequent step. - */ - local_save_flags(flags); - local_irq_enable(); - rtlpriv->intf_ops->disable_aspm(hw); /* 1. MAC Initialize */ @@ -994,8 +984,7 @@ int rtl92se_hw_init(struct ieee80211_hw *hw) RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING, "Failed to download FW. Init HW without FW now... " "Please copy FW into /lib/firmware/rtlwifi\n"); - err = 1; - goto exit; + return 1; } /* After FW download, we have to reset MAC register */ @@ -1008,8 +997,7 @@ int rtl92se_hw_init(struct ieee80211_hw *hw) /* 3. Initialize MAC/PHY Config by MACPHY_reg.txt */ if (!rtl92s_phy_mac_config(hw)) { RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "MAC Config failed\n"); - err = rtstatus; - goto exit; + return rtstatus; } /* because last function modify RCR, so we update @@ -1028,8 +1016,7 @@ int rtl92se_hw_init(struct ieee80211_hw *hw) /* 4. Initialize BB After MAC Config PHY_reg.txt, AGC_Tab.txt */ if (!rtl92s_phy_bb_config(hw)) { RT_TRACE(rtlpriv, COMP_INIT, DBG_EMERG, "BB Config failed\n"); - err = rtstatus; - goto exit; + return rtstatus; } /* 5. 
Initiailze RF RAIO_A.txt RF RAIO_B.txt */ @@ -1046,8 +1033,7 @@ int rtl92se_hw_init(struct ieee80211_hw *hw) if (!rtl92s_phy_rf_config(hw)) { RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "RF Config failed\n"); - err = rtstatus; - goto exit; + return rtstatus; } /* After read predefined TXT, we must set BB/MAC/RF @@ -1136,9 +1122,8 @@ int rtl92se_hw_init(struct ieee80211_hw *hw) rtlpriv->cfg->ops->led_control(hw, LED_CTL_POWER_ON); rtl92s_dm_init(hw); -exit: - local_irq_restore(flags); rtlpci->being_init_adapter = false; + return err; } diff --git a/drivers/net/wireless/rtlwifi/rtl8192se/trx.c b/drivers/net/wireless/rtlwifi/rtl8192se/trx.c index c240b7591cf..7d0f2e20f1a 100644 --- a/drivers/net/wireless/rtlwifi/rtl8192se/trx.c +++ b/drivers/net/wireless/rtlwifi/rtl8192se/trx.c @@ -49,12 +49,6 @@ static u8 _rtl92se_map_hwqueue_to_fwqueue(struct sk_buff *skb, u8 skb_queue) if (ieee80211_is_nullfunc(fc)) return QSLT_HIGH; - /* Kernel commit 1bf4bbb4024dcdab changed EAPOL packets to use - * queue V0 at priority 7; however, the RTL8192SE appears to have - * that queue at priority 6 - */ - if (skb->priority == 7) - return QSLT_VO; return skb->priority; } diff --git a/drivers/net/wireless/rtlwifi/rtl8723ae/hw.c b/drivers/net/wireless/rtlwifi/rtl8723ae/hw.c index 99f6bc5fa98..c333dfd116b 100644 --- a/drivers/net/wireless/rtlwifi/rtl8723ae/hw.c +++ b/drivers/net/wireless/rtlwifi/rtl8723ae/hw.c @@ -880,25 +880,14 @@ int rtl8723ae_hw_init(struct ieee80211_hw *hw) bool rtstatus = true; int err; u8 tmp_u1b; - unsigned long flags; rtlpriv->rtlhal.being_init_adapter = true; - /* As this function can take a very long time (up to 350 ms) - * and can be called with irqs disabled, reenable the irqs - * to let the other devices continue being serviced. - * - * It is safe doing so since our own interrupts will only be enabled - * in a subsequent step. - */ - local_save_flags(flags); - local_irq_enable(); - rtlpriv->intf_ops->disable_aspm(hw); rtstatus = _rtl8712e_init_mac(hw); if (rtstatus != true) { RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Init MAC failed\n"); err = 1; - goto exit; + return err; } err = rtl8723ae_download_fw(hw); @@ -906,7 +895,8 @@ int rtl8723ae_hw_init(struct ieee80211_hw *hw) RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING, "Failed to download FW. Init HW without FW now..\n"); err = 1; - goto exit; + rtlhal->fw_ready = false; + return err; } else { rtlhal->fw_ready = true; } @@ -981,8 +971,6 @@ int rtl8723ae_hw_init(struct ieee80211_hw *hw) RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE, "under 1.5V\n"); } rtl8723ae_dm_init(hw); -exit: - local_irq_restore(flags); rtlpriv->rtlhal.being_init_adapter = false; return err; } diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index 70b830f6c4b..36efb418c26 100644 --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -347,8 +347,8 @@ static bool start_new_rx_buffer(int offset, unsigned long size, int head) * into multiple copies tend to give large frags their * own buffers as before. 
*/ - BUG_ON(size > MAX_BUFFER_OFFSET); - if ((offset + size > MAX_BUFFER_OFFSET) && offset && !head) + if ((offset + size > MAX_BUFFER_OFFSET) && + (size <= MAX_BUFFER_OFFSET) && offset && !head) return true; return false; diff --git a/drivers/nfc/microread/microread.c b/drivers/nfc/microread/microread.c index 384ab8ca4b3..3420d833db1 100644 --- a/drivers/nfc/microread/microread.c +++ b/drivers/nfc/microread/microread.c @@ -501,13 +501,9 @@ static void microread_target_discovered(struct nfc_hci_dev *hdev, u8 gate, targets->sens_res = be16_to_cpu(*(u16 *)&skb->data[MICROREAD_EMCF_A_ATQA]); targets->sel_res = skb->data[MICROREAD_EMCF_A_SAK]; - targets->nfcid1_len = skb->data[MICROREAD_EMCF_A_LEN]; - if (targets->nfcid1_len > sizeof(targets->nfcid1)) { - r = -EINVAL; - goto exit_free; - } memcpy(targets->nfcid1, &skb->data[MICROREAD_EMCF_A_UID], - targets->nfcid1_len); + skb->data[MICROREAD_EMCF_A_LEN]); + targets->nfcid1_len = skb->data[MICROREAD_EMCF_A_LEN]; break; case MICROREAD_GATE_ID_MREAD_ISO_A_3: targets->supported_protocols = @@ -515,13 +511,9 @@ static void microread_target_discovered(struct nfc_hci_dev *hdev, u8 gate, targets->sens_res = be16_to_cpu(*(u16 *)&skb->data[MICROREAD_EMCF_A3_ATQA]); targets->sel_res = skb->data[MICROREAD_EMCF_A3_SAK]; - targets->nfcid1_len = skb->data[MICROREAD_EMCF_A3_LEN]; - if (targets->nfcid1_len > sizeof(targets->nfcid1)) { - r = -EINVAL; - goto exit_free; - } memcpy(targets->nfcid1, &skb->data[MICROREAD_EMCF_A3_UID], - targets->nfcid1_len); + skb->data[MICROREAD_EMCF_A3_LEN]); + targets->nfcid1_len = skb->data[MICROREAD_EMCF_A3_LEN]; break; case MICROREAD_GATE_ID_MREAD_ISO_B: targets->supported_protocols = NFC_PROTO_ISO14443_B_MASK; diff --git a/drivers/of/base.c b/drivers/of/base.c index b60f9a77ab0..1d10b4ec681 100644 --- a/drivers/of/base.c +++ b/drivers/of/base.c @@ -963,6 +963,52 @@ int of_property_read_string(struct device_node *np, const char *propname, EXPORT_SYMBOL_GPL(of_property_read_string); /** + * of_property_read_string_index - Find and read a string from a multiple + * strings property. + * @np: device node from which the property value is to be read. + * @propname: name of the property to be searched. + * @index: index of the string in the list of strings + * @out_string: pointer to null terminated return string, modified only if + * return value is 0. + * + * Search for a property in a device tree node and retrieve a null + * terminated string value (pointer to data, not a copy) in the list of strings + * contained in that property. + * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if + * property does not have a value, and -EILSEQ if the string is not + * null-terminated within the length of the property data. + * + * The out_string pointer is modified only if a valid string can be decoded. 
+ */ +int of_property_read_string_index(struct device_node *np, const char *propname, + int index, const char **output) +{ + struct property *prop = of_find_property(np, propname, NULL); + int i = 0; + size_t l = 0, total = 0; + const char *p; + + if (!prop) + return -EINVAL; + if (!prop->value) + return -ENODATA; + if (strnlen(prop->value, prop->length) >= prop->length) + return -EILSEQ; + + p = prop->value; + + for (i = 0; total < prop->length; total += l, p += l) { + l = strlen(p) + 1; + if (i++ == index) { + *output = p; + return 0; + } + } + return -ENODATA; +} +EXPORT_SYMBOL_GPL(of_property_read_string_index); + +/** * of_property_match_string() - Find string in a list and return index * @np: pointer to node containing string list property * @propname: string list property name @@ -988,7 +1034,7 @@ int of_property_match_string(struct device_node *np, const char *propname, end = p + prop->length; for (i = 0; p < end; i++, p += l) { - l = strnlen(p, end - p) + 1; + l = strlen(p) + 1; if (p + l > end) return -EILSEQ; pr_debug("comparing %s with %s\n", string, p); @@ -1000,41 +1046,39 @@ int of_property_match_string(struct device_node *np, const char *propname, EXPORT_SYMBOL_GPL(of_property_match_string); /** - * of_property_read_string_util() - Utility helper for parsing string properties + * of_property_count_strings - Find and return the number of strings from a + * multiple strings property. * @np: device node from which the property value is to be read. * @propname: name of the property to be searched. - * @out_strs: output array of string pointers. - * @sz: number of array elements to read. - * @skip: Number of strings to skip over at beginning of list. * - * Don't call this function directly. It is a utility helper for the - * of_property_read_string*() family of functions. + * Search for a property in a device tree node and retrieve the number of null + * terminated string contain in it. Returns the number of strings on + * success, -EINVAL if the property does not exist, -ENODATA if property + * does not have a value, and -EILSEQ if the string is not null-terminated + * within the length of the property data. */ -int of_property_read_string_helper(struct device_node *np, const char *propname, - const char **out_strs, size_t sz, int skip) +int of_property_count_strings(struct device_node *np, const char *propname) { struct property *prop = of_find_property(np, propname, NULL); - int l = 0, i = 0; - const char *p, *end; + int i = 0; + size_t l = 0, total = 0; + const char *p; if (!prop) return -EINVAL; if (!prop->value) return -ENODATA; + if (strnlen(prop->value, prop->length) >= prop->length) + return -EILSEQ; + p = prop->value; - end = p + prop->length; - for (i = 0; p < end && (!out_strs || i < skip + sz); i++, p += l) { - l = strnlen(p, end - p) + 1; - if (p + l > end) - return -EILSEQ; - if (out_strs && i >= skip) - *out_strs++ = p; - } - i -= skip; - return i <= 0 ? 
-ENODATA : i; + for (i = 0; total < prop->length; total += l, p += l, i++) + l = strlen(p) + 1; + + return i; } -EXPORT_SYMBOL_GPL(of_property_read_string_helper); +EXPORT_SYMBOL_GPL(of_property_count_strings); /** * of_parse_phandle - Resolve a phandle property to a device_node pointer diff --git a/drivers/of/selftest.c b/drivers/of/selftest.c index f5e8dc7a725..0eb5c38b4e0 100644 --- a/drivers/of/selftest.c +++ b/drivers/of/selftest.c @@ -126,9 +126,8 @@ static void __init of_selftest_parse_phandle_with_args(void) selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); } -static void __init of_selftest_property_string(void) +static void __init of_selftest_property_match_string(void) { - const char *strings[4]; struct device_node *np; int rc; @@ -146,66 +145,13 @@ static void __init of_selftest_property_string(void) rc = of_property_match_string(np, "phandle-list-names", "third"); selftest(rc == 2, "third expected:0 got:%i\n", rc); rc = of_property_match_string(np, "phandle-list-names", "fourth"); - selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc); + selftest(rc == -ENODATA, "unmatched string; rc=%i", rc); rc = of_property_match_string(np, "missing-property", "blah"); - selftest(rc == -EINVAL, "missing property; rc=%i\n", rc); + selftest(rc == -EINVAL, "missing property; rc=%i", rc); rc = of_property_match_string(np, "empty-property", "blah"); - selftest(rc == -ENODATA, "empty property; rc=%i\n", rc); + selftest(rc == -ENODATA, "empty property; rc=%i", rc); rc = of_property_match_string(np, "unterminated-string", "blah"); - selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc); - - /* of_property_count_strings() tests */ - rc = of_property_count_strings(np, "string-property"); - selftest(rc == 1, "Incorrect string count; rc=%i\n", rc); - rc = of_property_count_strings(np, "phandle-list-names"); - selftest(rc == 3, "Incorrect string count; rc=%i\n", rc); - rc = of_property_count_strings(np, "unterminated-string"); - selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc); - rc = of_property_count_strings(np, "unterminated-string-list"); - selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc); - - /* of_property_read_string_index() tests */ - rc = of_property_read_string_index(np, "string-property", 0, strings); - selftest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc); - strings[0] = NULL; - rc = of_property_read_string_index(np, "string-property", 1, strings); - selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc); - rc = of_property_read_string_index(np, "phandle-list-names", 0, strings); - selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc); - rc = of_property_read_string_index(np, "phandle-list-names", 1, strings); - selftest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc); - rc = of_property_read_string_index(np, "phandle-list-names", 2, strings); - selftest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc); - strings[0] = NULL; - rc = of_property_read_string_index(np, "phandle-list-names", 3, strings); - selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc); - strings[0] = NULL; - rc = of_property_read_string_index(np, "unterminated-string", 0, strings); - selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", 
rc); - rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings); - selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc); - strings[0] = NULL; - rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */ - selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc); - strings[1] = NULL; - - /* of_property_read_string_array() tests */ - rc = of_property_read_string_array(np, "string-property", strings, 4); - selftest(rc == 1, "Incorrect string count; rc=%i\n", rc); - rc = of_property_read_string_array(np, "phandle-list-names", strings, 4); - selftest(rc == 3, "Incorrect string count; rc=%i\n", rc); - rc = of_property_read_string_array(np, "unterminated-string", strings, 4); - selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc); - /* -- An incorrectly formed string should cause a failure */ - rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4); - selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc); - /* -- parsing the correctly formed strings should still work: */ - strings[2] = NULL; - rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2); - selftest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc); - strings[1] = NULL; - rc = of_property_read_string_array(np, "phandle-list-names", strings, 1); - selftest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]); + selftest(rc == -EILSEQ, "unterminated string; rc=%i", rc); } static int __init of_selftest(void) @@ -221,7 +167,7 @@ static int __init of_selftest(void) pr_info("start of selftest - you will see error messages\n"); of_selftest_parse_phandle_with_args(); - of_selftest_property_string(); + of_selftest_property_match_string(); pr_info("end of selftest - %s\n", selftest_passed ? 
"PASS" : "FAIL"); return 0; } diff --git a/drivers/pci/hotplug/shpchp_ctrl.c b/drivers/pci/hotplug/shpchp_ctrl.c index 6efc2ec5e4d..58499277903 100644 --- a/drivers/pci/hotplug/shpchp_ctrl.c +++ b/drivers/pci/hotplug/shpchp_ctrl.c @@ -282,8 +282,8 @@ static int board_added(struct slot *p_slot) return WRONG_BUS_FREQUENCY; } - bsp = ctrl->pci_dev->subordinate->cur_bus_speed; - msp = ctrl->pci_dev->subordinate->max_bus_speed; + bsp = ctrl->pci_dev->bus->cur_bus_speed; + msp = ctrl->pci_dev->bus->max_bus_speed; /* Check if there are other slots or devices on the same bus */ if (!list_empty(&ctrl->pci_dev->subordinate->devices)) diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index 689f3c87ee5..5b4a9d9cd20 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c @@ -175,7 +175,7 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, { struct pci_dev *pci_dev = to_pci_dev(dev); - return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n", + return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02x\n", pci_dev->vendor, pci_dev->device, pci_dev->subsystem_vendor, pci_dev->subsystem_device, (u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8), diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index d6ceb2e45c5..0bb7bfd49bf 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -1130,9 +1130,6 @@ static int do_pci_enable_device(struct pci_dev *dev, int bars) return err; pci_fixup_device(pci_fixup_enable, dev); - if (dev->msi_enabled || dev->msix_enabled) - return 0; - pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin); if (pin) { pci_read_config_word(dev, PCI_COMMAND, &cmd); @@ -3659,7 +3656,7 @@ int pci_set_vga_state(struct pci_dev *dev, bool decode, u16 cmd; int rc; - WARN_ON((flags & PCI_VGA_STATE_CHANGE_DECODES) && (command_bits & ~(PCI_COMMAND_IO|PCI_COMMAND_MEMORY))); + WARN_ON((flags & PCI_VGA_STATE_CHANGE_DECODES) & (command_bits & ~(PCI_COMMAND_IO|PCI_COMMAND_MEMORY))); /* ARCH specific VGA enables */ rc = pci_set_vga_state_arch(dev, decode, command_bits, flags); diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index 910339c0791..df4655c5c13 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -28,7 +28,6 @@ #include <linux/ioport.h> #include <linux/sched.h> #include <linux/ktime.h> -#include <linux/mm.h> #include <asm/dma.h> /* isa_dma_bridge_buggy */ #include "pci.h" @@ -292,25 +291,6 @@ static void quirk_citrine(struct pci_dev *dev) } DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine); -/* On IBM Crocodile ipr SAS adapters, expand BAR to system page size */ -static void quirk_extend_bar_to_page(struct pci_dev *dev) -{ - int i; - - for (i = 0; i < PCI_STD_RESOURCE_END; i++) { - struct resource *r = &dev->resource[i]; - - if (r->flags & IORESOURCE_MEM && resource_size(r) < PAGE_SIZE) { - r->end = PAGE_SIZE - 1; - r->start = 0; - r->flags |= IORESOURCE_UNSET; - dev_info(&dev->dev, "expanded BAR %d to page size: %pR\n", - i, r); - } - } -} -DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, 0x034a, quirk_extend_bar_to_page); - /* * S3 868 and 968 chips report region size equal to 32M, but they decode 64M. * If it's needed, re-allocate the region. 
@@ -2950,7 +2930,6 @@ static void disable_igfx_irq(struct pci_dev *dev) } DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq); DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq); /* * Some devices may pass our check in pci_intx_mask_supported if diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c index 59a8d325a69..c9076bdaf2c 100644 --- a/drivers/platform/x86/acer-wmi.c +++ b/drivers/platform/x86/acer-wmi.c @@ -572,17 +572,6 @@ static const struct dmi_system_id video_vendor_dmi_table[] = { DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5750"), }, }, - { - /* - * Note no video_set_backlight_video_vendor, we must use the - * acer interface, as there is no native backlight interface. - */ - .ident = "Acer KAV80", - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "Acer"), - DMI_MATCH(DMI_PRODUCT_NAME, "KAV80"), - }, - }, {} }; diff --git a/drivers/pnp/pnpacpi/rsparser.c b/drivers/pnp/pnpacpi/rsparser.c index a8b7466196e..9847ab16382 100644 --- a/drivers/pnp/pnpacpi/rsparser.c +++ b/drivers/pnp/pnpacpi/rsparser.c @@ -183,7 +183,9 @@ static acpi_status pnpacpi_allocated_resource(struct acpi_resource *res, struct resource r; int i, flags; - if (acpi_dev_resource_address_space(res, &r) + if (acpi_dev_resource_memory(res, &r) + || acpi_dev_resource_io(res, &r) + || acpi_dev_resource_address_space(res, &r) || acpi_dev_resource_ext_address_space(res, &r)) { pnp_add_resource(dev, &r); return AE_OK; @@ -215,17 +217,6 @@ static acpi_status pnpacpi_allocated_resource(struct acpi_resource *res, } switch (res->type) { - case ACPI_RESOURCE_TYPE_MEMORY24: - case ACPI_RESOURCE_TYPE_MEMORY32: - case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: - if (acpi_dev_resource_memory(res, &r)) - pnp_add_resource(dev, &r); - break; - case ACPI_RESOURCE_TYPE_IO: - case ACPI_RESOURCE_TYPE_FIXED_IO: - if (acpi_dev_resource_io(res, &r)) - pnp_add_resource(dev, &r); - break; case ACPI_RESOURCE_TYPE_DMA: dma = &res->data.dma; if (dma->channel_count > 0 && dma->channels[0] != (u8) -1) diff --git a/drivers/rapidio/devices/tsi721.h b/drivers/rapidio/devices/tsi721.h index 7061ac0ad42..b4b0d83f9ef 100644 --- a/drivers/rapidio/devices/tsi721.h +++ b/drivers/rapidio/devices/tsi721.h @@ -678,7 +678,6 @@ struct tsi721_bdma_chan { struct list_head free_list; dma_cookie_t completed_cookie; struct tasklet_struct tasklet; - bool active; }; #endif /* CONFIG_RAPIDIO_DMA_ENGINE */ diff --git a/drivers/rapidio/devices/tsi721_dma.c b/drivers/rapidio/devices/tsi721_dma.c index 47257b6eea8..502663f5f7c 100644 --- a/drivers/rapidio/devices/tsi721_dma.c +++ b/drivers/rapidio/devices/tsi721_dma.c @@ -206,8 +206,8 @@ void tsi721_bdma_handler(struct tsi721_bdma_chan *bdma_chan) { /* Disable BDMA channel interrupts */ iowrite32(0, bdma_chan->regs + TSI721_DMAC_INTE); - if (bdma_chan->active) - tasklet_schedule(&bdma_chan->tasklet); + + tasklet_schedule(&bdma_chan->tasklet); } #ifdef CONFIG_PCI_MSI @@ -287,12 +287,6 @@ struct tsi721_tx_desc *tsi721_desc_get(struct tsi721_bdma_chan *bdma_chan) "desc %p not ACKed\n", tx_desc); } - if (ret == NULL) { - dev_dbg(bdma_chan->dchan.device->dev, - "%s: unable to obtain tx descriptor\n", __func__); - goto err_out; - } - i = bdma_chan->wr_count_next % bdma_chan->bd_num; if (i == bdma_chan->bd_num - 1) { i = 0; @@ -303,7 +297,7 @@ struct tsi721_tx_desc *tsi721_desc_get(struct tsi721_bdma_chan *bdma_chan) tx_desc->txd.phys = bdma_chan->bd_phys + i * sizeof(struct tsi721_dma_desc); tx_desc->hw_desc = 
&((struct tsi721_dma_desc *)bdma_chan->bd_base)[i]; -err_out: + spin_unlock_bh(&bdma_chan->lock); return ret; @@ -568,7 +562,7 @@ static int tsi721_alloc_chan_resources(struct dma_chan *dchan) } #endif /* CONFIG_PCI_MSI */ - bdma_chan->active = true; + tasklet_enable(&bdma_chan->tasklet); tsi721_bdma_interrupt_enable(bdma_chan, 1); return bdma_chan->bd_num - 1; @@ -582,7 +576,9 @@ err_out: static void tsi721_free_chan_resources(struct dma_chan *dchan) { struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan); +#ifdef CONFIG_PCI_MSI struct tsi721_device *priv = to_tsi721(dchan->device); +#endif LIST_HEAD(list); dev_dbg(dchan->device->dev, "%s: Entry\n", __func__); @@ -593,25 +589,14 @@ static void tsi721_free_chan_resources(struct dma_chan *dchan) BUG_ON(!list_empty(&bdma_chan->active_list)); BUG_ON(!list_empty(&bdma_chan->queue)); - tsi721_bdma_interrupt_enable(bdma_chan, 0); - bdma_chan->active = false; - -#ifdef CONFIG_PCI_MSI - if (priv->flags & TSI721_USING_MSIX) { - synchronize_irq(priv->msix[TSI721_VECT_DMA0_DONE + - bdma_chan->id].vector); - synchronize_irq(priv->msix[TSI721_VECT_DMA0_INT + - bdma_chan->id].vector); - } else -#endif - synchronize_irq(priv->pdev->irq); - - tasklet_kill(&bdma_chan->tasklet); + tasklet_disable(&bdma_chan->tasklet); spin_lock_bh(&bdma_chan->lock); list_splice_init(&bdma_chan->free_list, &list); spin_unlock_bh(&bdma_chan->lock); + tsi721_bdma_interrupt_enable(bdma_chan, 0); + #ifdef CONFIG_PCI_MSI if (priv->flags & TSI721_USING_MSIX) { free_irq(priv->msix[TSI721_VECT_DMA0_DONE + @@ -805,7 +790,6 @@ int tsi721_register_dma(struct tsi721_device *priv) bdma_chan->dchan.cookie = 1; bdma_chan->dchan.chan_id = i; bdma_chan->id = i; - bdma_chan->active = false; spin_lock_init(&bdma_chan->lock); @@ -815,6 +799,7 @@ int tsi721_register_dma(struct tsi721_device *priv) tasklet_init(&bdma_chan->tasklet, tsi721_dma_tasklet, (unsigned long)bdma_chan); + tasklet_disable(&bdma_chan->tasklet); list_add_tail(&bdma_chan->dchan.device_node, &mport->dma.channels); } diff --git a/drivers/regulator/arizona-ldo1.c b/drivers/regulator/arizona-ldo1.c index b1b35f38d11..81d8681c319 100644 --- a/drivers/regulator/arizona-ldo1.c +++ b/drivers/regulator/arizona-ldo1.c @@ -141,6 +141,8 @@ static struct regulator_ops arizona_ldo1_ops = { .map_voltage = regulator_map_voltage_linear, .get_voltage_sel = regulator_get_voltage_sel_regmap, .set_voltage_sel = regulator_set_voltage_sel_regmap, + .get_bypass = regulator_get_bypass_regmap, + .set_bypass = regulator_set_bypass_regmap, }; static const struct regulator_desc arizona_ldo1 = { diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c index 008415670e9..9a08c465e9d 100644 --- a/drivers/regulator/core.c +++ b/drivers/regulator/core.c @@ -1118,7 +1118,6 @@ static int machine_constraints_current(struct regulator_dev *rdev, return 0; } -static int _regulator_do_enable(struct regulator_dev *rdev); /** * set_machine_constraints - sets regulator constraints @@ -1194,9 +1193,10 @@ static int set_machine_constraints(struct regulator_dev *rdev, /* If the constraints say the regulator should be on at this point * and we have control then make sure it is enabled. 
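The regulator hunks above touch the provider side (constraints and ops callbacks). For context, a minimal consumer-side sketch, with "vdd-core" as an illustrative supply name and dev a valid struct device *:

    struct regulator *reg;
    int ret;

    reg = regulator_get(dev, "vdd-core");
    if (IS_ERR(reg))
            return PTR_ERR(reg);

    ret = regulator_enable(reg);    /* bumps use_count; paired with regulator_disable() */
    if (ret)
            dev_err(dev, "cannot enable vdd-core: %d\n", ret);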
*/ - if (rdev->constraints->always_on || rdev->constraints->boot_on) { - ret = _regulator_do_enable(rdev); - if (ret < 0 && ret != -EINVAL) { + if ((rdev->constraints->always_on || rdev->constraints->boot_on) && + ops->enable) { + ret = ops->enable(rdev); + if (ret < 0) { rdev_err(rdev, "failed to enable\n"); goto out; } @@ -2020,8 +2020,6 @@ static int _regulator_disable(struct regulator_dev *rdev) rdev_err(rdev, "failed to disable\n"); return ret; } - _notifier_call_chain(rdev, REGULATOR_EVENT_DISABLE, - NULL); } rdev->use_count = 0; @@ -2076,16 +2074,20 @@ static int _regulator_force_disable(struct regulator_dev *rdev) { int ret = 0; - ret = _regulator_do_disable(rdev); - if (ret < 0) { - rdev_err(rdev, "failed to force disable\n"); - return ret; - } - - _notifier_call_chain(rdev, REGULATOR_EVENT_FORCE_DISABLE | + /* force disable */ + if (rdev->desc->ops->disable) { + /* ah well, who wants to live forever... */ + ret = rdev->desc->ops->disable(rdev); + if (ret < 0) { + rdev_err(rdev, "failed to force disable\n"); + return ret; + } + /* notify other consumers that power has been forced off */ + _notifier_call_chain(rdev, REGULATOR_EVENT_FORCE_DISABLE | REGULATOR_EVENT_DISABLE, NULL); + } - return 0; + return ret; } /** @@ -4407,18 +4409,23 @@ int regulator_suspend_finish(void) mutex_lock(®ulator_list_mutex); list_for_each_entry(rdev, ®ulator_list, list) { + struct regulator_ops *ops = rdev->desc->ops; + mutex_lock(&rdev->mutex); - if (rdev->use_count > 0 || rdev->constraints->always_on) { - error = _regulator_do_enable(rdev); + if ((rdev->use_count > 0 || rdev->constraints->always_on) && + ops->enable) { + error = ops->enable(rdev); if (error) ret = error; } else { if (!has_full_constraints) goto unlock; + if (!ops->disable) + goto unlock; if (!_regulator_is_enabled(rdev)) goto unlock; - error = _regulator_do_disable(rdev); + error = ops->disable(rdev); if (error) ret = error; } @@ -4601,7 +4608,7 @@ static int __init regulator_init_complete(void) ops = rdev->desc->ops; c = rdev->constraints; - if (c && c->always_on) + if (!ops->disable || (c && c->always_on)) continue; mutex_lock(&rdev->mutex); @@ -4622,7 +4629,7 @@ static int __init regulator_init_complete(void) /* We log since this may kill the system if it * goes wrong. 
*/ rdev_info(rdev, "disabling\n"); - ret = _regulator_do_disable(rdev); + ret = ops->disable(rdev); if (ret != 0) { rdev_err(rdev, "couldn't disable: %d\n", ret); } diff --git a/drivers/rtc/rtc-at91rm9200.c b/drivers/rtc/rtc-at91rm9200.c index e51cc5fec98..1237c2173c6 100644 --- a/drivers/rtc/rtc-at91rm9200.c +++ b/drivers/rtc/rtc-at91rm9200.c @@ -49,7 +49,6 @@ struct at91_rtc_config { static const struct at91_rtc_config *at91_rtc_config; static DECLARE_COMPLETION(at91_rtc_updated); -static DECLARE_COMPLETION(at91_rtc_upd_rdy); static unsigned int at91_alarm_year = AT91_RTC_EPOCH; static void __iomem *at91_rtc_regs; static int irq; @@ -163,8 +162,6 @@ static int at91_rtc_settime(struct device *dev, struct rtc_time *tm) 1900 + tm->tm_year, tm->tm_mon, tm->tm_mday, tm->tm_hour, tm->tm_min, tm->tm_sec); - wait_for_completion(&at91_rtc_upd_rdy); - /* Stop Time/Calendar from counting */ cr = at91_rtc_read(AT91_RTC_CR); at91_rtc_write(AT91_RTC_CR, cr | AT91_RTC_UPDCAL | AT91_RTC_UPDTIM); @@ -187,9 +184,7 @@ static int at91_rtc_settime(struct device *dev, struct rtc_time *tm) /* Restart Time/Calendar */ cr = at91_rtc_read(AT91_RTC_CR); - at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_SECEV); at91_rtc_write(AT91_RTC_CR, cr & ~(AT91_RTC_UPDCAL | AT91_RTC_UPDTIM)); - at91_rtc_write_ier(AT91_RTC_SECEV); return 0; } @@ -296,10 +291,8 @@ static irqreturn_t at91_rtc_interrupt(int irq, void *dev_id) if (rtsr) { /* this interrupt is shared! Is it ours? */ if (rtsr & AT91_RTC_ALARM) events |= (RTC_AF | RTC_IRQF); - if (rtsr & AT91_RTC_SECEV) { - complete(&at91_rtc_upd_rdy); - at91_rtc_write_idr(AT91_RTC_SECEV); - } + if (rtsr & AT91_RTC_SECEV) + events |= (RTC_UF | RTC_IRQF); if (rtsr & AT91_RTC_ACKUPD) complete(&at91_rtc_updated); @@ -422,11 +415,6 @@ static int __init at91_rtc_probe(struct platform_device *pdev) } platform_set_drvdata(pdev, rtc); - /* enable SECEV interrupt in order to initialize at91_rtc_upd_rdy - * completion. 
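The at91 lines being removed above implement a wait/complete handshake: the 1 Hz second-event interrupt signals that the RTC will accept an update, and the set-time path blocks on it first. A stripped-down sketch of that idiom, with illustrative names and bodies:

    static DECLARE_COMPLETION(upd_rdy);

    static irqreturn_t secev_irq(int irq, void *dev_id)
    {
            complete(&upd_rdy);             /* RTC is ready for an update */
            return IRQ_HANDLED;
    }

    static int rtc_set_time_example(void)
    {
            wait_for_completion(&upd_rdy);  /* may sleep until the next second event */
            /* ... stop the calendar, write the new time, restart ... */
            return 0;
    }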
- */ - at91_rtc_write_ier(AT91_RTC_SECEV); - dev_info(&pdev->dev, "AT91 Real Time Clock driver.\n"); return 0; diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c index e91ec8cd9b0..d72a9216ee2 100644 --- a/drivers/s390/block/dasd.c +++ b/drivers/s390/block/dasd.c @@ -2879,12 +2879,12 @@ static int dasd_alloc_queue(struct dasd_block *block) elevator_exit(block->request_queue->elevator); block->request_queue->elevator = NULL; - mutex_lock(&block->request_queue->sysfs_lock); rc = elevator_init(block->request_queue, "deadline"); - if (rc) + if (rc) { blk_cleanup_queue(block->request_queue); - mutex_unlock(&block->request_queue->sysfs_lock); - return rc; + return rc; + } + return 0; } /* diff --git a/drivers/s390/char/con3215.c b/drivers/s390/char/con3215.c index bb86494e2b7..eb5d22795c4 100644 --- a/drivers/s390/char/con3215.c +++ b/drivers/s390/char/con3215.c @@ -922,7 +922,7 @@ static int __init con3215_init(void) raw3215_freelist = req; } - cdev = ccw_device_probe_console(&raw3215_ccw_driver); + cdev = ccw_device_probe_console(); if (IS_ERR(cdev)) return -ENODEV; diff --git a/drivers/s390/char/con3270.c b/drivers/s390/char/con3270.c index bb6b0df50b3..699fd3e363d 100644 --- a/drivers/s390/char/con3270.c +++ b/drivers/s390/char/con3270.c @@ -576,6 +576,7 @@ static struct console con3270 = { static int __init con3270_init(void) { + struct ccw_device *cdev; struct raw3270 *rp; void *cbuf; int i; @@ -590,7 +591,10 @@ con3270_init(void) cpcmd("TERM AUTOCR OFF", NULL, 0, NULL); } - rp = raw3270_setup_console(); + cdev = ccw_device_probe_console(); + if (IS_ERR(cdev)) + return -ENODEV; + rp = raw3270_setup_console(cdev); if (IS_ERR(rp)) return PTR_ERR(rp); diff --git a/drivers/s390/char/raw3270.c b/drivers/s390/char/raw3270.c index 651d1f5da7c..24a08e8f19e 100644 --- a/drivers/s390/char/raw3270.c +++ b/drivers/s390/char/raw3270.c @@ -776,24 +776,16 @@ raw3270_setup_device(struct ccw_device *cdev, struct raw3270 *rp, char *ascebc) } #ifdef CONFIG_TN3270_CONSOLE -/* Tentative definition - see below for actual definition. */ -static struct ccw_driver raw3270_ccw_driver; - /* * Setup 3270 device configured as console. 
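The console-init hunks here rely on the kernel's error-pointer convention: a pointer-returning setup function encodes failure as ERR_PTR(-errno), and callers test with IS_ERR()/PTR_ERR(). A minimal sketch with an illustrative type:

    static struct foo *foo_create(void)
    {
            struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

            if (!f)
                    return ERR_PTR(-ENOMEM);
            return f;
    }

    /* caller side */
    f = foo_create();
    if (IS_ERR(f))
            return PTR_ERR(f);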
*/ -struct raw3270 __init *raw3270_setup_console(void) +struct raw3270 __init *raw3270_setup_console(struct ccw_device *cdev) { - struct ccw_device *cdev; unsigned long flags; struct raw3270 *rp; char *ascebc; int rc; - cdev = ccw_device_probe_console(&raw3270_ccw_driver); - if (IS_ERR(cdev)) - return ERR_CAST(cdev); - rp = kzalloc(sizeof(struct raw3270), GFP_KERNEL | GFP_DMA); ascebc = kzalloc(256, GFP_KERNEL); rc = raw3270_setup_device(cdev, rp, ascebc); diff --git a/drivers/s390/char/raw3270.h b/drivers/s390/char/raw3270.h index 359276a8839..7b73ff8c1bd 100644 --- a/drivers/s390/char/raw3270.h +++ b/drivers/s390/char/raw3270.h @@ -190,7 +190,7 @@ raw3270_put_view(struct raw3270_view *view) wake_up(&raw3270_wait_queue); } -struct raw3270 *raw3270_setup_console(void); +struct raw3270 *raw3270_setup_console(struct ccw_device *cdev); void raw3270_wait_cons_dev(struct raw3270 *); /* Notifier for device addition/removal */ diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c index 815e290a1af..8ea7d9b2c67 100644 --- a/drivers/s390/cio/chsc.c +++ b/drivers/s390/cio/chsc.c @@ -500,27 +500,18 @@ static void chsc_process_sei_nt0(struct chsc_sei_nt0_area *sei_area) static void chsc_process_event_information(struct chsc_sei *sei, u64 ntsm) { - static int ntsm_unsupported; - - while (true) { + do { memset(sei, 0, sizeof(*sei)); sei->request.length = 0x0010; sei->request.code = 0x000e; - if (!ntsm_unsupported) - sei->ntsm = ntsm; + sei->ntsm = ntsm; if (chsc(sei)) break; if (sei->response.code != 0x0001) { - CIO_CRW_EVENT(2, "chsc: sei failed (rc=%04x, ntsm=%llx)\n", - sei->response.code, sei->ntsm); - - if (sei->response.code == 3 && sei->ntsm) { - /* Fallback for old firmware. */ - ntsm_unsupported = 1; - continue; - } + CIO_CRW_EVENT(2, "chsc: sei failed (rc=%04x)\n", + sei->response.code); break; } @@ -536,10 +527,7 @@ static void chsc_process_event_information(struct chsc_sei *sei, u64 ntsm) CIO_CRW_EVENT(2, "chsc: unhandled nt: %d\n", sei->nt); break; } - - if (!(sei->u.nt0_area.flags & 0x80)) - break; - } + } while (sei->u.nt0_area.flags & 0x80); } /* diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c index 8d04a9a88cc..1ab5f6c36d9 100644 --- a/drivers/s390/cio/device.c +++ b/drivers/s390/cio/device.c @@ -1610,7 +1610,7 @@ out_unlock: return rc; } -struct ccw_device *ccw_device_probe_console(struct ccw_driver *drv) +struct ccw_device *ccw_device_probe_console(void) { struct io_subchannel_private *io_priv; struct ccw_device *cdev; @@ -1632,7 +1632,6 @@ struct ccw_device *ccw_device_probe_console(struct ccw_driver *drv) kfree(io_priv); return cdev; } - cdev->drv = drv; set_io_private(sch, io_priv); ret = ccw_device_console_enable(cdev, sch); if (ret) { diff --git a/drivers/sbus/char/bbc_envctrl.c b/drivers/sbus/char/bbc_envctrl.c index 0787b975616..160e7510aca 100644 --- a/drivers/sbus/char/bbc_envctrl.c +++ b/drivers/sbus/char/bbc_envctrl.c @@ -452,9 +452,6 @@ static void attach_one_temp(struct bbc_i2c_bus *bp, struct platform_device *op, if (!tp) return; - INIT_LIST_HEAD(&tp->bp_list); - INIT_LIST_HEAD(&tp->glob_list); - tp->client = bbc_i2c_attach(bp, op); if (!tp->client) { kfree(tp); @@ -500,9 +497,6 @@ static void attach_one_fan(struct bbc_i2c_bus *bp, struct platform_device *op, if (!fp) return; - INIT_LIST_HEAD(&fp->bp_list); - INIT_LIST_HEAD(&fp->glob_list); - fp->client = bbc_i2c_attach(bp, op); if (!fp->client) { kfree(fp); diff --git a/drivers/sbus/char/bbc_i2c.c b/drivers/sbus/char/bbc_i2c.c index e0e6cd605cc..c1441ed282e 100644 --- 
a/drivers/sbus/char/bbc_i2c.c +++ b/drivers/sbus/char/bbc_i2c.c @@ -301,18 +301,13 @@ static struct bbc_i2c_bus * attach_one_i2c(struct platform_device *op, int index if (!bp) return NULL; - INIT_LIST_HEAD(&bp->temps); - INIT_LIST_HEAD(&bp->fans); - bp->i2c_control_regs = of_ioremap(&op->resource[0], 0, 0x2, "bbc_i2c_regs"); if (!bp->i2c_control_regs) goto fail; - if (op->num_resources == 2) { - bp->i2c_bussel_reg = of_ioremap(&op->resource[1], 0, 0x1, "bbc_i2c_bussel"); - if (!bp->i2c_bussel_reg) - goto fail; - } + bp->i2c_bussel_reg = of_ioremap(&op->resource[1], 0, 0x1, "bbc_i2c_bussel"); + if (!bp->i2c_bussel_reg) + goto fail; bp->waiting = 0; init_waitqueue_head(&bp->wq); diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c index 1822cb9ec62..278c9fa6206 100644 --- a/drivers/scsi/arcmsr/arcmsr_hba.c +++ b/drivers/scsi/arcmsr/arcmsr_hba.c @@ -2501,15 +2501,16 @@ static int arcmsr_polling_ccbdone(struct AdapterControlBlock *acb, static int arcmsr_iop_confirm(struct AdapterControlBlock *acb) { uint32_t cdb_phyaddr, cdb_phyaddr_hi32; - + dma_addr_t dma_coherent_handle; /* ******************************************************************** ** here we need to tell iop 331 our freeccb.HighPart ** if freeccb.HighPart is not zero ******************************************************************** */ - cdb_phyaddr = lower_32_bits(acb->dma_coherent_handle); - cdb_phyaddr_hi32 = upper_32_bits(acb->dma_coherent_handle); + dma_coherent_handle = acb->dma_coherent_handle; + cdb_phyaddr = (uint32_t)(dma_coherent_handle); + cdb_phyaddr_hi32 = (uint32_t)((cdb_phyaddr >> 16) >> 16); acb->cdb_phyaddr_hi32 = cdb_phyaddr_hi32; /* *********************************************************************** diff --git a/drivers/scsi/be2iscsi/be_mgmt.c b/drivers/scsi/be2iscsi/be_mgmt.c index ef0a78b0d73..245a9595a93 100644 --- a/drivers/scsi/be2iscsi/be_mgmt.c +++ b/drivers/scsi/be2iscsi/be_mgmt.c @@ -812,20 +812,17 @@ mgmt_static_ip_modify(struct beiscsi_hba *phba, if (ip_action == IP_ACTION_ADD) { memcpy(req->ip_params.ip_record.ip_addr.addr, ip_param->value, - sizeof(req->ip_params.ip_record.ip_addr.addr)); + ip_param->len); if (subnet_param) memcpy(req->ip_params.ip_record.ip_addr.subnet_mask, - subnet_param->value, - sizeof(req->ip_params.ip_record.ip_addr.subnet_mask)); + subnet_param->value, subnet_param->len); } else { memcpy(req->ip_params.ip_record.ip_addr.addr, - if_info->ip_addr.addr, - sizeof(req->ip_params.ip_record.ip_addr.addr)); + if_info->ip_addr.addr, ip_param->len); memcpy(req->ip_params.ip_record.ip_addr.subnet_mask, - if_info->ip_addr.subnet_mask, - sizeof(req->ip_params.ip_record.ip_addr.subnet_mask)); + if_info->ip_addr.subnet_mask, ip_param->len); } rc = mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0); @@ -853,7 +850,7 @@ static int mgmt_modify_gateway(struct beiscsi_hba *phba, uint8_t *gt_addr, req->action = gtway_action; req->ip_addr.ip_type = BE2_IPV4; - memcpy(req->ip_addr.addr, gt_addr, sizeof(req->ip_addr.addr)); + memcpy(req->ip_addr.addr, gt_addr, param_len); return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0); } diff --git a/drivers/scsi/bfa/bfa_ioc.h b/drivers/scsi/bfa/bfa_ioc.h index a119421cb32..23a90e7b710 100644 --- a/drivers/scsi/bfa/bfa_ioc.h +++ b/drivers/scsi/bfa/bfa_ioc.h @@ -72,7 +72,7 @@ struct bfa_sge_s { } while (0) #define bfa_swap_words(_x) ( \ - ((u64)(_x) << 32) | ((u64)(_x) >> 32)) + ((_x) << 32) | ((_x) >> 32)) #ifdef __BIG_ENDIAN #define bfa_sge_to_be(_x) diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c index 
287667c20c6..0353d7f2172 100644 --- a/drivers/scsi/hpsa.c +++ b/drivers/scsi/hpsa.c @@ -3118,7 +3118,7 @@ static int hpsa_big_passthru_ioctl(struct ctlr_info *h, void __user *argp) } if (ioc->Request.Type.Direction == XFER_WRITE) { if (copy_from_user(buff[sg_used], data_ptr, sz)) { - status = -EFAULT; + status = -ENOMEM; goto cleanup1; } } else diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c index c62b3e5d44b..d0fa4b6c551 100644 --- a/drivers/scsi/ibmvscsi/ibmvscsi.c +++ b/drivers/scsi/ibmvscsi/ibmvscsi.c @@ -185,11 +185,6 @@ static struct viosrp_crq *crq_queue_next_crq(struct crq_queue *queue) if (crq->valid & 0x80) { if (++queue->cur == queue->size) queue->cur = 0; - - /* Ensure the read of the valid bit occurs before reading any - * other bits of the CRQ entry - */ - rmb(); } else crq = NULL; spin_unlock_irqrestore(&queue->lock, flags); @@ -208,11 +203,6 @@ static int ibmvscsi_send_crq(struct ibmvscsi_host_data *hostdata, { struct vio_dev *vdev = to_vio_dev(hostdata->dev); - /* - * Ensure the command buffer is flushed to memory before handing it - * over to the VIOS to prevent it from fetching any stale data. - */ - mb(); return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, word1, word2); } @@ -804,8 +794,7 @@ static void purge_requests(struct ibmvscsi_host_data *hostdata, int error_code) evt->hostdata->dev); if (evt->cmnd_done) evt->cmnd_done(evt->cmnd); - } else if (evt->done && evt->crq.format != VIOSRP_MAD_FORMAT && - evt->iu.srp.login_req.opcode != SRP_LOGIN_REQ) + } else if (evt->done) evt->done(evt); free_event_struct(&evt->hostdata->pool, evt); spin_lock_irqsave(hostdata->host->host_lock, flags); diff --git a/drivers/scsi/isci/host.h b/drivers/scsi/isci/host.h index 22a9bb1abae..4911310a38f 100644 --- a/drivers/scsi/isci/host.h +++ b/drivers/scsi/isci/host.h @@ -311,8 +311,9 @@ static inline struct Scsi_Host *to_shost(struct isci_host *ihost) } #define for_each_isci_host(id, ihost, pdev) \ - for (id = 0; id < SCI_MAX_CONTROLLERS && \ - (ihost = to_pci_info(pdev)->hosts[id]); id++) + for (id = 0, ihost = to_pci_info(pdev)->hosts[id]; \ + id < ARRAY_SIZE(to_pci_info(pdev)->hosts) && ihost; \ + ihost = to_pci_info(pdev)->hosts[++id]) static inline void wait_for_start(struct isci_host *ihost) { diff --git a/drivers/scsi/isci/port_config.c b/drivers/scsi/isci/port_config.c index 5017bde3b36..cd962da4a57 100644 --- a/drivers/scsi/isci/port_config.c +++ b/drivers/scsi/isci/port_config.c @@ -615,6 +615,13 @@ static void sci_apc_agent_link_up(struct isci_host *ihost, SCIC_SDS_APC_WAIT_LINK_UP_NOTIFICATION); } else { /* the phy is already the part of the port */ + u32 port_state = iport->sm.current_state_id; + + /* if the PORT'S state is resetting then the link up is from + * port hard reset in this case, we need to tell the port + * that link up is recieved + */ + BUG_ON(port_state != SCI_PORT_RESETTING); port_agent->phy_ready_mask |= 1 << phy_index; sci_port_link_up(iport, iphy); } diff --git a/drivers/scsi/isci/task.c b/drivers/scsi/isci/task.c index 5d6fda72d65..0d30ca849e8 100644 --- a/drivers/scsi/isci/task.c +++ b/drivers/scsi/isci/task.c @@ -801,7 +801,7 @@ int isci_task_I_T_nexus_reset(struct domain_device *dev) /* XXX: need to cleanup any ireqs targeting this * domain_device */ - ret = -ENODEV; + ret = TMF_RESP_FUNC_COMPLETE; goto out; } diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c index f91d41788ce..5de94698450 100644 --- a/drivers/scsi/libiscsi.c +++ b/drivers/scsi/libiscsi.c @@ -717,21 +717,11 @@ 
__iscsi_conn_send_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr, return NULL; } - if (data_size > ISCSI_DEF_MAX_RECV_SEG_LEN) { - iscsi_conn_printk(KERN_ERR, conn, "Invalid buffer len of %u for login task. Max len is %u\n", data_size, ISCSI_DEF_MAX_RECV_SEG_LEN); - return NULL; - } - task = conn->login_task; } else { if (session->state != ISCSI_STATE_LOGGED_IN) return NULL; - if (data_size != 0) { - iscsi_conn_printk(KERN_ERR, conn, "Can not send data buffer of len %u for op 0x%x\n", data_size, opcode); - return NULL; - } - BUG_ON(conn->c_stage == ISCSI_CONN_INITIAL_STAGE); BUG_ON(conn->c_stage == ISCSI_CONN_STOPPED); diff --git a/drivers/scsi/megaraid/megaraid_mm.c b/drivers/scsi/megaraid/megaraid_mm.c index 9bec1717047..25506c77738 100644 --- a/drivers/scsi/megaraid/megaraid_mm.c +++ b/drivers/scsi/megaraid/megaraid_mm.c @@ -486,8 +486,6 @@ mimd_to_kioc(mimd_t __user *umimd, mraid_mmadp_t *adp, uioc_t *kioc) pthru32->dataxferaddr = kioc->buf_paddr; if (kioc->data_dir & UIOC_WR) { - if (pthru32->dataxferlen > kioc->xferlen) - return -EINVAL; if (copy_from_user(kioc->buf_vaddr, kioc->user_data, pthru32->dataxferlen)) { return (-EFAULT); diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h index b5212135838..684cc343cf0 100644 --- a/drivers/scsi/megaraid/megaraid_sas.h +++ b/drivers/scsi/megaraid/megaraid_sas.h @@ -1295,6 +1295,7 @@ struct megasas_instance { u32 *reply_queue; dma_addr_t reply_queue_h; + unsigned long base_addr; struct megasas_register_set __iomem *reg_set; struct megasas_pd_list pd_list[MEGASAS_MAX_PD]; diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c index 4956c99ed90..b3e5c178787 100644 --- a/drivers/scsi/megaraid/megaraid_sas_base.c +++ b/drivers/scsi/megaraid/megaraid_sas_base.c @@ -3461,7 +3461,6 @@ static int megasas_init_fw(struct megasas_instance *instance) u32 max_sectors_1; u32 max_sectors_2; u32 tmp_sectors, msix_enable; - resource_size_t base_addr; struct megasas_register_set __iomem *reg_set; struct megasas_ctrl_info *ctrl_info; unsigned long bar_list; @@ -3470,14 +3469,14 @@ static int megasas_init_fw(struct megasas_instance *instance) /* Find first memory bar */ bar_list = pci_select_bars(instance->pdev, IORESOURCE_MEM); instance->bar = find_first_bit(&bar_list, sizeof(unsigned long)); + instance->base_addr = pci_resource_start(instance->pdev, instance->bar); if (pci_request_selected_regions(instance->pdev, instance->bar, "megasas: LSI")) { printk(KERN_DEBUG "megasas: IO memory region busy!\n"); return -EBUSY; } - base_addr = pci_resource_start(instance->pdev, instance->bar); - instance->reg_set = ioremap_nocache(base_addr, 8192); + instance->reg_set = ioremap_nocache(instance->base_addr, 8192); if (!instance->reg_set) { printk(KERN_DEBUG "megasas: Failed to map IO mem\n"); diff --git a/drivers/scsi/mpt2sas/mpt2sas_scsih.c b/drivers/scsi/mpt2sas/mpt2sas_scsih.c index fe76185cd79..8dbe500c935 100644 --- a/drivers/scsi/mpt2sas/mpt2sas_scsih.c +++ b/drivers/scsi/mpt2sas/mpt2sas_scsih.c @@ -8174,6 +8174,7 @@ _scsih_suspend(struct pci_dev *pdev, pm_message_t state) mpt2sas_base_free_resources(ioc); pci_save_state(pdev); + pci_disable_device(pdev); pci_set_power_state(pdev, device_state); return 0; } diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h index 799c266b0bb..c32efc75322 100644 --- a/drivers/scsi/qla2xxx/qla_def.h +++ b/drivers/scsi/qla2xxx/qla_def.h @@ -2980,7 +2980,8 @@ struct qla_hw_data { IS_QLA25XX(ha) || IS_QLA81XX(ha) || \ 
IS_QLA82XX(ha) || IS_QLA83XX(ha)) #define IS_MSIX_NACK_CAPABLE(ha) (IS_QLA81XX(ha) || IS_QLA83XX(ha)) -#define IS_NOPOLLING_TYPE(ha) (IS_QLA81XX(ha) && (ha)->flags.msix_enabled) +#define IS_NOPOLLING_TYPE(ha) ((IS_QLA25XX(ha) || IS_QLA81XX(ha) || \ + IS_QLA83XX(ha)) && (ha)->flags.msix_enabled) #define IS_FAC_REQUIRED(ha) (IS_QLA81XX(ha) || IS_QLA83XX(ha)) #define IS_NOCACHE_VPD_TYPE(ha) (IS_QLA81XX(ha) || IS_QLA83XX(ha)) #define IS_ALOGIO_CAPABLE(ha) (IS_QLA23XX(ha) || IS_FWI2_CAPABLE(ha)) diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index 66c495d2101..ad72c1d8511 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -2553,7 +2553,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) ha->flags.enable_64bit_addressing ? "enable" : "disable"); ret = qla2x00_mem_alloc(ha, req_length, rsp_length, &req, &rsp); - if (ret) { + if (!ret) { ql_log_pci(ql_log_fatal, pdev, 0x0031, "Failed to allocate memory for adapter, aborting.\n"); @@ -3458,10 +3458,10 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len, else { qla2x00_set_reserved_loop_ids(ha); ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0123, - "loop_id_map=%p.\n", ha->loop_id_map); + "loop_id_map=%p. \n", ha->loop_id_map); } - return 0; + return 1; fail_async_pd: dma_pool_free(ha->s_dma_pool, ha->ex_init_cb, ha->ex_init_cb_dma); diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c index e6884940d10..f033b191a02 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -1514,10 +1514,12 @@ static inline void qlt_unmap_sg(struct scsi_qla_host *vha, static int qlt_check_reserve_free_req(struct scsi_qla_host *vha, uint32_t req_cnt) { + struct qla_hw_data *ha = vha->hw; + device_reg_t __iomem *reg = ha->iobase; uint32_t cnt; if (vha->req->cnt < (req_cnt + 2)) { - cnt = (uint16_t)RD_REG_DWORD(vha->req->req_q_out); + cnt = (uint16_t)RD_REG_DWORD(®->isp24.req_q_out); ql_dbg(ql_dbg_tgt, vha, 0xe00a, "Request ring circled: cnt=%d, vha->->ring_index=%d, " diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c index cfd49eca67a..66b0b26a138 100644 --- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c +++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c @@ -762,16 +762,7 @@ static void tcm_qla2xxx_clear_nacl_from_fcport_map(struct qla_tgt_sess *sess) pr_debug("fc_rport domain: port_id 0x%06x\n", nacl->nport_id); node = btree_remove32(&lport->lport_fcport_map, nacl->nport_id); - if (WARN_ON(node && (node != se_nacl))) { - /* - * The nacl no longer matches what we think it should be. - * Most likely a new dynamic acl has been added while - * someone dropped the hardware lock. It clearly is a - * bug elsewhere, but this bit can't make things worse. 
- */ - btree_insert32(&lport->lport_fcport_map, nacl->nport_id, - node, GFP_ATOMIC); - } + WARN_ON(node && (node != se_nacl)); pr_debug("Removed from fcport_map: %p for WWNN: 0x%016LX, port_id: 0x%06x\n", se_nacl, nacl->nport_wwnn, nacl->nport_id); diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index e5953c8018c..86d522004a2 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -815,14 +815,6 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes) scsi_next_command(cmd); return; } - } else if (blk_rq_bytes(req) == 0 && result && !sense_deferred) { - /* - * Certain non BLOCK_PC requests are commands that don't - * actually transfer anything (FLUSH), so cannot use - * good_bytes != blk_rq_bytes(req) as the signal for an error. - * This sets the error explicitly for the problem case. - */ - error = __scsi_error_from_host_byte(cmd, result); } /* no bidi support for !REQ_TYPE_BLOCK_PC yet */ diff --git a/drivers/scsi/scsi_netlink.c b/drivers/scsi/scsi_netlink.c index 109802f776e..fe30ea94ffe 100644 --- a/drivers/scsi/scsi_netlink.c +++ b/drivers/scsi/scsi_netlink.c @@ -77,7 +77,7 @@ scsi_nl_rcv_msg(struct sk_buff *skb) goto next_msg; } - if (!netlink_capable(skb, CAP_SYS_ADMIN)) { + if (!capable(CAP_SYS_ADMIN)) { err = -EPERM; goto next_msg; } diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c index 859240408f9..3e58b2245f1 100644 --- a/drivers/scsi/scsi_scan.c +++ b/drivers/scsi/scsi_scan.c @@ -320,7 +320,6 @@ static void scsi_target_destroy(struct scsi_target *starget) struct Scsi_Host *shost = dev_to_shost(dev->parent); unsigned long flags; - starget->state = STARGET_DEL; transport_destroy_device(dev); spin_lock_irqsave(shost->host_lock, flags); if (shost->hostt->target_destroy) @@ -372,37 +371,6 @@ static struct scsi_target *__scsi_find_target(struct device *parent, } /** - * scsi_target_reap_ref_release - remove target from visibility - * @kref: the reap_ref in the target being released - * - * Called on last put of reap_ref, which is the indication that no device - * under this target is visible anymore, so render the target invisible in - * sysfs. Note: we have to be in user context here because the target reaps - * should be done in places where the scsi device visibility is being removed. - */ -static void scsi_target_reap_ref_release(struct kref *kref) -{ - struct scsi_target *starget - = container_of(kref, struct scsi_target, reap_ref); - - /* - * if we get here and the target is still in the CREATED state that - * means it was allocated but never made visible (because a scan - * turned up no LUNs), so don't call device_del() on it. 
- */ - if (starget->state != STARGET_CREATED) { - transport_remove_device(&starget->dev); - device_del(&starget->dev); - } - scsi_target_destroy(starget); -} - -static void scsi_target_reap_ref_put(struct scsi_target *starget) -{ - kref_put(&starget->reap_ref, scsi_target_reap_ref_release); -} - -/** * scsi_alloc_target - allocate a new or find an existing target * @parent: parent of the target (need not be a scsi host) * @channel: target channel number (zero if no channels) @@ -424,7 +392,7 @@ static struct scsi_target *scsi_alloc_target(struct device *parent, + shost->transportt->target_size; struct scsi_target *starget; struct scsi_target *found_target; - int error, ref_got; + int error; starget = kzalloc(size, GFP_KERNEL); if (!starget) { @@ -433,7 +401,7 @@ static struct scsi_target *scsi_alloc_target(struct device *parent, } dev = &starget->dev; device_initialize(dev); - kref_init(&starget->reap_ref); + starget->reap_ref = 1; dev->parent = get_device(parent); dev_set_name(dev, "target%d:%d:%d", shost->host_no, channel, id); dev->bus = &scsi_bus_type; @@ -473,36 +441,29 @@ static struct scsi_target *scsi_alloc_target(struct device *parent, return starget; found: - /* - * release routine already fired if kref is zero, so if we can still - * take the reference, the target must be alive. If we can't, it must - * be dying and we need to wait for a new target - */ - ref_got = kref_get_unless_zero(&found_target->reap_ref); - + found_target->reap_ref++; spin_unlock_irqrestore(shost->host_lock, flags); - if (ref_got) { + if (found_target->state != STARGET_DEL) { put_device(dev); return found_target; } - /* - * Unfortunately, we found a dying target; need to wait until it's - * dead before we can get a new one. There is an anomaly here. We - * *should* call scsi_target_reap() to balance the kref_get() of the - * reap_ref above. However, since the target being released, it's - * already invisible and the reap_ref is irrelevant. If we call - * scsi_target_reap() we might spuriously do another device_del() on - * an already invisible target. - */ + /* Unfortunately, we found a dying target; need to + * wait until it's dead before we can get a new one */ put_device(&found_target->dev); - /* - * length of time is irrelevant here, we just want to yield the CPU - * for a tick to avoid busy waiting for the target to die. 
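The lines being removed above depend on taking a reference only while the object is still alive: kref_get_unless_zero() fails once the final put has run the release function. A minimal sketch of the idiom with an illustrative type:

    struct foo {
            struct kref ref;
    };

    static void foo_release(struct kref *kref)
    {
            kfree(container_of(kref, struct foo, ref));
    }

    /* true only if the object has not already started dying */
    static bool foo_tryget(struct foo *f)
    {
            return kref_get_unless_zero(&f->ref);
    }

    static void foo_put(struct foo *f)
    {
            kref_put(&f->ref, foo_release);
    }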
- */ - msleep(1); + flush_scheduled_work(); goto retry; } +static void scsi_target_reap_usercontext(struct work_struct *work) +{ + struct scsi_target *starget = + container_of(work, struct scsi_target, ew.work); + + transport_remove_device(&starget->dev); + device_del(&starget->dev); + scsi_target_destroy(starget); +} + /** * scsi_target_reap - check to see if target is in use and destroy if not * @starget: target to be checked @@ -513,13 +474,28 @@ static struct scsi_target *scsi_alloc_target(struct device *parent, */ void scsi_target_reap(struct scsi_target *starget) { - /* - * serious problem if this triggers: STARGET_DEL is only set in the if - * the reap_ref drops to zero, so we're trying to do another final put - * on an already released kref - */ - BUG_ON(starget->state == STARGET_DEL); - scsi_target_reap_ref_put(starget); + struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); + unsigned long flags; + enum scsi_target_state state; + int empty = 0; + + spin_lock_irqsave(shost->host_lock, flags); + state = starget->state; + if (--starget->reap_ref == 0 && list_empty(&starget->devices)) { + empty = 1; + starget->state = STARGET_DEL; + } + spin_unlock_irqrestore(shost->host_lock, flags); + + if (!empty) + return; + + BUG_ON(state == STARGET_DEL); + if (state == STARGET_CREATED) + scsi_target_destroy(starget); + else + execute_in_process_context(scsi_target_reap_usercontext, + &starget->ew); } /** @@ -1551,10 +1527,6 @@ struct scsi_device *__scsi_add_device(struct Scsi_Host *shost, uint channel, } mutex_unlock(&shost->scan_mutex); scsi_autopm_put_target(starget); - /* - * paired with scsi_alloc_target(). Target will be destroyed unless - * scsi_probe_and_add_lun made an underlying device visible - */ scsi_target_reap(starget); put_device(&starget->dev); @@ -1635,10 +1607,8 @@ static void __scsi_scan_target(struct device *parent, unsigned int channel, out_reap: scsi_autopm_put_target(starget); - /* - * paired with scsi_alloc_target(): determine if the target has - * any children at all and if not, nuke it - */ + /* now determine if the target has any children at all + * and if not, nuke it */ scsi_target_reap(starget); put_device(&starget->dev); diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c index 9e2dd478dd1..931a7d95420 100644 --- a/drivers/scsi/scsi_sysfs.c +++ b/drivers/scsi/scsi_sysfs.c @@ -332,14 +332,17 @@ static void scsi_device_dev_release_usercontext(struct work_struct *work) { struct scsi_device *sdev; struct device *parent; + struct scsi_target *starget; struct list_head *this, *tmp; unsigned long flags; sdev = container_of(work, struct scsi_device, ew.work); parent = sdev->sdev_gendev.parent; + starget = to_scsi_target(parent); spin_lock_irqsave(sdev->host->host_lock, flags); + starget->reap_ref++; list_del(&sdev->siblings); list_del(&sdev->same_target_siblings); list_del(&sdev->starved_entry); @@ -359,6 +362,8 @@ static void scsi_device_dev_release_usercontext(struct work_struct *work) /* NULL queue means the device can't be used */ sdev->request_queue = NULL; + scsi_target_reap(scsi_target(sdev)); + kfree(sdev->inquiry); kfree(sdev); @@ -973,13 +978,6 @@ void __scsi_remove_device(struct scsi_device *sdev) sdev->host->hostt->slave_destroy(sdev); transport_destroy_device(dev); - /* - * Paired with the kref_get() in scsi_sysfs_initialize(). We have - * remoed sysfs visibility from the device, so make the target - * invisible if this was the last device underneath it. 
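The reverted-to teardown path defers the final device_del()/destroy work through execute_in_process_context(), which calls the handler directly outside interrupt context and otherwise schedules it on a workqueue. A sketch with illustrative names:

    struct foo {
            struct execute_work ew;
            /* ... */
    };

    static void foo_release_usercontext(struct work_struct *work)
    {
            struct foo *f = container_of(work, struct foo, ew.work);

            /* safe to sleep here: tear down sysfs, drop references, ... */
            kfree(f);
    }

    /* somewhere on the teardown path */
    execute_in_process_context(foo_release_usercontext, &f->ew);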
- */ - scsi_target_reap(scsi_target(sdev)); - put_device(dev); } @@ -1042,7 +1040,7 @@ void scsi_remove_target(struct device *dev) continue; if (starget->dev.parent == dev || &starget->dev == dev) { /* assuming new targets arrive at the end */ - kref_get(&starget->reap_ref); + starget->reap_ref++; spin_unlock_irqrestore(shost->host_lock, flags); if (last) scsi_target_reap(last); @@ -1126,12 +1124,6 @@ void scsi_sysfs_device_initialize(struct scsi_device *sdev) list_add_tail(&sdev->same_target_siblings, &starget->devices); list_add_tail(&sdev->siblings, &shost->__devices); spin_unlock_irqrestore(shost->host_lock, flags); - /* - * device can now only be removed via __scsi_remove_device() so hold - * the target. Target will be held in CREATED state until something - * beneath it becomes visible (in which case it moves to RUNNING) - */ - kref_get(&starget->reap_ref); } int scsi_is_sdev_device(const struct device *dev) diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c index 87ca72d36d5..fb7437dd5b7 100644 --- a/drivers/scsi/storvsc_drv.c +++ b/drivers/scsi/storvsc_drv.c @@ -33,7 +33,6 @@ #include <linux/device.h> #include <linux/hyperv.h> #include <linux/mempool.h> -#include <linux/blkdev.h> #include <scsi/scsi.h> #include <scsi/scsi_cmnd.h> #include <scsi/scsi_host.h> @@ -804,13 +803,6 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb, case ATA_12: set_host_byte(scmnd, DID_PASSTHROUGH); break; - /* - * On Some Windows hosts TEST_UNIT_READY command can return - * SRB_STATUS_ERROR, let the upper level code deal with it - * based on the sense information. - */ - case TEST_UNIT_READY: - break; default: set_host_byte(scmnd, DID_TARGET_FAILURE); } @@ -1197,9 +1189,6 @@ static void storvsc_device_destroy(struct scsi_device *sdevice) { struct stor_mem_pools *memp = sdevice->hostdata; - if (!memp) - return; - mempool_destroy(memp->request_mempool); kmem_cache_destroy(memp->request_pool); kfree(memp); @@ -1293,16 +1282,6 @@ static int storvsc_host_reset_handler(struct scsi_cmnd *scmnd) return SUCCESS; } -/* - * The host guarantees to respond to each command, although I/O latencies might - * be unbounded on Azure. Reset the timer unconditionally to give the host a - * chance to perform EH. 
- */ -static enum blk_eh_timer_return storvsc_eh_timed_out(struct scsi_cmnd *scmnd) -{ - return BLK_EH_RESET_TIMER; -} - static bool storvsc_scsi_cmd_ok(struct scsi_cmnd *scmnd) { bool allowed = true; @@ -1462,7 +1441,6 @@ static struct scsi_host_template scsi_driver = { .bios_param = storvsc_get_chs, .queuecommand = storvsc_queuecommand, .eh_host_reset_handler = storvsc_host_reset_handler, - .eh_timed_out = storvsc_eh_timed_out, .slave_alloc = storvsc_device_alloc, .slave_destroy = storvsc_device_destroy, .slave_configure = storvsc_device_configure, diff --git a/drivers/scsi/sym53c8xx_2/sym_hipd.c b/drivers/scsi/sym53c8xx_2/sym_hipd.c index 6b349e30186..d92fe4037e9 100644 --- a/drivers/scsi/sym53c8xx_2/sym_hipd.c +++ b/drivers/scsi/sym53c8xx_2/sym_hipd.c @@ -3000,11 +3000,7 @@ sym_dequeue_from_squeue(struct sym_hcb *np, int i, int target, int lun, int task if ((target == -1 || cp->target == target) && (lun == -1 || cp->lun == lun) && (task == -1 || cp->tag == task)) { -#ifdef SYM_OPT_HANDLE_DEVICE_QUEUEING sym_set_cam_status(cp->cmd, DID_SOFT_ERROR); -#else - sym_set_cam_status(cp->cmd, DID_REQUEUE); -#endif sym_remque(&cp->link_ccbq); sym_insque_tail(&cp->link_ccbq, &np->comp_ccbq); } diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c index 11f5326f449..b26f1a5cc0e 100644 --- a/drivers/scsi/virtio_scsi.c +++ b/drivers/scsi/virtio_scsi.c @@ -270,16 +270,6 @@ static void virtscsi_req_done(struct virtqueue *vq) virtscsi_vq_done(vscsi, req_vq, virtscsi_complete_cmd); }; -static void virtscsi_poll_requests(struct virtio_scsi *vscsi) -{ - int i, num_vqs; - - num_vqs = vscsi->num_queues; - for (i = 0; i < num_vqs; i++) - virtscsi_vq_done(vscsi, &vscsi->req_vqs[i], - virtscsi_complete_cmd); -} - static void virtscsi_complete_free(struct virtio_scsi *vscsi, void *buf) { struct virtio_scsi_cmd *cmd = buf; @@ -298,8 +288,6 @@ static void virtscsi_ctrl_done(struct virtqueue *vq) virtscsi_vq_done(vscsi, &vscsi->ctrl_vq, virtscsi_complete_free); }; -static void virtscsi_handle_event(struct work_struct *work); - static int virtscsi_kick_event(struct virtio_scsi *vscsi, struct virtio_scsi_event_node *event_node) { @@ -307,7 +295,6 @@ static int virtscsi_kick_event(struct virtio_scsi *vscsi, struct scatterlist sg; unsigned long flags; - INIT_WORK(&event_node->work, virtscsi_handle_event); sg_init_one(&sg, &event_node->event, sizeof(struct virtio_scsi_event)); spin_lock_irqsave(&vscsi->event_vq.vq_lock, flags); @@ -425,6 +412,7 @@ static void virtscsi_complete_event(struct virtio_scsi *vscsi, void *buf) { struct virtio_scsi_event_node *event_node = buf; + INIT_WORK(&event_node->work, virtscsi_handle_event); schedule_work(&event_node->work); } @@ -614,18 +602,6 @@ static int virtscsi_tmf(struct virtio_scsi *vscsi, struct virtio_scsi_cmd *cmd) cmd->resp.tmf.response == VIRTIO_SCSI_S_FUNCTION_SUCCEEDED) ret = SUCCESS; - /* - * The spec guarantees that all requests related to the TMF have - * been completed, but the callback might not have run yet if - * we're using independent interrupts (e.g. MSI). Poll the - * virtqueues once. - * - * In the abort case, sc->scsi_done will do nothing, because - * the block layer must have detected a timeout and as a result - * REQ_ATOM_COMPLETE has been set. 
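"Poll the virtqueues once", as the removed comment above puts it, means draining any already-completed buffers without waiting for the queue's interrupt callback to run. A minimal sketch for a single queue, with handle_completion() standing in for the driver's completion routine:

    void *buf;
    unsigned int len;

    virtqueue_disable_cb(vq);                /* quiesce the callback */
    while ((buf = virtqueue_get_buf(vq, &len)))
            handle_completion(buf, len);     /* e.g. finish the scsi_cmnd */
    virtqueue_enable_cb(vq);                 /* re-arm notifications */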
- */ - virtscsi_poll_requests(vscsi); - out: mempool_free(cmd, virtscsi_cmd_pool); return ret; @@ -775,12 +751,8 @@ static void __virtscsi_set_affinity(struct virtio_scsi *vscsi, bool affinity) vscsi->affinity_hint_set = true; } else { - for (i = 0; i < vscsi->num_queues; i++) { - if (!vscsi->req_vqs[i].vq) - continue; - + for (i = 0; i < vscsi->num_queues; i++) virtqueue_set_affinity(vscsi->req_vqs[i].vq, -1); - } vscsi->affinity_hint_set = false; } diff --git a/drivers/spi/spi-ath79.c b/drivers/spi/spi-ath79.c index 23f1ba6e9cc..e504b763605 100644 --- a/drivers/spi/spi-ath79.c +++ b/drivers/spi/spi-ath79.c @@ -132,9 +132,9 @@ static int ath79_spi_setup_cs(struct spi_device *spi) flags = GPIOF_DIR_OUT; if (spi->mode & SPI_CS_HIGH) - flags |= GPIOF_INIT_LOW; - else flags |= GPIOF_INIT_HIGH; + else + flags |= GPIOF_INIT_LOW; status = gpio_request_one(cdata->gpio, flags, dev_name(&spi->dev)); diff --git a/drivers/spi/spi-dw-mid.c b/drivers/spi/spi-dw-mid.c index 0791c92e8c5..b9f0192758d 100644 --- a/drivers/spi/spi-dw-mid.c +++ b/drivers/spi/spi-dw-mid.c @@ -89,13 +89,7 @@ err_exit: static void mid_spi_dma_exit(struct dw_spi *dws) { - if (!dws->dma_inited) - return; - - dmaengine_terminate_all(dws->txchan); dma_release_channel(dws->txchan); - - dmaengine_terminate_all(dws->rxchan); dma_release_channel(dws->rxchan); } @@ -142,7 +136,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change) txconf.dst_addr = dws->dma_addr; txconf.dst_maxburst = LNW_DMA_MSIZE_16; txconf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; - txconf.dst_addr_width = dws->dma_width; + txconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES; txconf.device_fc = false; txchan->device->device_control(txchan, DMA_SLAVE_CONFIG, @@ -165,7 +159,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change) rxconf.src_addr = dws->dma_addr; rxconf.src_maxburst = LNW_DMA_MSIZE_16; rxconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; - rxconf.src_addr_width = dws->dma_width; + rxconf.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES; rxconf.device_fc = false; rxchan->device->device_control(rxchan, DMA_SLAVE_CONFIG, diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c index 798729eb668..86d2158946b 100644 --- a/drivers/spi/spi-omap2-mcspi.c +++ b/drivers/spi/spi-omap2-mcspi.c @@ -136,7 +136,6 @@ struct omap2_mcspi_cs { void __iomem *base; unsigned long phys; int word_len; - u16 mode; struct list_head node; /* Context save and restore shadow register */ u32 chconf0; @@ -802,8 +801,6 @@ static int omap2_mcspi_setup_transfer(struct spi_device *spi, mcspi_write_chconf0(spi, l); - cs->mode = spi->mode; - dev_dbg(&spi->dev, "setup: speed %d, sample %s edge, clk %s\n", OMAP2_MCSPI_MAX_FREQ >> div, (spi->mode & SPI_CPHA) ? "trailing" : "leading", @@ -874,7 +871,6 @@ static int omap2_mcspi_setup(struct spi_device *spi) return -ENOMEM; cs->base = mcspi->base + spi->chip_select * 0x14; cs->phys = mcspi->phys + spi->chip_select * 0x14; - cs->mode = 0; cs->chconf0 = 0; spi->controller_state = cs; /* Link this to context save list */ @@ -1047,16 +1043,6 @@ static void omap2_mcspi_work(struct omap2_mcspi *mcspi, struct spi_message *m) mcspi_read_cs_reg(spi, OMAP2_MCSPI_MODULCTRL); } - /* - * The slave driver could have changed spi->mode in which case - * it will be different from cs->mode (the current hardware setup). 
- * If so, set par_override (even though its not a parity issue) so - * omap2_mcspi_setup_transfer will be called to configure the hardware - * with the correct mode on the first iteration of the loop below. - */ - if (spi->mode != cs->mode) - par_override = 1; - omap2_mcspi_set_enable(spi, 0); m->status = status; diff --git a/drivers/spi/spi-orion.c b/drivers/spi/spi-orion.c index 183aa80c901..66a5f82cf13 100644 --- a/drivers/spi/spi-orion.c +++ b/drivers/spi/spi-orion.c @@ -403,6 +403,8 @@ static int orion_spi_probe(struct platform_device *pdev) struct resource *r; unsigned long tclk_hz; int status = 0; + const u32 *iprop; + int size; master = spi_alloc_master(&pdev->dev, sizeof *spi); if (master == NULL) { @@ -413,10 +415,10 @@ static int orion_spi_probe(struct platform_device *pdev) if (pdev->id != -1) master->bus_num = pdev->id; if (pdev->dev.of_node) { - u32 cell_index; - if (!of_property_read_u32(pdev->dev.of_node, "cell-index", - &cell_index)) - master->bus_num = cell_index; + iprop = of_get_property(pdev->dev.of_node, "cell-index", + &size); + if (iprop && size == sizeof(*iprop)) + master->bus_num = *iprop; } /* we support only mode 0, and no options */ diff --git a/drivers/spi/spi-pl022.c b/drivers/spi/spi-pl022.c index 5266c89fc98..371cc66f1a0 100644 --- a/drivers/spi/spi-pl022.c +++ b/drivers/spi/spi-pl022.c @@ -1080,7 +1080,7 @@ err_rxdesc: pl022->sgt_tx.nents, DMA_TO_DEVICE); err_tx_sgmap: dma_unmap_sg(rxchan->device->dev, pl022->sgt_rx.sgl, - pl022->sgt_rx.nents, DMA_FROM_DEVICE); + pl022->sgt_tx.nents, DMA_FROM_DEVICE); err_rx_sgmap: sg_free_table(&pl022->sgt_tx); err_alloc_tx_sg: diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c index d26a2d195d2..48b396fced0 100644 --- a/drivers/spi/spi-pxa2xx.c +++ b/drivers/spi/spi-pxa2xx.c @@ -1324,9 +1324,7 @@ static int pxa2xx_spi_suspend(struct device *dev) if (status != 0) return status; write_SSCR0(0, drv_data->ioaddr); - - if (!pm_runtime_suspended(dev)) - clk_disable_unprepare(ssp->clk); + clk_disable_unprepare(ssp->clk); return 0; } @@ -1340,8 +1338,7 @@ static int pxa2xx_spi_resume(struct device *dev) pxa2xx_spi_dma_resume(drv_data); /* Enable the SSP clock */ - if (!pm_runtime_suspended(dev)) - clk_prepare_enable(ssp->clk); + clk_prepare_enable(ssp->clk); /* Start the queue running */ status = spi_master_resume(drv_data->master); diff --git a/drivers/staging/comedi/drivers/8255_pci.c b/drivers/staging/comedi/drivers/8255_pci.c index e54031c558e..05bcf0dffb8 100644 --- a/drivers/staging/comedi/drivers/8255_pci.c +++ b/drivers/staging/comedi/drivers/8255_pci.c @@ -59,7 +59,6 @@ Configuration Options: not applicable, uses PCI auto config #include "../comedidev.h" #include "8255.h" -#include "mite.h" enum pci_8255_boardid { BOARD_ADLINK_PCI7224, @@ -83,7 +82,6 @@ struct pci_8255_boardinfo { const char *name; int dio_badr; int n_8255; - unsigned int has_mite:1; }; static const struct pci_8255_boardinfo pci_8255_boards[] = { @@ -131,43 +129,36 @@ static const struct pci_8255_boardinfo pci_8255_boards[] = { .name = "ni_pci-dio-96", .dio_badr = 1, .n_8255 = 4, - .has_mite = 1, }, [BOARD_NI_PCIDIO96B] = { .name = "ni_pci-dio-96b", .dio_badr = 1, .n_8255 = 4, - .has_mite = 1, }, [BOARD_NI_PXI6508] = { .name = "ni_pxi-6508", .dio_badr = 1, .n_8255 = 4, - .has_mite = 1, }, [BOARD_NI_PCI6503] = { .name = "ni_pci-6503", .dio_badr = 1, .n_8255 = 1, - .has_mite = 1, }, [BOARD_NI_PCI6503B] = { .name = "ni_pci-6503b", .dio_badr = 1, .n_8255 = 1, - .has_mite = 1, }, [BOARD_NI_PCI6503X] = { .name = "ni_pci-6503x", .dio_badr = 1, 
.n_8255 = 1, - .has_mite = 1, }, [BOARD_NI_PXI_6503] = { .name = "ni_pxi-6503", .dio_badr = 1, .n_8255 = 1, - .has_mite = 1, }, }; @@ -175,25 +166,6 @@ struct pci_8255_private { void __iomem *mmio_base; }; -static int pci_8255_mite_init(struct pci_dev *pcidev) -{ - void __iomem *mite_base; - u32 main_phys_addr; - - /* ioremap the MITE registers (BAR 0) temporarily */ - mite_base = pci_ioremap_bar(pcidev, 0); - if (!mite_base) - return -ENOMEM; - - /* set data window to main registers (BAR 1) */ - main_phys_addr = pci_resource_start(pcidev, 1); - writel(main_phys_addr | WENAB, mite_base + MITE_IODWBSR); - - /* finished with MITE registers */ - iounmap(mite_base); - return 0; -} - static int pci_8255_mmio(int dir, int port, int data, unsigned long iobase) { void __iomem *mmio_base = (void __iomem *)iobase; @@ -233,12 +205,6 @@ static int pci_8255_auto_attach(struct comedi_device *dev, if (ret) return ret; - if (board->has_mite) { - ret = pci_8255_mite_init(pcidev); - if (ret) - return ret; - } - is_mmio = (pci_resource_flags(pcidev, board->dio_badr) & IORESOURCE_MEM) != 0; if (is_mmio) { diff --git a/drivers/staging/comedi/drivers/ni_daq_700.c b/drivers/staging/comedi/drivers/ni_daq_700.c index 5e80d428e54..d067ef70e19 100644 --- a/drivers/staging/comedi/drivers/ni_daq_700.c +++ b/drivers/staging/comedi/drivers/ni_daq_700.c @@ -127,8 +127,6 @@ static int daq700_ai_rinsn(struct comedi_device *dev, /* write channel to multiplexer */ /* set mask scan bit high to disable scanning */ outb(chan | 0x80, dev->iobase + CMD_R1); - /* mux needs 2us to really settle [Fred Brooks]. */ - udelay(2); /* convert n samples */ for (n = 0; n < insn->n; n++) { diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c index bc23d66a7a1..6330af656a0 100644 --- a/drivers/staging/iio/impedance-analyzer/ad5933.c +++ b/drivers/staging/iio/impedance-analyzer/ad5933.c @@ -115,7 +115,6 @@ static const struct iio_chan_spec ad5933_channels[] = { .channel = 0, .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), .address = AD5933_REG_TEMP_DATA, - .scan_index = -1, .scan_type = { .sign = 's', .realbits = 14, @@ -125,7 +124,9 @@ static const struct iio_chan_spec ad5933_channels[] = { .type = IIO_VOLTAGE, .indexed = 1, .channel = 0, - .extend_name = "real", + .extend_name = "real_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | + BIT(IIO_CHAN_INFO_SCALE), .address = AD5933_REG_REAL_DATA, .scan_index = 0, .scan_type = { @@ -137,7 +138,9 @@ static const struct iio_chan_spec ad5933_channels[] = { .type = IIO_VOLTAGE, .indexed = 1, .channel = 0, - .extend_name = "imag", + .extend_name = "imag_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | + BIT(IIO_CHAN_INFO_SCALE), .address = AD5933_REG_IMAG_DATA, .scan_index = 1, .scan_type = { @@ -743,14 +746,14 @@ static int ad5933_probe(struct i2c_client *client, indio_dev->name = id->name; indio_dev->modes = INDIO_DIRECT_MODE; indio_dev->channels = ad5933_channels; - indio_dev->num_channels = ARRAY_SIZE(ad5933_channels); + indio_dev->num_channels = 1; /* only register temp0_input */ ret = ad5933_register_ring_funcs_and_init(indio_dev); if (ret) goto error_disable_reg; - ret = iio_buffer_register(indio_dev, ad5933_channels, - ARRAY_SIZE(ad5933_channels)); + /* skip temp0_input, register in0_(real|imag)_raw */ + ret = iio_buffer_register(indio_dev, &ad5933_channels[1], 2); if (ret) goto error_unreg_ring; diff --git a/drivers/staging/iio/light/tsl2x7x_core.c b/drivers/staging/iio/light/tsl2x7x_core.c index 
64c73adfa3b..c99f890cc6c 100644 --- a/drivers/staging/iio/light/tsl2x7x_core.c +++ b/drivers/staging/iio/light/tsl2x7x_core.c @@ -672,13 +672,9 @@ static int tsl2x7x_chip_on(struct iio_dev *indio_dev) chip->tsl2x7x_config[TSL2X7X_PRX_COUNT] = chip->tsl2x7x_settings.prox_pulse_count; chip->tsl2x7x_config[TSL2X7X_PRX_MINTHRESHLO] = - (chip->tsl2x7x_settings.prox_thres_low) & 0xFF; - chip->tsl2x7x_config[TSL2X7X_PRX_MINTHRESHHI] = - (chip->tsl2x7x_settings.prox_thres_low >> 8) & 0xFF; + chip->tsl2x7x_settings.prox_thres_low; chip->tsl2x7x_config[TSL2X7X_PRX_MAXTHRESHLO] = - (chip->tsl2x7x_settings.prox_thres_high) & 0xFF; - chip->tsl2x7x_config[TSL2X7X_PRX_MAXTHRESHHI] = - (chip->tsl2x7x_settings.prox_thres_high >> 8) & 0xFF; + chip->tsl2x7x_settings.prox_thres_high; /* and make sure we're not already on */ if (chip->tsl2x7x_chip_status == TSL2X7X_CHIP_WORKING) { diff --git a/drivers/staging/iio/meter/ade7758.h b/drivers/staging/iio/meter/ade7758.h index e8c98cf5707..07318203a83 100644 --- a/drivers/staging/iio/meter/ade7758.h +++ b/drivers/staging/iio/meter/ade7758.h @@ -119,6 +119,7 @@ struct ade7758_state { u8 *tx; u8 *rx; struct mutex buf_lock; + const struct iio_chan_spec *ade7758_ring_channels; struct spi_transfer ring_xfer[4]; struct spi_message ring_msg; /* diff --git a/drivers/staging/iio/meter/ade7758_core.c b/drivers/staging/iio/meter/ade7758_core.c index 75d9fe6a1bc..8f5bcfab356 100644 --- a/drivers/staging/iio/meter/ade7758_core.c +++ b/drivers/staging/iio/meter/ade7758_core.c @@ -648,6 +648,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_VOLTAGE, .indexed = 1, .channel = 0, + .extend_name = "raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_A, AD7758_VOLTAGE), .scan_index = 0, .scan_type = { @@ -659,6 +662,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_CURRENT, .indexed = 1, .channel = 0, + .extend_name = "raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_A, AD7758_CURRENT), .scan_index = 1, .scan_type = { @@ -670,7 +676,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 0, - .extend_name = "apparent", + .extend_name = "apparent_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_A, AD7758_APP_PWR), .scan_index = 2, .scan_type = { @@ -682,7 +690,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 0, - .extend_name = "active", + .extend_name = "active_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_A, AD7758_ACT_PWR), .scan_index = 3, .scan_type = { @@ -694,7 +704,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 0, - .extend_name = "reactive", + .extend_name = "reactive_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_A, AD7758_REACT_PWR), .scan_index = 4, .scan_type = { @@ -706,6 +718,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_VOLTAGE, .indexed = 1, .channel = 1, + .extend_name = "raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = 
BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_B, AD7758_VOLTAGE), .scan_index = 5, .scan_type = { @@ -717,6 +732,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_CURRENT, .indexed = 1, .channel = 1, + .extend_name = "raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_B, AD7758_CURRENT), .scan_index = 6, .scan_type = { @@ -728,7 +746,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 1, - .extend_name = "apparent", + .extend_name = "apparent_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_B, AD7758_APP_PWR), .scan_index = 7, .scan_type = { @@ -740,7 +760,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 1, - .extend_name = "active", + .extend_name = "active_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_B, AD7758_ACT_PWR), .scan_index = 8, .scan_type = { @@ -752,7 +774,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 1, - .extend_name = "reactive", + .extend_name = "reactive_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_B, AD7758_REACT_PWR), .scan_index = 9, .scan_type = { @@ -764,6 +788,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_VOLTAGE, .indexed = 1, .channel = 2, + .extend_name = "raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_C, AD7758_VOLTAGE), .scan_index = 10, .scan_type = { @@ -775,6 +802,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_CURRENT, .indexed = 1, .channel = 2, + .extend_name = "raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_C, AD7758_CURRENT), .scan_index = 11, .scan_type = { @@ -786,7 +816,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 2, - .extend_name = "apparent", + .extend_name = "apparent_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_C, AD7758_APP_PWR), .scan_index = 12, .scan_type = { @@ -798,7 +830,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 2, - .extend_name = "active", + .extend_name = "active_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_C, AD7758_ACT_PWR), .scan_index = 13, .scan_type = { @@ -810,7 +844,9 @@ static const struct iio_chan_spec ade7758_channels[] = { .type = IIO_POWER, .indexed = 1, .channel = 2, - .extend_name = "reactive", + .extend_name = "reactive_raw", + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), .address = AD7758_WT(AD7758_PHASE_C, AD7758_REACT_PWR), .scan_index = 14, .scan_type = { @@ -854,14 +890,13 @@ static int ade7758_probe(struct spi_device *spi) goto error_free_rx; } st->us = spi; + st->ade7758_ring_channels = &ade7758_channels[0]; 
mutex_init(&st->buf_lock); indio_dev->name = spi->dev.driver->name; indio_dev->dev.parent = &spi->dev; indio_dev->info = &ade7758_info; indio_dev->modes = INDIO_DIRECT_MODE; - indio_dev->channels = ade7758_channels; - indio_dev->num_channels = ARRAY_SIZE(ade7758_channels); ret = ade7758_configure_ring(indio_dev); if (ret) diff --git a/drivers/staging/iio/meter/ade7758_ring.c b/drivers/staging/iio/meter/ade7758_ring.c index 6a0ef97e914..b29e2d5d993 100644 --- a/drivers/staging/iio/meter/ade7758_ring.c +++ b/drivers/staging/iio/meter/ade7758_ring.c @@ -89,10 +89,11 @@ static irqreturn_t ade7758_trigger_handler(int irq, void *p) **/ static int ade7758_ring_preenable(struct iio_dev *indio_dev) { + struct ade7758_state *st = iio_priv(indio_dev); unsigned channel; int ret; - if (bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength)) + if (!bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength)) return -EINVAL; ret = iio_sw_buffer_preenable(indio_dev); @@ -103,7 +104,7 @@ static int ade7758_ring_preenable(struct iio_dev *indio_dev) indio_dev->masklength); ade7758_write_waveform_type(&indio_dev->dev, - indio_dev->channels[channel].address); + st->ade7758_ring_channels[channel].address); return 0; } diff --git a/drivers/staging/iio/meter/ade7758_trigger.c b/drivers/staging/iio/meter/ade7758_trigger.c index 8c4f2896cd0..7a94ddd42f5 100644 --- a/drivers/staging/iio/meter/ade7758_trigger.c +++ b/drivers/staging/iio/meter/ade7758_trigger.c @@ -85,7 +85,7 @@ int ade7758_probe_trigger(struct iio_dev *indio_dev) ret = iio_trigger_register(st->trig); /* select default trigger */ - indio_dev->trig = iio_trigger_get(st->trig); + indio_dev->trig = st->trig; if (ret) goto error_free_irq; diff --git a/drivers/staging/rtl8712/rtl871x_recv.c b/drivers/staging/rtl8712/rtl871x_recv.c index 274c359279e..23ec684b60e 100644 --- a/drivers/staging/rtl8712/rtl871x_recv.c +++ b/drivers/staging/rtl8712/rtl871x_recv.c @@ -254,7 +254,7 @@ union recv_frame *r8712_portctrl(struct _adapter *adapter, struct sta_info *psta; struct sta_priv *pstapriv; union recv_frame *prtnframe; - u16 ether_type; + u16 ether_type = 0; pstapriv = &adapter->stapriv; ptr = get_recvframe_data(precv_frame); @@ -263,14 +263,15 @@ union recv_frame *r8712_portctrl(struct _adapter *adapter, psta = r8712_get_stainfo(pstapriv, psta_addr); auth_alg = adapter->securitypriv.AuthAlgrthm; if (auth_alg == 2) { - /* get ether_type */ - ptr = ptr + pfhdr->attrib.hdrlen + LLC_HEADER_SIZE; - memcpy(ðer_type, ptr, 2); - ether_type = ntohs((unsigned short)ether_type); - if ((psta != NULL) && (psta->ieee8021x_blocked)) { /* blocked * only accept EAPOL frame */ + prtnframe = precv_frame; + /*get ether_type */ + ptr = ptr + pfhdr->attrib.hdrlen + + pfhdr->attrib.iv_len + LLC_HEADER_SIZE; + memcpy(ðer_type, ptr, 2); + ether_type = ntohs((unsigned short)ether_type); if (ether_type == 0x888e) prtnframe = precv_frame; else { diff --git a/drivers/staging/serqt_usb2/serqt_usb2.c b/drivers/staging/serqt_usb2/serqt_usb2.c index 380d9d70710..8a6e5ea476e 100644 --- a/drivers/staging/serqt_usb2/serqt_usb2.c +++ b/drivers/staging/serqt_usb2/serqt_usb2.c @@ -725,7 +725,7 @@ static int qt_startup(struct usb_serial *serial) goto startup_error; } - switch (le16_to_cpu(serial->dev->descriptor.idProduct)) { + switch (serial->dev->descriptor.idProduct) { case QUATECH_DSU100: case QUATECH_QSU100: case QUATECH_ESU100A: diff --git a/drivers/staging/speakup/main.c b/drivers/staging/speakup/main.c index e70a48e3b37..6c7b55c2947 100644 --- a/drivers/staging/speakup/main.c +++ 
b/drivers/staging/speakup/main.c @@ -2219,7 +2219,6 @@ static void __exit speakup_exit(void) unregister_keyboard_notifier(&keyboard_notifier_block); unregister_vt_notifier(&vt_notifier_block); speakup_unregister_devsynth(); - speakup_cancel_paste(); del_timer(&cursor_timer); kthread_stop(speakup_task); speakup_task = NULL; diff --git a/drivers/staging/speakup/selection.c b/drivers/staging/speakup/selection.c index b9359753784..f0fb00392d6 100644 --- a/drivers/staging/speakup/selection.c +++ b/drivers/staging/speakup/selection.c @@ -4,9 +4,6 @@ #include <linux/sched.h> #include <linux/device.h> /* for dev_warn */ #include <linux/selection.h> -#include <linux/workqueue.h> -#include <linux/tty.h> -#include <asm/cmpxchg.h> #include "speakup.h" @@ -124,60 +121,31 @@ int speakup_set_selection(struct tty_struct *tty) return 0; } -struct speakup_paste_work { - struct work_struct work; - struct tty_struct *tty; -}; - -static void __speakup_paste_selection(struct work_struct *work) +/* TODO: move to some helper thread, probably. That'd fix having to check for + * in_atomic(). */ +int speakup_paste_selection(struct tty_struct *tty) { - struct speakup_paste_work *spw = - container_of(work, struct speakup_paste_work, work); - struct tty_struct *tty = xchg(&spw->tty, NULL); struct vc_data *vc = (struct vc_data *) tty->driver_data; int pasted = 0, count; - struct tty_ldisc *ld; DECLARE_WAITQUEUE(wait, current); - - ld = tty_ldisc_ref_wait(tty); - - /* FIXME: this is completely unsafe */ add_wait_queue(&vc->paste_wait, &wait); while (sel_buffer && sel_buffer_lth > pasted) { set_current_state(TASK_INTERRUPTIBLE); if (test_bit(TTY_THROTTLED, &tty->flags)) { + if (in_atomic()) + /* if we are in an interrupt handler, abort */ + break; schedule(); continue; } count = sel_buffer_lth - pasted; count = min_t(int, count, tty->receive_room); - ld->ops->receive_buf(tty, sel_buffer + pasted, NULL, count); + tty->ldisc->ops->receive_buf(tty, sel_buffer + pasted, + NULL, count); pasted += count; } remove_wait_queue(&vc->paste_wait, &wait); current->state = TASK_RUNNING; - - tty_ldisc_deref(ld); - tty_kref_put(tty); -} - -static struct speakup_paste_work speakup_paste_work = { - .work = __WORK_INITIALIZER(speakup_paste_work.work, - __speakup_paste_selection) -}; - -int speakup_paste_selection(struct tty_struct *tty) -{ - if (cmpxchg(&speakup_paste_work.tty, NULL, tty) != NULL) - return -EBUSY; - - tty_kref_get(tty); - schedule_work_on(WORK_CPU_UNBOUND, &speakup_paste_work.work); return 0; } -void speakup_cancel_paste(void) -{ - cancel_work_sync(&speakup_paste_work.work); - tty_kref_put(speakup_paste_work.tty); -} diff --git a/drivers/staging/speakup/speakup.h b/drivers/staging/speakup/speakup.h index 74fe72429b2..0126f714821 100644 --- a/drivers/staging/speakup/speakup.h +++ b/drivers/staging/speakup/speakup.h @@ -77,7 +77,6 @@ extern void synth_buffer_clear(void); extern void speakup_clear_selection(void); extern int speakup_set_selection(struct tty_struct *tty); extern int speakup_paste_selection(struct tty_struct *tty); -extern void speakup_cancel_paste(void); extern void speakup_register_devsynth(void); extern void speakup_unregister_devsynth(void); extern void synth_write(const char *buf, size_t count); diff --git a/drivers/staging/tidspbridge/core/dsp-clock.c b/drivers/staging/tidspbridge/core/dsp-clock.c index a1aca4416ca..2f084e181d3 100644 --- a/drivers/staging/tidspbridge/core/dsp-clock.c +++ b/drivers/staging/tidspbridge/core/dsp-clock.c @@ -226,7 +226,7 @@ int dsp_clk_enable(enum dsp_clk_id clk_id) case 
GPT_CLK: status = omap_dm_timer_start(timer[clk_id - 1]); break; -#ifdef CONFIG_SND_OMAP_SOC_MCBSP +#ifdef CONFIG_OMAP_MCBSP case MCBSP_CLK: omap_mcbsp_request(MCBSP_ID(clk_id)); omap2_mcbsp_set_clks_src(MCBSP_ID(clk_id), MCBSP_CLKS_PAD_SRC); @@ -302,7 +302,7 @@ int dsp_clk_disable(enum dsp_clk_id clk_id) case GPT_CLK: status = omap_dm_timer_stop(timer[clk_id - 1]); break; -#ifdef CONFIG_SND_OMAP_SOC_MCBSP +#ifdef CONFIG_OMAP_MCBSP case MCBSP_CLK: omap2_mcbsp_set_clks_src(MCBSP_ID(clk_id), MCBSP_CLKS_PRCM_SRC); omap_mcbsp_free(MCBSP_ID(clk_id)); diff --git a/drivers/staging/vt6655/bssdb.c b/drivers/staging/vt6655/bssdb.c index 3496a77612b..f983915168b 100644 --- a/drivers/staging/vt6655/bssdb.c +++ b/drivers/staging/vt6655/bssdb.c @@ -1026,7 +1026,7 @@ start: pDevice->byERPFlag &= ~(WLAN_SET_ERP_USE_PROTECTION(1)); } - if (pDevice->eCommandState == WLAN_ASSOCIATE_WAIT) { + { pDevice->byReAssocCount++; if ((pDevice->byReAssocCount > 10) && (pDevice->bLinkPass != true)) { //10 sec timeout printk("Re-association timeout!!!\n"); diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c index d170b6f9db7..08b250f01da 100644 --- a/drivers/staging/vt6655/device_main.c +++ b/drivers/staging/vt6655/device_main.c @@ -2434,7 +2434,6 @@ static irqreturn_t device_intr(int irq, void *dev_instance) { int handled = 0; unsigned char byData = 0; int ii = 0; - unsigned long flags; // unsigned char byRSSI; MACvReadISR(pDevice->PortOffset, &pDevice->dwIsr); @@ -2460,8 +2459,7 @@ static irqreturn_t device_intr(int irq, void *dev_instance) { handled = 1; MACvIntDisable(pDevice->PortOffset); - - spin_lock_irqsave(&pDevice->lock, flags); + spin_lock_irq(&pDevice->lock); //Make sure current page is 0 VNSvInPortB(pDevice->PortOffset + MAC_REG_PAGE1SEL, &byOrgPageSel); @@ -2702,8 +2700,7 @@ static irqreturn_t device_intr(int irq, void *dev_instance) { MACvSelectPage1(pDevice->PortOffset); } - spin_unlock_irqrestore(&pDevice->lock, flags); - + spin_unlock_irq(&pDevice->lock); MACvIntEnable(pDevice->PortOffset, IMR_MASK_VALUE); return IRQ_RETVAL(handled); diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c index 651b5768862..5b07fd156bd 100644 --- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -460,7 +460,6 @@ int iscsit_del_np(struct iscsi_np *np) spin_lock_bh(&np->np_thread_lock); np->np_exports--; if (np->np_exports) { - np->enabled = true; spin_unlock_bh(&np->np_thread_lock); return 0; } @@ -1313,7 +1312,7 @@ iscsit_check_dataout_hdr(struct iscsi_conn *conn, unsigned char *buf, if (cmd->data_direction != DMA_TO_DEVICE) { pr_err("Command ITT: 0x%08x received DataOUT for a" " NON-WRITE command.\n", cmd->init_task_tag); - return iscsit_dump_data_payload(conn, payload_length, 1); + return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR, buf); } se_cmd = &cmd->se_cmd; iscsit_mod_dataout_timer(cmd); @@ -2455,7 +2454,6 @@ static void iscsit_build_conn_drop_async_message(struct iscsi_conn *conn) { struct iscsi_cmd *cmd; struct iscsi_conn *conn_p; - bool found = false; /* * Only send a Asynchronous Message on connections whos network @@ -2464,12 +2462,11 @@ static void iscsit_build_conn_drop_async_message(struct iscsi_conn *conn) list_for_each_entry(conn_p, &conn->sess->sess_conn_list, conn_list) { if (conn_p->conn_state == TARG_CONN_STATE_LOGGED_IN) { iscsit_inc_conn_usage_count(conn_p); - found = true; break; } } - if (!found) + if (!conn_p) return; cmd = iscsit_allocate_cmd(conn_p, GFP_ATOMIC); @@ -3656,7 +3653,7 @@ 
iscsit_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd, int state break; case ISTATE_REMOVE: spin_lock_bh(&conn->cmd_lock); - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); iscsit_free_cmd(cmd, false); @@ -4102,7 +4099,7 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn) spin_lock_bh(&conn->cmd_lock); list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_conn_node) { - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); iscsit_increment_maxcmdsn(cmd, sess); @@ -4147,9 +4144,7 @@ int iscsit_close_connection( iscsit_stop_timers_for_cmds(conn); iscsit_stop_nopin_response_timer(conn); iscsit_stop_nopin_timer(conn); - - if (conn->conn_transport->iscsit_wait_conn) - conn->conn_transport->iscsit_wait_conn(conn); + iscsit_free_queue_reqs_for_conn(conn); /* * During Connection recovery drop unacknowledged out of order @@ -4167,7 +4162,6 @@ int iscsit_close_connection( iscsit_clear_ooo_cmdsns_for_conn(conn); iscsit_release_commands_from_conn(conn); } - iscsit_free_queue_reqs_for_conn(conn); /* * Handle decrementing session or connection usage count if @@ -4453,7 +4447,6 @@ static void iscsit_logout_post_handler_diffcid( { struct iscsi_conn *l_conn; struct iscsi_session *sess = conn->sess; - bool conn_found = false; if (!sess) return; @@ -4462,13 +4455,12 @@ static void iscsit_logout_post_handler_diffcid( list_for_each_entry(l_conn, &sess->sess_conn_list, conn_list) { if (l_conn->cid == cid) { iscsit_inc_conn_usage_count(l_conn); - conn_found = true; break; } } spin_unlock_bh(&sess->conn_lock); - if (!conn_found) + if (!l_conn) return; if (l_conn->sock) diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c index 3c9a8dfd1c2..130a1e4f96a 100644 --- a/drivers/target/iscsi/iscsi_target_auth.c +++ b/drivers/target/iscsi/iscsi_target_auth.c @@ -316,16 +316,6 @@ static int chap_server_compute_md5( goto out; } /* - * During mutual authentication, the CHAP_C generated by the - * initiator must not match the original CHAP_C generated by - * the target. - */ - if (!memcmp(challenge_binhex, chap->challenge, CHAP_CHALLENGE_LENGTH)) { - pr_err("initiator CHAP_C matches target CHAP_C, failing" - " login attempt\n"); - goto out; - } - /* * Generate CHAP_N and CHAP_R for mutual authentication. 
*/ tfm = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC); diff --git a/drivers/target/iscsi/iscsi_target_core.h b/drivers/target/iscsi/iscsi_target_core.h index e117870eb44..8907dcdc0db 100644 --- a/drivers/target/iscsi/iscsi_target_core.h +++ b/drivers/target/iscsi/iscsi_target_core.h @@ -760,7 +760,6 @@ struct iscsi_np { int np_ip_proto; int np_sock_type; enum np_thread_state_table np_thread_state; - bool enabled; enum iscsi_timer_flags_table np_login_timer_flags; u32 np_exports; enum np_flags_table np_flags; diff --git a/drivers/target/iscsi/iscsi_target_erl2.c b/drivers/target/iscsi/iscsi_target_erl2.c index 0d2d013076c..45a5afd5ea1 100644 --- a/drivers/target/iscsi/iscsi_target_erl2.c +++ b/drivers/target/iscsi/iscsi_target_erl2.c @@ -140,7 +140,7 @@ void iscsit_free_connection_recovery_entires(struct iscsi_session *sess) list_for_each_entry_safe(cmd, cmd_tmp, &cr->conn_recovery_cmd_list, i_conn_node) { - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); cmd->conn = NULL; spin_unlock(&cr->conn_recovery_cmd_lock); iscsit_free_cmd(cmd, true); @@ -162,7 +162,7 @@ void iscsit_free_connection_recovery_entires(struct iscsi_session *sess) list_for_each_entry_safe(cmd, cmd_tmp, &cr->conn_recovery_cmd_list, i_conn_node) { - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); cmd->conn = NULL; spin_unlock(&cr->conn_recovery_cmd_lock); iscsit_free_cmd(cmd, true); @@ -218,7 +218,7 @@ int iscsit_remove_cmd_from_connection_recovery( } cr = cmd->cr; - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); return --cr->cmd_count; } @@ -299,7 +299,7 @@ int iscsit_discard_unacknowledged_ooo_cmdsns_for_conn(struct iscsi_conn *conn) if (!(cmd->cmd_flags & ICF_OOO_CMDSN)) continue; - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); iscsit_free_cmd(cmd, true); @@ -337,7 +337,7 @@ int iscsit_prepare_cmds_for_realligance(struct iscsi_conn *conn) /* * Only perform connection recovery on ISCSI_OP_SCSI_CMD or * ISCSI_OP_NOOP_OUT opcodes. For all other opcodes call - * list_del_init(&cmd->i_conn_node); to release the command to the + * list_del(&cmd->i_conn_node); to release the command to the * session pool and remove it from the connection's list. 
* * Also stop the DataOUT timer, which will be restarted after @@ -353,7 +353,7 @@ int iscsit_prepare_cmds_for_realligance(struct iscsi_conn *conn) " CID: %hu\n", cmd->iscsi_opcode, cmd->init_task_tag, cmd->cmd_sn, conn->cid); - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); iscsit_free_cmd(cmd, true); spin_lock_bh(&conn->cmd_lock); @@ -373,7 +373,7 @@ int iscsit_prepare_cmds_for_realligance(struct iscsi_conn *conn) */ if (!(cmd->cmd_flags & ICF_OOO_CMDSN) && !cmd->immediate_cmd && iscsi_sna_gte(cmd->cmd_sn, conn->sess->exp_cmd_sn)) { - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); iscsit_free_cmd(cmd, true); spin_lock_bh(&conn->cmd_lock); @@ -395,7 +395,7 @@ int iscsit_prepare_cmds_for_realligance(struct iscsi_conn *conn) cmd->sess = conn->sess; - list_del_init(&cmd->i_conn_node); + list_del(&cmd->i_conn_node); spin_unlock_bh(&conn->cmd_lock); iscsit_free_all_datain_reqs(cmd); diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c index e14e105acff..bc788c52b6c 100644 --- a/drivers/target/iscsi/iscsi_target_login.c +++ b/drivers/target/iscsi/iscsi_target_login.c @@ -250,28 +250,6 @@ static void iscsi_login_set_conn_values( mutex_unlock(&auth_id_lock); } -static __printf(2, 3) int iscsi_change_param_sprintf( - struct iscsi_conn *conn, - const char *fmt, ...) -{ - va_list args; - unsigned char buf[64]; - - memset(buf, 0, sizeof buf); - - va_start(args, fmt); - vsnprintf(buf, sizeof buf, fmt, args); - va_end(args); - - if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); - return -1; - } - - return 0; -} - /* * This is the leading connection of a new session, * or session reinstatement. @@ -361,6 +339,7 @@ static int iscsi_login_zero_tsih_s2( { struct iscsi_node_attrib *na; struct iscsi_session *sess = conn->sess; + unsigned char buf[32]; bool iser = false; sess->tpg = conn->tpg; @@ -401,16 +380,26 @@ static int iscsi_login_zero_tsih_s2( * * In our case, we have already located the struct iscsi_tiqn at this point. */ - if (iscsi_change_param_sprintf(conn, "TargetPortalGroupTag=%hu", sess->tpg->tpgt)) + memset(buf, 0, 32); + sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt); + if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, + ISCSI_LOGIN_STATUS_NO_RESOURCES); return -1; + } /* * Workaround for Initiators that have broken connection recovery logic. * * "We would really like to get rid of this." 
Linux-iSCSI.org team */ - if (iscsi_change_param_sprintf(conn, "ErrorRecoveryLevel=%d", na->default_erl)) + memset(buf, 0, 32); + sprintf(buf, "ErrorRecoveryLevel=%d", na->default_erl); + if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, + ISCSI_LOGIN_STATUS_NO_RESOURCES); return -1; + } if (iscsi_login_disable_FIM_keys(conn->param_list, conn) < 0) return -1; @@ -422,9 +411,12 @@ static int iscsi_login_zero_tsih_s2( unsigned long mrdsl, off; int rc; - if (iscsi_change_param_sprintf(conn, "RDMAExtensions=Yes")) + sprintf(buf, "RDMAExtensions=Yes"); + if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, + ISCSI_LOGIN_STATUS_NO_RESOURCES); return -1; - + } /* * Make MaxRecvDataSegmentLength PAGE_SIZE aligned for * Immediate Data + Unsolicitied Data-OUT if necessary.. @@ -454,8 +446,12 @@ static int iscsi_login_zero_tsih_s2( pr_warn("Aligning ISER MaxRecvDataSegmentLength: %lu down" " to PAGE_SIZE\n", mrdsl); - if (iscsi_change_param_sprintf(conn, "MaxRecvDataSegmentLength=%lu\n", mrdsl)) + sprintf(buf, "MaxRecvDataSegmentLength=%lu\n", mrdsl); + if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, + ISCSI_LOGIN_STATUS_NO_RESOURCES); return -1; + } } return 0; @@ -597,8 +593,13 @@ static int iscsi_login_non_zero_tsih_s2( * * In our case, we have already located the struct iscsi_tiqn at this point. */ - if (iscsi_change_param_sprintf(conn, "TargetPortalGroupTag=%hu", sess->tpg->tpgt)) + memset(buf, 0, 32); + sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt); + if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, + ISCSI_LOGIN_STATUS_NO_RESOURCES); return -1; + } return iscsi_login_disable_FIM_keys(conn->param_list, conn); } @@ -983,7 +984,6 @@ int iscsi_target_setup_login_socket( } np->np_transport = t; - np->enabled = true; return 0; } diff --git a/drivers/target/iscsi/iscsi_target_parameters.c b/drivers/target/iscsi/iscsi_target_parameters.c index 30be6c9bdbc..e38222191a3 100644 --- a/drivers/target/iscsi/iscsi_target_parameters.c +++ b/drivers/target/iscsi/iscsi_target_parameters.c @@ -603,7 +603,7 @@ int iscsi_copy_param_list( param_list = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL); if (!param_list) { pr_err("Unable to allocate memory for struct iscsi_param_list.\n"); - return -1; + goto err_out; } INIT_LIST_HEAD(¶m_list->param_list); INIT_LIST_HEAD(¶m_list->extra_response_list); diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c index 75a4e83842c..439260b7d87 100644 --- a/drivers/target/iscsi/iscsi_target_tpg.c +++ b/drivers/target/iscsi/iscsi_target_tpg.c @@ -138,7 +138,7 @@ struct iscsi_portal_group *iscsit_get_tpg_from_np( list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) { spin_lock(&tpg->tpg_state_lock); - if (tpg->tpg_state != TPG_STATE_ACTIVE) { + if (tpg->tpg_state == TPG_STATE_FREE) { spin_unlock(&tpg->tpg_state_lock); continue; } @@ -175,16 +175,13 @@ void iscsit_put_tpg(struct iscsi_portal_group *tpg) static void iscsit_clear_tpg_np_login_thread( struct iscsi_tpg_np *tpg_np, - struct iscsi_portal_group *tpg, - bool shutdown) + struct iscsi_portal_group *tpg) { if (!tpg_np->tpg_np) { pr_err("struct iscsi_tpg_np->tpg_np is NULL!\n"); return; } - if (shutdown) - tpg_np->tpg_np->enabled = false; iscsit_reset_np_thread(tpg_np->tpg_np, tpg_np, 
tpg); } @@ -200,7 +197,7 @@ void iscsit_clear_tpg_np_login_threads( continue; } spin_unlock(&tpg->tpg_np_lock); - iscsit_clear_tpg_np_login_thread(tpg_np, tpg, false); + iscsit_clear_tpg_np_login_thread(tpg_np, tpg); spin_lock(&tpg->tpg_np_lock); } spin_unlock(&tpg->tpg_np_lock); @@ -523,7 +520,7 @@ static int iscsit_tpg_release_np( struct iscsi_portal_group *tpg, struct iscsi_np *np) { - iscsit_clear_tpg_np_login_thread(tpg_np, tpg, true); + iscsit_clear_tpg_np_login_thread(tpg_np, tpg); pr_debug("CORE[%s] - Removed Network Portal: %s:%hu,%hu on %s\n", tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt, diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c index c9790f6fdd8..77dad2474c8 100644 --- a/drivers/target/iscsi/iscsi_target_util.c +++ b/drivers/target/iscsi/iscsi_target_util.c @@ -1288,8 +1288,6 @@ int iscsit_tx_login_rsp(struct iscsi_conn *conn, u8 status_class, u8 status_deta login->login_failed = 1; iscsit_collect_login_stats(conn, status_class, status_detail); - memset(&login->rsp[0], 0, ISCSI_HDR_LEN); - hdr = (struct iscsi_login_rsp *)&login->rsp[0]; hdr->opcode = ISCSI_OP_LOGIN_RSP; hdr->status_class = status_class; diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c index df58a67f81e..f608fbc14a2 100644 --- a/drivers/target/target_core_alua.c +++ b/drivers/target/target_core_alua.c @@ -409,16 +409,7 @@ static inline int core_alua_state_standby( case REPORT_LUNS: case RECEIVE_DIAGNOSTIC: case SEND_DIAGNOSTIC: - case READ_CAPACITY: return 0; - case SERVICE_ACTION_IN: - switch (cdb[1] & 0x1f) { - case SAI_READ_CAPACITY_16: - return 0; - default: - *alua_ascq = ASCQ_04H_ALUA_TG_PT_STANDBY; - return 1; - } case MAINTENANCE_IN: switch (cdb[1] & 0x1f) { case MI_REPORT_TARGET_PGS: diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c index 8cda4080b59..4a8bd36d395 100644 --- a/drivers/target/target_core_configfs.c +++ b/drivers/target/target_core_configfs.c @@ -2034,11 +2034,6 @@ static ssize_t target_core_alua_tg_pt_gp_store_attr_alua_access_state( " tg_pt_gp ID: %hu\n", tg_pt_gp->tg_pt_gp_valid_id); return -EINVAL; } - if (!(dev->dev_flags & DF_CONFIGURED)) { - pr_err("Unable to set alua_access_state while device is" - " not configured\n"); - return -ENODEV; - } ret = strict_strtoul(page, 0, &tmp); if (ret < 0) { diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c index 2be407e22eb..660b109487a 100644 --- a/drivers/target/target_core_device.c +++ b/drivers/target/target_core_device.c @@ -614,7 +614,6 @@ void core_dev_unexport( dev->export_count--; spin_unlock(&hba->device_lock); - lun->lun_sep = NULL; lun->lun_se_dev = NULL; } @@ -797,10 +796,10 @@ int se_dev_set_emulate_write_cache(struct se_device *dev, int flag) pr_err("emulate_write_cache not supported for pSCSI\n"); return -EINVAL; } - if (flag && - dev->transport->get_write_cache) { - pr_err("emulate_write_cache not supported for this device\n"); - return -EINVAL; + if (dev->transport->get_write_cache) { + pr_warn("emulate_write_cache cannot be changed when underlying" + " HW reports WriteCacheEnabled, ignoring request\n"); + return 0; } dev->dev_attrib.emulate_write_cache = flag; @@ -1293,8 +1292,7 @@ int core_dev_add_initiator_node_lun_acl( * Check to see if there are any existing persistent reservation APTPL * pre-registrations that need to be enabled for this LUN ACL.. 
*/ - core_scsi3_check_aptpl_registration(lun->lun_se_dev, tpg, lun, nacl, - lacl->mapped_lun); + core_scsi3_check_aptpl_registration(lun->lun_se_dev, tpg, lun, lacl); return 0; } diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c index 27ec6e4d1c7..04a74938bb4 100644 --- a/drivers/target/target_core_pr.c +++ b/drivers/target/target_core_pr.c @@ -945,10 +945,10 @@ int core_scsi3_check_aptpl_registration( struct se_device *dev, struct se_portal_group *tpg, struct se_lun *lun, - struct se_node_acl *nacl, - u32 mapped_lun) + struct se_lun_acl *lun_acl) { - struct se_dev_entry *deve = nacl->device_list[mapped_lun]; + struct se_node_acl *nacl = lun_acl->se_lun_nacl; + struct se_dev_entry *deve = nacl->device_list[lun_acl->mapped_lun]; if (dev->dev_reservation_flags & DRF_SPC2_RESERVATIONS) return 0; diff --git a/drivers/target/target_core_pr.h b/drivers/target/target_core_pr.h index ea9220de1df..b4a004247ab 100644 --- a/drivers/target/target_core_pr.h +++ b/drivers/target/target_core_pr.h @@ -55,7 +55,7 @@ extern int core_scsi3_alloc_aptpl_registration( unsigned char *, u16, u32, int, int, u8); extern int core_scsi3_check_aptpl_registration(struct se_device *, struct se_portal_group *, struct se_lun *, - struct se_node_acl *, u32); + struct se_lun_acl *); extern void core_scsi3_free_pr_reg_from_nacl(struct se_device *, struct se_node_acl *); extern void core_scsi3_free_all_registrations(struct se_device *); diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c index 5c3b6778c22..0921a64b555 100644 --- a/drivers/target/target_core_rd.c +++ b/drivers/target/target_core_rd.c @@ -174,7 +174,7 @@ static int rd_build_device_space(struct rd_dev *rd_dev) - 1; for (j = 0; j < sg_per_table; j++) { - pg = alloc_pages(GFP_KERNEL | __GFP_ZERO, 0); + pg = alloc_pages(GFP_KERNEL, 0); if (!pg) { pr_err("Unable to allocate scatterlist" " pages for struct rd_dev_sg_table\n"); diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c index 0ef75fb0ecb..bbc5b0ee2bd 100644 --- a/drivers/target/target_core_sbc.c +++ b/drivers/target/target_core_sbc.c @@ -63,7 +63,7 @@ sbc_emulate_readcapacity(struct se_cmd *cmd) transport_kunmap_data_sg(cmd); } - target_complete_cmd_with_length(cmd, GOOD, 8); + target_complete_cmd(cmd, GOOD); return 0; } @@ -101,7 +101,7 @@ sbc_emulate_readcapacity_16(struct se_cmd *cmd) transport_kunmap_data_sg(cmd); } - target_complete_cmd_with_length(cmd, GOOD, 32); + target_complete_cmd(cmd, GOOD); return 0; } diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c index 34254b2ec46..9fabbf7214c 100644 --- a/drivers/target/target_core_spc.c +++ b/drivers/target/target_core_spc.c @@ -628,7 +628,6 @@ spc_emulate_inquiry(struct se_cmd *cmd) unsigned char buf[SE_INQUIRY_BUF]; sense_reason_t ret; int p; - int len = 0; memset(buf, 0, SE_INQUIRY_BUF); @@ -646,7 +645,6 @@ spc_emulate_inquiry(struct se_cmd *cmd) } ret = spc_emulate_inquiry_std(cmd, buf); - len = buf[4] + 5; goto out; } @@ -654,7 +652,6 @@ spc_emulate_inquiry(struct se_cmd *cmd) if (cdb[2] == evpd_handlers[p].page) { buf[1] = cdb[2]; ret = evpd_handlers[p].emulate(cmd, buf); - len = get_unaligned_be16(&buf[2]) + 4; goto out; } } @@ -670,7 +667,7 @@ out: } if (!ret) - target_complete_cmd_with_length(cmd, GOOD, len); + target_complete_cmd(cmd, GOOD); return ret; } @@ -988,7 +985,7 @@ set_length: transport_kunmap_data_sg(cmd); } - target_complete_cmd_with_length(cmd, GOOD, length); + target_complete_cmd(cmd, GOOD); return 0; } @@ -1165,7 
+1162,7 @@ done: buf[3] = (lun_count & 0xff); transport_kunmap_data_sg(cmd); - target_complete_cmd_with_length(cmd, GOOD, 8 + lun_count * 8); + target_complete_cmd(cmd, GOOD); return 0; } EXPORT_SYMBOL(spc_emulate_report_luns); diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c index 8572207e3d4..aac9d2727e3 100644 --- a/drivers/target/target_core_tpg.c +++ b/drivers/target/target_core_tpg.c @@ -40,7 +40,6 @@ #include <target/target_core_fabric.h> #include "target_core_internal.h" -#include "target_core_pr.h" extern struct se_device *g_lun0_dev; @@ -166,13 +165,6 @@ void core_tpg_add_node_to_devs( core_enable_device_list_for_node(lun, NULL, lun->unpacked_lun, lun_access, acl, tpg); - /* - * Check to see if there are any existing persistent reservation - * APTPL pre-registrations that need to be enabled for this dynamic - * LUN ACL now.. - */ - core_scsi3_check_aptpl_registration(dev, tpg, lun, acl, - lun->unpacked_lun); spin_lock(&tpg->tpg_lun_lock); } spin_unlock(&tpg->tpg_lun_lock); diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c index 12342695ed7..21e315874a5 100644 --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -488,7 +488,7 @@ static int transport_cmd_check_stop(struct se_cmd *cmd, bool remove_from_lists) spin_unlock_irqrestore(&cmd->t_state_lock, flags); - complete_all(&cmd->t_transport_stop_comp); + complete(&cmd->t_transport_stop_comp); return 1; } @@ -617,7 +617,7 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status) if (cmd->transport_state & CMD_T_ABORTED && cmd->transport_state & CMD_T_STOP) { spin_unlock_irqrestore(&cmd->t_state_lock, flags); - complete_all(&cmd->t_transport_stop_comp); + complete(&cmd->t_transport_stop_comp); return; } else if (cmd->transport_state & CMD_T_FAILED) { INIT_WORK(&cmd->work, target_complete_failure_work); @@ -633,23 +633,6 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status) } EXPORT_SYMBOL(target_complete_cmd); -void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length) -{ - if (scsi_status == SAM_STAT_GOOD && length < cmd->data_length) { - if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT) { - cmd->residual_count += cmd->data_length - length; - } else { - cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT; - cmd->residual_count = cmd->data_length - length; - } - - cmd->data_length = length; - } - - target_complete_cmd(cmd, scsi_status); -} -EXPORT_SYMBOL(target_complete_cmd_with_length); - static void target_add_to_state_list(struct se_cmd *cmd) { struct se_device *dev = cmd->se_dev; @@ -1705,7 +1688,7 @@ void target_execute_cmd(struct se_cmd *cmd) cmd->se_tfo->get_task_tag(cmd)); spin_unlock_irq(&cmd->t_state_lock); - complete_all(&cmd->t_transport_stop_comp); + complete(&cmd->t_transport_stop_comp); return; } @@ -1788,7 +1771,8 @@ static void transport_complete_qf(struct se_cmd *cmd) if (cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) { ret = cmd->se_tfo->queue_status(cmd); - goto out; + if (ret) + goto out; } switch (cmd->data_direction) { @@ -2893,12 +2877,6 @@ static void target_tmr_work(struct work_struct *work) int transport_generic_handle_tmr( struct se_cmd *cmd) { - unsigned long flags; - - spin_lock_irqsave(&cmd->t_state_lock, flags); - cmd->transport_state |= CMD_T_ACTIVE; - spin_unlock_irqrestore(&cmd->t_state_lock, flags); - INIT_WORK(&cmd->work, target_tmr_work); queue_work(cmd->se_dev->tmr_wq, &cmd->work); return 0; diff --git a/drivers/target/tcm_fc/tfc_sess.c 
b/drivers/target/tcm_fc/tfc_sess.c index 639fdb395fb..4859505ae2e 100644 --- a/drivers/target/tcm_fc/tfc_sess.c +++ b/drivers/target/tcm_fc/tfc_sess.c @@ -68,7 +68,6 @@ static struct ft_tport *ft_tport_create(struct fc_lport *lport) if (tport) { tport->tpg = tpg; - tpg->tport = tport; return tport; } diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c index f179033eaa3..eb255e807c0 100644 --- a/drivers/tty/hvc/hvc_console.c +++ b/drivers/tty/hvc/hvc_console.c @@ -31,7 +31,6 @@ #include <linux/list.h> #include <linux/module.h> #include <linux/major.h> -#include <linux/atomic.h> #include <linux/sysrq.h> #include <linux/tty.h> #include <linux/tty_flip.h> @@ -71,9 +70,6 @@ static struct task_struct *hvc_task; /* Picks up late kicks after list walk but before schedule() */ static int hvc_kicked; -/* hvc_init is triggered from hvc_alloc, i.e. only when actually used */ -static atomic_t hvc_needs_init __read_mostly = ATOMIC_INIT(-1); - static int hvc_init(void); #ifdef CONFIG_MAGIC_SYSRQ @@ -190,7 +186,7 @@ static struct tty_driver *hvc_console_device(struct console *c, int *index) return hvc_driver; } -static int hvc_console_setup(struct console *co, char *options) +static int __init hvc_console_setup(struct console *co, char *options) { if (co->index < 0 || co->index >= MAX_NR_HVC_CONSOLES) return -ENODEV; @@ -846,7 +842,7 @@ struct hvc_struct *hvc_alloc(uint32_t vtermno, int data, int i; /* We wait until a driver actually comes along */ - if (atomic_inc_not_zero(&hvc_needs_init)) { + if (!hvc_driver) { int err = hvc_init(); if (err) return ERR_PTR(err); diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c index 6cfe4019abc..6c7fe90ad72 100644 --- a/drivers/tty/n_tty.c +++ b/drivers/tty/n_tty.c @@ -2066,12 +2066,8 @@ static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, if (tty->ops->flush_chars) tty->ops->flush_chars(tty); } else { - struct n_tty_data *ldata = tty->disc_data; - while (nr > 0) { - mutex_lock(&ldata->output_lock); c = tty->ops->write(tty, b, nr); - mutex_unlock(&ldata->output_lock); if (c < 0) { retval = c; goto break_out; diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c index c98e069d666..264054fe8a6 100644 --- a/drivers/tty/serial/8250/8250_core.c +++ b/drivers/tty/serial/8250/8250_core.c @@ -557,7 +557,7 @@ static void serial8250_set_sleep(struct uart_8250_port *p, int sleep) */ if ((p->port.type == PORT_XR17V35X) || (p->port.type == PORT_XR17D15X)) { - serial_out(p, UART_EXAR_SLEEP, sleep ? 
0xff : 0); + serial_out(p, UART_EXAR_SLEEP, 0xff); return; } @@ -1524,7 +1524,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) status = serial8250_rx_chars(up, status); } serial8250_modem_status(up); - if (!up->dma && (status & UART_LSR_THRE)) + if (status & UART_LSR_THRE) serial8250_tx_chars(up); spin_unlock_irqrestore(&port->lock, flags); diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c index 148ffe4c232..7046769608d 100644 --- a/drivers/tty/serial/8250/8250_dma.c +++ b/drivers/tty/serial/8250/8250_dma.c @@ -20,15 +20,12 @@ static void __dma_tx_complete(void *param) struct uart_8250_port *p = param; struct uart_8250_dma *dma = p->dma; struct circ_buf *xmit = &p->port.state->xmit; - unsigned long flags; + + dma->tx_running = 0; dma_sync_single_for_cpu(dma->txchan->device->dev, dma->tx_addr, UART_XMIT_SIZE, DMA_TO_DEVICE); - spin_lock_irqsave(&p->port.lock, flags); - - dma->tx_running = 0; - xmit->tail += dma->tx_size; xmit->tail &= UART_XMIT_SIZE - 1; p->port.icount.tx += dma->tx_size; @@ -38,8 +35,6 @@ static void __dma_tx_complete(void *param) if (!uart_circ_empty(xmit) && !uart_tx_stopped(&p->port)) serial8250_tx_dma(p); - - spin_unlock_irqrestore(&p->port.lock, flags); } static void __dma_rx_complete(void *param) @@ -192,28 +187,21 @@ int serial8250_request_dma(struct uart_8250_port *p) dma->rx_buf = dma_alloc_coherent(dma->rxchan->device->dev, dma->rx_size, &dma->rx_addr, GFP_KERNEL); - if (!dma->rx_buf) - goto err; + if (!dma->rx_buf) { + dma_release_channel(dma->rxchan); + dma_release_channel(dma->txchan); + return -ENOMEM; + } /* TX buffer */ dma->tx_addr = dma_map_single(dma->txchan->device->dev, p->port.state->xmit.buf, UART_XMIT_SIZE, DMA_TO_DEVICE); - if (dma_mapping_error(dma->txchan->device->dev, dma->tx_addr)) { - dma_free_coherent(dma->rxchan->device->dev, dma->rx_size, - dma->rx_buf, dma->rx_addr); - goto err; - } dev_dbg_ratelimited(p->port.dev, "got both dma channels\n"); return 0; -err: - dma_release_channel(dma->rxchan); - dma_release_channel(dma->txchan); - - return -ENOMEM; } EXPORT_SYMBOL_GPL(serial8250_request_dma); diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c index 345b5ddcb1a..5d880917850 100644 --- a/drivers/tty/serial/8250/8250_dw.c +++ b/drivers/tty/serial/8250/8250_dw.c @@ -54,100 +54,58 @@ struct dw8250_data { - int last_mcr; + int last_lcr; int line; struct clk *clk; }; -static inline int dw8250_modify_msr(struct uart_port *p, int offset, int value) -{ - struct dw8250_data *d = p->private_data; - - /* If reading MSR, report CTS asserted when auto-CTS/RTS enabled */ - if (offset == UART_MSR && d->last_mcr & UART_MCR_AFE) { - value |= UART_MSR_CTS; - value &= ~UART_MSR_DCTS; - } - - return value; -} - -static void dw8250_force_idle(struct uart_port *p) -{ - serial8250_clear_and_reinit_fifos(container_of - (p, struct uart_8250_port, port)); - (void)p->serial_in(p, UART_RX); -} - static void dw8250_serial_out(struct uart_port *p, int offset, int value) { struct dw8250_data *d = p->private_data; - if (offset == UART_MCR) - d->last_mcr = value; - - writeb(value, p->membase + (offset << p->regshift)); + if (offset == UART_LCR) + d->last_lcr = value; - /* Make sure LCR write wasn't ignored */ - if (offset == UART_LCR) { - int tries = 1000; - while (tries--) { - unsigned int lcr = p->serial_in(p, UART_LCR); - if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) - return; - dw8250_force_idle(p); - writeb(value, p->membase + (UART_LCR << p->regshift)); - } - 
dev_err(p->dev, "Couldn't set LCR to %d\n", value); - } + offset <<= p->regshift; + writeb(value, p->membase + offset); } static unsigned int dw8250_serial_in(struct uart_port *p, int offset) { - unsigned int value = readb(p->membase + (offset << p->regshift)); + offset <<= p->regshift; - return dw8250_modify_msr(p, offset, value); + return readb(p->membase + offset); } static void dw8250_serial_out32(struct uart_port *p, int offset, int value) { struct dw8250_data *d = p->private_data; - if (offset == UART_MCR) - d->last_mcr = value; + if (offset == UART_LCR) + d->last_lcr = value; - writel(value, p->membase + (offset << p->regshift)); - - /* Make sure LCR write wasn't ignored */ - if (offset == UART_LCR) { - int tries = 1000; - while (tries--) { - unsigned int lcr = p->serial_in(p, UART_LCR); - if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) - return; - dw8250_force_idle(p); - writel(value, p->membase + (UART_LCR << p->regshift)); - } - dev_err(p->dev, "Couldn't set LCR to %d\n", value); - } + offset <<= p->regshift; + writel(value, p->membase + offset); } static unsigned int dw8250_serial_in32(struct uart_port *p, int offset) { - unsigned int value = readl(p->membase + (offset << p->regshift)); + offset <<= p->regshift; - return dw8250_modify_msr(p, offset, value); + return readl(p->membase + offset); } static int dw8250_handle_irq(struct uart_port *p) { + struct dw8250_data *d = p->private_data; unsigned int iir = p->serial_in(p, UART_IIR); if (serial8250_handle_irq(p, iir)) { return 1; } else if ((iir & UART_IIR_BUSY) == UART_IIR_BUSY) { - /* Clear the USR */ + /* Clear the USR and write the LCR again. */ (void)p->serial_in(p, DW_UART_USR); + p->serial_out(p, UART_LCR, d->last_lcr); return 1; } diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c index 293ac210546..bec45eb7cf6 100644 --- a/drivers/tty/serial/serial_core.c +++ b/drivers/tty/serial/serial_core.c @@ -244,9 +244,6 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state) /* * Turn off DTR and RTS early. */ - if (uart_console(uport) && tty) - uport->cons->cflag = tty->termios.c_cflag; - if (!tty || (tty->termios.c_cflag & HUPCL)) uart_clear_mctrl(uport, TIOCM_DTR | TIOCM_RTS); @@ -362,7 +359,7 @@ uart_get_baud_rate(struct uart_port *port, struct ktermios *termios, * The spd_hi, spd_vhi, spd_shi, spd_warp kludge... * Die! Die! Die! 
*/ - if (try == 0 && baud == 38400) + if (baud == 38400) baud = altbaud; /* diff --git a/drivers/tty/serial/sunsab.c b/drivers/tty/serial/sunsab.c index aa53fee1df6..a422c8b55a4 100644 --- a/drivers/tty/serial/sunsab.c +++ b/drivers/tty/serial/sunsab.c @@ -157,15 +157,6 @@ receive_chars(struct uart_sunsab_port *up, (up->port.line == up->port.cons->index)) saw_console_brk = 1; - if (count == 0) { - if (unlikely(stat->sreg.isr1 & SAB82532_ISR1_BRK)) { - stat->sreg.isr0 &= ~(SAB82532_ISR0_PERR | - SAB82532_ISR0_FERR); - up->port.icount.brk++; - uart_handle_break(&up->port); - } - } - for (i = 0; i < count; i++) { unsigned char ch = buf[i], flag; diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c index d35afccdb6c..59d26ef538d 100644 --- a/drivers/tty/tty_io.c +++ b/drivers/tty/tty_io.c @@ -1267,13 +1267,12 @@ static void pty_line_name(struct tty_driver *driver, int index, char *p) * * Locking: None */ -static ssize_t tty_line_name(struct tty_driver *driver, int index, char *p) +static void tty_line_name(struct tty_driver *driver, int index, char *p) { if (driver->flags & TTY_DRIVER_UNNUMBERED_NODE) - return sprintf(p, "%s", driver->name); + strcpy(p, driver->name); else - return sprintf(p, "%s%d", driver->name, - index + driver->name_base); + sprintf(p, "%s%d", driver->name, index + driver->name_base); } /** @@ -1698,7 +1697,6 @@ int tty_release(struct inode *inode, struct file *filp) int pty_master, tty_closing, o_tty_closing, do_sleep; int idx; char buf[64]; - long timeout = 0; if (tty_paranoia_check(tty, inode, __func__)) return 0; @@ -1783,11 +1781,7 @@ int tty_release(struct inode *inode, struct file *filp) __func__, tty_name(tty, buf)); tty_unlock_pair(tty, o_tty); mutex_unlock(&tty_mutex); - schedule_timeout_killable(timeout); - if (timeout < 120 * HZ) - timeout = 2 * timeout + 1; - else - timeout = MAX_SCHEDULE_TIMEOUT; + schedule(); } /* @@ -3544,19 +3538,9 @@ static ssize_t show_cons_active(struct device *dev, if (i >= ARRAY_SIZE(cs)) break; } - while (i--) { - int index = cs[i]->index; - struct tty_driver *drv = cs[i]->device(cs[i], &index); - - /* don't resolve tty0 as some programs depend on it */ - if (drv && (cs[i]->index > 0 || drv->major != TTY_MAJOR)) - count += tty_line_name(drv, index, buf + count); - else - count += sprintf(buf + count, "%s%d", - cs[i]->name, cs[i]->index); - - count += sprintf(buf + count, "%c", i ? ' ':'\n'); - } + while (i--) + count += sprintf(buf + count, "%s%d%c", + cs[i]->name, cs[i]->index, i ? ' ':'\n'); console_unlock(); return count; diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c index a446f61ba44..b842635dde5 100644 --- a/drivers/usb/class/cdc-acm.c +++ b/drivers/usb/class/cdc-acm.c @@ -122,23 +122,13 @@ static void acm_release_minor(struct acm *acm) static int acm_ctrl_msg(struct acm *acm, int request, int value, void *buf, int len) { - int retval; - - retval = usb_autopm_get_interface(acm->control); - if (retval) - return retval; - - retval = usb_control_msg(acm->dev, usb_sndctrlpipe(acm->dev, 0), + int retval = usb_control_msg(acm->dev, usb_sndctrlpipe(acm->dev, 0), request, USB_RT_ACM, value, acm->control->altsetting[0].desc.bInterfaceNumber, buf, len, 5000); - dev_dbg(&acm->control->dev, "%s - rq 0x%02x, val %#x, len %#x, result %d\n", __func__, request, value, len, retval); - - usb_autopm_put_interface(acm->control); - return retval < 0 ? 
retval : 0; } @@ -242,9 +232,21 @@ static int acm_write_start(struct acm *acm, int wbn) acm->susp_count); usb_autopm_get_interface_async(acm->control); if (acm->susp_count) { - usb_anchor_urb(wb->urb, &acm->delayed); +#ifdef CONFIG_PM + acm->transmitting++; + wb->urb->transfer_buffer = wb->buf; + wb->urb->transfer_dma = wb->dmah; + wb->urb->transfer_buffer_length = wb->len; + wb->urb->dev = acm->dev; + usb_anchor_urb(wb->urb, &acm->deferred); +#else + if (!acm->delayed_wb) + acm->delayed_wb = wb; + else + usb_autopm_put_interface_async(acm->control); +#endif spin_unlock_irqrestore(&acm->write_lock, flags); - return 0; + return 0; /* A white lie */ } usb_mark_last_busy(acm->dev); rc = acm_start_wb(acm, wb); @@ -544,7 +546,6 @@ static int acm_port_activate(struct tty_port *port, struct tty_struct *tty) { struct acm *acm = container_of(port, struct acm, port); int retval = -ENODEV; - int i; dev_dbg(&acm->control->dev, "%s\n", __func__); @@ -603,8 +604,6 @@ static int acm_port_activate(struct tty_port *port, struct tty_struct *tty) return 0; error_submit_read_urbs: - for (i = 0; i < acm->rx_buflimit; i++) - usb_kill_urb(acm->read_urbs[i]); acm->ctrlout = 0; acm_set_control(acm, acm->ctrlout); error_set_control: @@ -632,8 +631,6 @@ static void acm_port_destruct(struct tty_port *port) static void acm_port_shutdown(struct tty_port *port) { struct acm *acm = container_of(port, struct acm, port); - struct urb *urb; - struct acm_wb *wb; int i; int pm_err; @@ -643,16 +640,6 @@ static void acm_port_shutdown(struct tty_port *port) if (!acm->disconnected) { pm_err = usb_autopm_get_interface(acm->control); acm_set_control(acm, acm->ctrlout = 0); - - for (;;) { - urb = usb_get_from_anchor(&acm->delayed); - if (!urb) - break; - wb = urb->context; - wb->use = 0; - usb_autopm_put_interface_async(acm->control); - } - usb_kill_urb(acm->ctrlurb); for (i = 0; i < ACM_NW; i++) usb_kill_urb(acm->wb[i].urb); @@ -937,12 +924,11 @@ static void acm_tty_set_termios(struct tty_struct *tty, /* FIXME: Needs to clear unsupported bits in the termios */ acm->clocal = ((termios->c_cflag & CLOCAL) != 0); - if (C_BAUD(tty) == B0) { + if (!newline.dwDTERate) { newline.dwDTERate = acm->line.dwDTERate; newctrl &= ~ACM_CTRL_DTR; - } else if (termios_old && (termios_old->c_cflag & CBAUD) == B0) { + } else newctrl |= ACM_CTRL_DTR; - } if (newctrl != acm->ctrlout) acm_set_control(acm, acm->ctrlout = newctrl); @@ -1272,7 +1258,6 @@ made_compressed_probe: acm->no_hangup_in_reset_resume = 1; tty_port_init(&acm->port); acm->port.ops = &acm_port_ops; - init_usb_anchor(&acm->delayed); buf = usb_alloc_coherent(usb_dev, ctrlsize, GFP_KERNEL, &acm->ctrl_dma); if (!buf) { @@ -1527,15 +1512,18 @@ static int acm_suspend(struct usb_interface *intf, pm_message_t message) return -ENODEV; } - spin_lock_irq(&acm->read_lock); - spin_lock(&acm->write_lock); if (PMSG_IS_AUTO(message)) { - if (acm->transmitting) { - spin_unlock(&acm->write_lock); - spin_unlock_irq(&acm->read_lock); + int b; + + spin_lock_irq(&acm->write_lock); + b = acm->transmitting; + spin_unlock_irq(&acm->write_lock); + if (b) return -EBUSY; - } } + + spin_lock_irq(&acm->read_lock); + spin_lock(&acm->write_lock); cnt = acm->susp_count++; spin_unlock(&acm->write_lock); spin_unlock_irq(&acm->read_lock); @@ -1543,7 +1531,8 @@ static int acm_suspend(struct usb_interface *intf, pm_message_t message) if (cnt) return 0; - stop_data_traffic(acm); + if (test_bit(ASYNCB_INITIALIZED, &acm->port.flags)) + stop_data_traffic(acm); return 0; } @@ -1551,31 +1540,57 @@ static int acm_suspend(struct 
usb_interface *intf, pm_message_t message) static int acm_resume(struct usb_interface *intf) { struct acm *acm = usb_get_intfdata(intf); - struct urb *urb; - int rv = 0; + int rv = 0; + int cnt; +#ifdef CONFIG_PM + struct urb *res; +#else + struct acm_wb *wb; +#endif - if (!acm) { - pr_err("%s: !acm\n", __func__); - return -ENODEV; - } + if (!acm) { + pr_err("%s: !acm\n", __func__); + return -ENODEV; + } - spin_lock_irq(&acm->read_lock); - spin_lock(&acm->write_lock); - if (acm->susp_count <= 0) - goto out; + spin_lock_irq(&acm->read_lock); + if (acm->susp_count > 0) { + acm->susp_count -= 1; + cnt = acm->susp_count; + } else { + spin_unlock_irq(&acm->read_lock); + return 0; + } + spin_unlock_irq(&acm->read_lock); - if (--acm->susp_count) - goto out; + if (cnt) + return 0; if (test_bit(ASYNCB_INITIALIZED, &acm->port.flags)) { - rv = usb_submit_urb(acm->ctrlurb, GFP_ATOMIC); - - for (;;) { - urb = usb_get_from_anchor(&acm->delayed); - if (!urb) - break; - - acm_start_wb(acm, urb->context); + rv = usb_submit_urb(acm->ctrlurb, GFP_NOIO); + spin_lock_irq(&acm->write_lock); +#ifdef CONFIG_PM + while ((res = usb_get_from_anchor(&acm->deferred))) { + /* decrement ref count*/ + usb_put_urb(res); + rv = usb_submit_urb(res, GFP_ATOMIC); + if (rv < 0) { + dev_dbg(&acm->data->dev, + "usb_submit_urb(pending request) failed: %d", + rv); + usb_unanchor_urb(res); + acm_write_done(acm, res->context); + } + } + spin_unlock_irq(&acm->write_lock); +#else + if (acm->delayed_wb) { + wb = acm->delayed_wb; + acm->delayed_wb = NULL; + spin_unlock_irq(&acm->write_lock); + acm_start_wb(acm, wb); + } else { + spin_unlock_irq(&acm->write_lock); } /* @@ -1583,14 +1598,12 @@ static int acm_resume(struct usb_interface *intf) * do the write path at all cost */ if (rv < 0) - goto out; + goto err_out; - rv = acm_submit_read_urbs(acm, GFP_ATOMIC); + rv = acm_submit_read_urbs(acm, GFP_NOIO); } -out: - spin_unlock(&acm->write_lock); - spin_unlock_irq(&acm->read_lock); +err_out: return rv; } @@ -1675,32 +1688,17 @@ static const struct usb_device_id acm_ids[] = { { USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ }, - { USB_DEVICE(0x2184, 0x001c) }, /* GW Instek AFG-2225 */ { USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */ }, /* Motorola H24 HSPA module: */ { USB_DEVICE(0x22b8, 0x2d91) }, /* modem */ - { USB_DEVICE(0x22b8, 0x2d92), /* modem + diagnostics */ - .driver_info = NO_UNION_NORMAL, /* handle only modem interface */ - }, - { USB_DEVICE(0x22b8, 0x2d93), /* modem + AT port */ - .driver_info = NO_UNION_NORMAL, /* handle only modem interface */ - }, - { USB_DEVICE(0x22b8, 0x2d95), /* modem + AT port + diagnostics */ - .driver_info = NO_UNION_NORMAL, /* handle only modem interface */ - }, - { USB_DEVICE(0x22b8, 0x2d96), /* modem + NMEA */ - .driver_info = NO_UNION_NORMAL, /* handle only modem interface */ - }, - { USB_DEVICE(0x22b8, 0x2d97), /* modem + diagnostics + NMEA */ - .driver_info = NO_UNION_NORMAL, /* handle only modem interface */ - }, - { USB_DEVICE(0x22b8, 0x2d99), /* modem + AT port + NMEA */ - .driver_info = NO_UNION_NORMAL, /* handle only modem interface */ - }, - { USB_DEVICE(0x22b8, 0x2d9a), /* modem + AT port + diagnostics + NMEA */ - .driver_info = NO_UNION_NORMAL, /* handle only modem interface */ - }, + { USB_DEVICE(0x22b8, 0x2d92) }, /* modem + diagnostics */ + { USB_DEVICE(0x22b8, 0x2d93) }, /* modem + AT port */ + { USB_DEVICE(0x22b8, 0x2d95) }, /* modem + AT port + diagnostics */ + { USB_DEVICE(0x22b8, 0x2d96) 
}, /* modem + NMEA */ + { USB_DEVICE(0x22b8, 0x2d97) }, /* modem + diagnostics + NMEA */ + { USB_DEVICE(0x22b8, 0x2d99) }, /* modem + AT port + NMEA */ + { USB_DEVICE(0x22b8, 0x2d9a) }, /* modem + AT port + diagnostics + NMEA */ { USB_DEVICE(0x0572, 0x1329), /* Hummingbird huc56s (Conexant) */ .driver_info = NO_UNION_NORMAL, /* union descriptor misplaced on diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h index 9f0390d6005..5f30b9a3e2c 100644 --- a/drivers/usb/class/cdc-acm.h +++ b/drivers/usb/class/cdc-acm.h @@ -123,7 +123,8 @@ struct acm { unsigned int throttle_req:1; /* throttle requested */ unsigned int no_hangup_in_reset_resume:1; /* do not call tty_hangup in acm_reset_resume */ u8 bInterval; - struct usb_anchor delayed; /* writes queued for a device about to be woken */ + struct acm_wb *delayed_wb; /* write queued for a device about to be woken */ + struct usb_anchor deferred; }; #define CDC_DATA_INTERFACE_TYPE 0x0a diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c index 65243832519..548d1996590 100644 --- a/drivers/usb/core/config.c +++ b/drivers/usb/core/config.c @@ -718,10 +718,6 @@ int usb_get_configuration(struct usb_device *dev) result = -ENOMEM; goto err; } - - if (dev->quirks & USB_QUIRK_DELAY_INIT) - msleep(100); - result = usb_get_descriptor(dev, USB_DT_CONFIG, cfgno, bigbuffer, length); if (result < 0) { diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c index 5951d146929..a83cf15d0ba 100644 --- a/drivers/usb/core/driver.c +++ b/drivers/usb/core/driver.c @@ -953,7 +953,8 @@ EXPORT_SYMBOL_GPL(usb_deregister); * it doesn't support pre_reset/post_reset/reset_resume or * because it doesn't support suspend/resume. * - * The caller must hold @intf's device's lock, but not @intf's lock. + * The caller must hold @intf's device's lock, but not its pm_mutex + * and not @intf->dev.sem. */ void usb_forced_unbind_intf(struct usb_interface *intf) { @@ -966,37 +967,16 @@ void usb_forced_unbind_intf(struct usb_interface *intf) intf->needs_binding = 1; } -/* - * Unbind drivers for @udev's marked interfaces. These interfaces have - * the needs_binding flag set, for example by usb_resume_interface(). - * - * The caller must hold @udev's device lock. - */ -static void unbind_marked_interfaces(struct usb_device *udev) -{ - struct usb_host_config *config; - int i; - struct usb_interface *intf; - - config = udev->actconfig; - if (config) { - for (i = 0; i < config->desc.bNumInterfaces; ++i) { - intf = config->interface[i]; - if (intf->dev.driver && intf->needs_binding) - usb_forced_unbind_intf(intf); - } - } -} - /* Delayed forced unbinding of a USB interface driver and scan * for rebinding. * - * The caller must hold @intf's device's lock, but not @intf's lock. + * The caller must hold @intf's device's lock, but not its pm_mutex + * and not @intf->dev.sem. * * Note: Rebinds will be skipped if a system sleep transition is in * progress and the PM "complete" callback hasn't occurred yet. */ -static void usb_rebind_intf(struct usb_interface *intf) +void usb_rebind_intf(struct usb_interface *intf) { int rc; @@ -1013,66 +993,68 @@ static void usb_rebind_intf(struct usb_interface *intf) } } -/* - * Rebind drivers to @udev's marked interfaces. These interfaces have - * the needs_binding flag set. +#ifdef CONFIG_PM + +/* Unbind drivers for @udev's interfaces that don't support suspend/resume + * There is no check for reset_resume here because it can be determined + * only during resume whether reset_resume is needed. 
* * The caller must hold @udev's device lock. */ -static void rebind_marked_interfaces(struct usb_device *udev) +static void unbind_no_pm_drivers_interfaces(struct usb_device *udev) { struct usb_host_config *config; int i; struct usb_interface *intf; + struct usb_driver *drv; config = udev->actconfig; if (config) { for (i = 0; i < config->desc.bNumInterfaces; ++i) { intf = config->interface[i]; - if (intf->needs_binding) - usb_rebind_intf(intf); + + if (intf->dev.driver) { + drv = to_usb_driver(intf->dev.driver); + if (!drv->suspend || !drv->resume) + usb_forced_unbind_intf(intf); + } } } } -/* - * Unbind all of @udev's marked interfaces and then rebind all of them. - * This ordering is necessary because some drivers claim several interfaces - * when they are first probed. +/* Unbind drivers for @udev's interfaces that failed to support reset-resume. + * These interfaces have the needs_binding flag set by usb_resume_interface(). * * The caller must hold @udev's device lock. */ -void usb_unbind_and_rebind_marked_interfaces(struct usb_device *udev) +static void unbind_no_reset_resume_drivers_interfaces(struct usb_device *udev) { - unbind_marked_interfaces(udev); - rebind_marked_interfaces(udev); -} + struct usb_host_config *config; + int i; + struct usb_interface *intf; -#ifdef CONFIG_PM + config = udev->actconfig; + if (config) { + for (i = 0; i < config->desc.bNumInterfaces; ++i) { + intf = config->interface[i]; + if (intf->dev.driver && intf->needs_binding) + usb_forced_unbind_intf(intf); + } + } +} -/* Unbind drivers for @udev's interfaces that don't support suspend/resume - * There is no check for reset_resume here because it can be determined - * only during resume whether reset_resume is needed. - * - * The caller must hold @udev's device lock. - */ -static void unbind_no_pm_drivers_interfaces(struct usb_device *udev) +static void do_rebind_interfaces(struct usb_device *udev) { struct usb_host_config *config; int i; struct usb_interface *intf; - struct usb_driver *drv; config = udev->actconfig; if (config) { for (i = 0; i < config->desc.bNumInterfaces; ++i) { intf = config->interface[i]; - - if (intf->dev.driver) { - drv = to_usb_driver(intf->dev.driver); - if (!drv->suspend || !drv->resume) - usb_forced_unbind_intf(intf); - } + if (intf->needs_binding) + usb_rebind_intf(intf); } } } @@ -1403,7 +1385,7 @@ int usb_resume_complete(struct device *dev) * whose needs_binding flag is set */ if (udev->state != USB_STATE_NOTATTACHED) - rebind_marked_interfaces(udev); + do_rebind_interfaces(udev); return 0; } @@ -1428,7 +1410,7 @@ int usb_resume(struct device *dev, pm_message_t msg) pm_runtime_disable(dev); pm_runtime_set_active(dev); pm_runtime_enable(dev); - unbind_marked_interfaces(udev); + unbind_no_reset_resume_drivers_interfaces(udev); } /* Avoid PM error messages for devices disconnected while suspended @@ -1763,13 +1745,10 @@ int usb_runtime_suspend(struct device *dev) if (status == -EAGAIN || status == -EBUSY) usb_mark_last_busy(udev); - /* - * The PM core reacts badly unless the return code is 0, - * -EAGAIN, or -EBUSY, so always return -EBUSY on an error - * (except for root hubs, because they don't suspend through - * an upstream port like other USB devices). + /* The PM core reacts badly unless the return code is 0, + * -EAGAIN, or -EBUSY, so always return -EBUSY on an error. 
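/*
 * Illustrative sketch only (not part of the diff): the driver.c hunk above
 * removes usb_unbind_and_rebind_marked_interfaces(), whose comment explains
 * that all marked interfaces must be unbound before any is rebound, because a
 * driver probed on one interface may claim several.  The two-pass loop below
 * models that ordering; struct iface and the unbind()/rebind() callbacks are
 * hypothetical, not the USB core's types.
 */
struct iface { int needs_binding; int bound; };

static void unbind(struct iface *i) { i->bound = 0; }
static void rebind(struct iface *i) { i->bound = 1; i->needs_binding = 0; }

static void unbind_and_rebind_marked(struct iface *v, int n)
{
	for (int i = 0; i < n; i++)                /* pass 1: free every marked interface */
		if (v[i].needs_binding && v[i].bound)
			unbind(&v[i]);
	for (int i = 0; i < n; i++)                /* pass 2: rebind once all are free */
		if (v[i].needs_binding)
			rebind(&v[i]);
}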
*/ - if (status != 0 && udev->parent) + if (status != 0) return -EBUSY; return status; } diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c index 4676917e2b1..caeb8d6d39f 100644 --- a/drivers/usb/core/hcd-pci.c +++ b/drivers/usb/core/hcd-pci.c @@ -75,7 +75,7 @@ static void for_each_companion(struct pci_dev *pdev, struct usb_hcd *hcd, PCI_SLOT(companion->devfn) != slot) continue; companion_hcd = pci_get_drvdata(companion); - if (!companion_hcd || !companion_hcd->self.root_hub) + if (!companion_hcd) continue; fn(pdev, hcd, companion, companion_hcd); } diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c index f6e5ceb03af..d53547d2e4c 100644 --- a/drivers/usb/core/hcd.c +++ b/drivers/usb/core/hcd.c @@ -1947,8 +1947,6 @@ int usb_alloc_streams(struct usb_interface *interface, return -EINVAL; if (dev->speed != USB_SPEED_SUPER) return -EINVAL; - if (dev->state < USB_STATE_CONFIGURED) - return -ENODEV; /* Streams only apply to bulk endpoints. */ for (i = 0; i < num_eps; i++) diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c index 7ec89c4f729..c14c34fc0fa 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -887,25 +887,6 @@ static int hub_usb3_port_disable(struct usb_hub *hub, int port1) if (!hub_is_superspeed(hub->hdev)) return -EINVAL; - ret = hub_port_status(hub, port1, &portstatus, &portchange); - if (ret < 0) - return ret; - - /* - * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI - * Controller [1022:7814] will have spurious result making the following - * usb 3.0 device hotplugging route to the 2.0 root hub and recognized - * as high-speed device if we set the usb 3.0 port link state to - * Disabled. Since it's already in USB_SS_PORT_LS_RX_DETECT state, we - * check the state here to avoid the bug. - */ - if ((portstatus & USB_PORT_STAT_LINK_STATE) == - USB_SS_PORT_LS_RX_DETECT) { - dev_dbg(&hub->ports[port1 - 1]->dev, - "Not disabling port; link state is RxDetect\n"); - return ret; - } - ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED); if (ret) return ret; @@ -1165,8 +1146,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type) /* Tell khubd to disconnect the device or * check for a new connection */ - if (udev || (portstatus & USB_PORT_STAT_CONNECTION) || - (portstatus & USB_PORT_STAT_OVERCURRENT)) + if (udev || (portstatus & USB_PORT_STAT_CONNECTION)) set_bit(port1, hub->change_bits); } else if (portstatus & USB_PORT_STAT_ENABLE) { @@ -1668,19 +1648,10 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id) desc = intf->cur_altsetting; hdev = interface_to_usbdev(intf); - /* - * Hubs have proper suspend/resume support, except for root hubs - * where the controller driver doesn't have bus_suspend and - * bus_resume methods. - */ - if (hdev->parent) { /* normal device */ - usb_enable_autosuspend(hdev); - } else { /* root hub */ - const struct hc_driver *drv = bus_to_hcd(hdev->bus)->driver; + pm_runtime_set_autosuspend_delay(&hdev->dev, 200); - if (drv->bus_suspend && drv->bus_resume) - usb_enable_autosuspend(hdev); - } + /* Hubs have proper suspend/resume support. */ + usb_enable_autosuspend(hdev); if (hdev->level == MAX_TOPO_LEVEL) { dev_err(&intf->dev, @@ -1910,10 +1881,8 @@ void usb_set_device_state(struct usb_device *udev, || new_state == USB_STATE_SUSPENDED) ; /* No change to wakeup settings */ else if (new_state == USB_STATE_CONFIGURED) - wakeup = (udev->quirks & - USB_QUIRK_IGNORE_REMOTE_WAKEUP) ? 
0 : - udev->actconfig->desc.bmAttributes & - USB_CONFIG_ATT_WAKEUP; + wakeup = udev->actconfig->desc.bmAttributes + & USB_CONFIG_ATT_WAKEUP; else wakeup = 0; } @@ -3139,43 +3108,6 @@ static int finish_port_resume(struct usb_device *udev) } /* - * There are some SS USB devices which take longer time for link training. - * XHCI specs 4.19.4 says that when Link training is successful, port - * sets CSC bit to 1. So if SW reads port status before successful link - * training, then it will not find device to be present. - * USB Analyzer log with such buggy devices show that in some cases - * device switch on the RX termination after long delay of host enabling - * the VBUS. In few other cases it has been seen that device fails to - * negotiate link training in first attempt. It has been - * reported till now that few devices take as long as 2000 ms to train - * the link after host enabling its VBUS and termination. Following - * routine implements a 2000 ms timeout for link training. If in a case - * link trains before timeout, loop will exit earlier. - * - * FIXME: If a device was connected before suspend, but was removed - * while system was asleep, then the loop in the following routine will - * only exit at timeout. - * - * This routine should only be called when persist is enabled for a SS - * device. - */ -static int wait_for_ss_port_enable(struct usb_device *udev, - struct usb_hub *hub, int *port1, - u16 *portchange, u16 *portstatus) -{ - int status = 0, delay_ms = 0; - - while (delay_ms < 2000) { - if (status || *portstatus & USB_PORT_STAT_CONNECTION) - break; - msleep(20); - delay_ms += 20; - status = hub_port_status(hub, *port1, portstatus, portchange); - } - return status; -} - -/* * usb_port_resume - re-activate a suspended usb device's upstream port * @udev: device to re-activate, not a root hub * Context: must be able to sleep; device not locked; pm locks held @@ -3277,10 +3209,6 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg) clear_bit(port1, hub->busy_bits); - if (udev->persist_enabled && hub_is_superspeed(hub->hdev)) - status = wait_for_ss_port_enable(udev, hub, &port1, &portchange, - &portstatus); - status = check_port_resume_type(udev, hub, port1, status, portchange, portstatus); if (status == 0) @@ -4669,10 +4597,9 @@ static void hub_events(void) hub = list_entry(tmp, struct usb_hub, event_list); kref_get(&hub->kref); - hdev = hub->hdev; - usb_get_dev(hdev); spin_unlock_irq(&hub_event_lock); + hdev = hub->hdev; hub_dev = hub->intfdev; intf = to_usb_interface(hub_dev); dev_dbg(hub_dev, "state %d ports %d chg %04x evt %04x\n", @@ -4887,7 +4814,6 @@ static void hub_events(void) usb_autopm_put_interface(intf); loop_disconnected: usb_unlock_device(hdev); - usb_put_dev(hdev); kref_put(&hub->kref, hub_release); } /* end while (1) */ @@ -5298,11 +5224,10 @@ int usb_reset_device(struct usb_device *udev) else if (cintf->condition == USB_INTERFACE_BOUND) rebind = 1; - if (rebind) - cintf->needs_binding = 1; } + if (ret == 0 && rebind) + usb_rebind_intf(cintf); } - usb_unbind_and_rebind_marked_interfaces(udev); } usb_autosuspend_device(udev); diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c index a301b3fa622..01fe36273f3 100644 --- a/drivers/usb/core/quirks.c +++ b/drivers/usb/core/quirks.c @@ -46,10 +46,6 @@ static const struct usb_device_id usb_quirk_list[] = { /* Microsoft LifeCam-VX700 v2.0 */ { USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME }, - /* Logitech HD Pro Webcams C920 and C930e */ - { USB_DEVICE(0x046d, 0x082d), 
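/*
 * Illustrative sketch only (not part of the diff): the hub.c hunk above
 * removes wait_for_ss_port_enable(), which polls the port status every 20 ms
 * and gives up after 2000 ms so that slow SuperSpeed link training is still
 * detected.  The loop below shows the same bounded-poll pattern in plain C;
 * port_connected() is a hypothetical stand-in for hub_port_status().
 */
#include <stdbool.h>
#include <unistd.h>

static bool port_connected(void) { return false; }     /* placeholder status check */

static int wait_for_port(void)
{
	int delay_ms = 0;

	while (delay_ms < 2000) {
		if (port_connected())
			return 0;                      /* link trained before the deadline */
		usleep(20 * 1000);                     /* equivalent of msleep(20) */
		delay_ms += 20;
	}
	return -1;                                     /* timed out after 2000 ms */
}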
.driver_info = USB_QUIRK_DELAY_INIT }, - { USB_DEVICE(0x046d, 0x0843), .driver_info = USB_QUIRK_DELAY_INIT }, - /* Logitech Quickcam Fusion */ { USB_DEVICE(0x046d, 0x08c1), .driver_info = USB_QUIRK_RESET_RESUME }, @@ -162,10 +158,6 @@ static const struct usb_device_id usb_interface_quirk_list[] = { { USB_VENDOR_AND_INTERFACE_INFO(0x046d, USB_CLASS_VIDEO, 1, 0), .driver_info = USB_QUIRK_RESET_RESUME }, - /* ASUS Base Station(T100) */ - { USB_DEVICE(0x0b05, 0x17e0), .driver_info = - USB_QUIRK_IGNORE_REMOTE_WAKEUP }, - { } /* terminating entry must be last */ }; diff --git a/drivers/usb/core/usb.h b/drivers/usb/core/usb.h index 0923add72b5..823857767a1 100644 --- a/drivers/usb/core/usb.h +++ b/drivers/usb/core/usb.h @@ -55,7 +55,7 @@ extern int usb_match_one_id_intf(struct usb_device *dev, extern int usb_match_device(struct usb_device *dev, const struct usb_device_id *id); extern void usb_forced_unbind_intf(struct usb_interface *intf); -extern void usb_unbind_and_rebind_marked_interfaces(struct usb_device *udev); +extern void usb_rebind_intf(struct usb_interface *intf); extern int usb_hub_claim_port(struct usb_device *hdev, unsigned port, struct dev_state *owner); diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c index 1d386030d3c..358375e0b29 100644 --- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -603,6 +603,12 @@ static int dwc3_remove(struct platform_device *pdev) { struct dwc3 *dwc = platform_get_drvdata(pdev); + usb_phy_set_suspend(dwc->usb2_phy, 1); + usb_phy_set_suspend(dwc->usb3_phy, 1); + + pm_runtime_put(&pdev->dev); + pm_runtime_disable(&pdev->dev); + dwc3_debugfs_exit(dwc); switch (dwc->mode) { @@ -623,15 +629,8 @@ static int dwc3_remove(struct platform_device *pdev) dwc3_event_buffers_cleanup(dwc); dwc3_free_event_buffers(dwc); - - usb_phy_set_suspend(dwc->usb2_phy, 1); - usb_phy_set_suspend(dwc->usb3_phy, 1); - dwc3_core_exit(dwc); - pm_runtime_put_sync(&pdev->dev); - pm_runtime_disable(&pdev->dev); - return 0; } diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h index 7ab3c998525..27dad993b00 100644 --- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -836,15 +836,15 @@ struct dwc3_event_depevt { * 12 - VndrDevTstRcved * @reserved15_12: Reserved, not used * @event_info: Information about this event - * @reserved31_25: Reserved, not used + * @reserved31_24: Reserved, not used */ struct dwc3_event_devt { u32 one_bit:1; u32 device_event:7; u32 type:4; u32 reserved15_12:4; - u32 event_info:9; - u32 reserved31_25:7; + u32 event_info:8; + u32 reserved31_24:8; } __packed; /** diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c index cb5f8c44eb3..34638b92500 100644 --- a/drivers/usb/dwc3/dwc3-omap.c +++ b/drivers/usb/dwc3/dwc3-omap.c @@ -395,9 +395,9 @@ static int dwc3_omap_remove(struct platform_device *pdev) struct dwc3_omap *omap = platform_get_drvdata(pdev); dwc3_omap_disable_irqs(omap); - device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core); pm_runtime_put_sync(&pdev->dev); pm_runtime_disable(&pdev->dev); + device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core); return 0; } diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c index 6cd418f6ac0..3cea676ba90 100644 --- a/drivers/usb/dwc3/ep0.c +++ b/drivers/usb/dwc3/ep0.c @@ -270,7 +270,7 @@ static void dwc3_ep0_stall_and_restart(struct dwc3 *dwc) /* stall is always issued on EP0 */ dep = dwc->eps[0]; - __dwc3_gadget_ep_set_halt(dep, 1, false); + __dwc3_gadget_ep_set_halt(dep, 1); dep->flags = DWC3_EP_ENABLED; dwc->delayed_status = 
false; @@ -480,7 +480,7 @@ static int dwc3_ep0_handle_feature(struct dwc3 *dwc, return -EINVAL; if (set == 0 && (dep->flags & DWC3_EP_WEDGE)) break; - ret = __dwc3_gadget_ep_set_halt(dep, set, true); + ret = __dwc3_gadget_ep_set_halt(dep, set); if (ret) return -EINVAL; break; } diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 8f8e75e392d..69948ad3983 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -550,11 +550,12 @@ static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep, if (!usb_endpoint_xfer_isoc(desc)) return 0; + memset(&trb_link, 0, sizeof(trb_link)); + /* Link TRB for ISOC. The HWO bit is never reset */ trb_st_hw = &dep->trb_pool[0]; trb_link = &dep->trb_pool[DWC3_TRB_NUM - 1]; - memset(trb_link, 0, sizeof(*trb_link)); trb_link->bpl = lower_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw)); trb_link->bph = upper_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw)); @@ -603,10 +604,6 @@ static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep) dwc3_remove_requests(dwc, dep); - /* make sure HW endpoint isn't stalled */ - if (dep->flags & DWC3_EP_STALL) - __dwc3_gadget_ep_set_halt(dep, 0, false); - reg = dwc3_readl(dwc->regs, DWC3_DALEPENA); reg &= ~DWC3_DALEPENA_EP(dep->number); dwc3_writel(dwc->regs, DWC3_DALEPENA, reg); @@ -1205,7 +1202,7 @@ out0: return ret; } -int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol) +int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value) { struct dwc3_gadget_ep_cmd_params params; struct dwc3 *dwc = dep->dwc; @@ -1214,14 +1211,6 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol) memset(&params, 0x00, sizeof(params)); if (value) { - if (!protocol && ((dep->direction && dep->flags & DWC3_EP_BUSY) || - (!list_empty(&dep->req_queued) || - !list_empty(&dep->request_list)))) { - dev_dbg(dwc->dev, "%s: pending request, cannot halt\n", - dep->name); - return -EAGAIN; - } - ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, DWC3_DEPCMD_SETSTALL, &params); if (ret) @@ -1261,7 +1250,7 @@ static int dwc3_gadget_ep_set_halt(struct usb_ep *ep, int value) goto out; } - ret = __dwc3_gadget_ep_set_halt(dep, value, false); + ret = __dwc3_gadget_ep_set_halt(dep, value); out: spin_unlock_irqrestore(&dwc->lock, flags); @@ -1281,7 +1270,7 @@ static int dwc3_gadget_ep_set_wedge(struct usb_ep *ep) if (dep->number == 0 || dep->number == 1) return dwc3_gadget_ep0_set_halt(ep, 1); else - return __dwc3_gadget_ep_set_halt(dep, 1, false); + return dwc3_gadget_ep_set_halt(ep, 1); } /* -------------------------------------------------------------------------- */ diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h index b3f25c302e3..99e6d724882 100644 --- a/drivers/usb/dwc3/gadget.h +++ b/drivers/usb/dwc3/gadget.h @@ -114,7 +114,7 @@ void dwc3_ep0_out_start(struct dwc3 *dwc); int dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value); int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request, gfp_t gfp_flags); -int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol); +int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value); int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep, unsigned cmd, struct dwc3_gadget_ep_cmd_params *params); int dwc3_send_gadget_generic_command(struct dwc3 *dwc, int cmd, u32 param); diff --git a/drivers/usb/gadget/at91_udc.c b/drivers/usb/gadget/at91_udc.c index 55e96131753..073b938f913 100644 --- a/drivers/usb/gadget/at91_udc.c +++ b/drivers/usb/gadget/at91_udc.c @@ -1703,6 +1703,16 @@ static int at91udc_probe(struct
platform_device *pdev) return -ENODEV; } + if (pdev->num_resources != 2) { + DBG("invalid num_resources\n"); + return -ENODEV; + } + if ((pdev->resource[0].flags != IORESOURCE_MEM) + || (pdev->resource[1].flags != IORESOURCE_IRQ)) { + DBG("invalid resource type\n"); + return -ENODEV; + } + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (!res) return -ENXIO; diff --git a/drivers/usb/gadget/f_acm.c b/drivers/usb/gadget/f_acm.c index 3384486c288..ab1065afbbd 100644 --- a/drivers/usb/gadget/f_acm.c +++ b/drivers/usb/gadget/f_acm.c @@ -430,12 +430,11 @@ static int acm_set_alt(struct usb_function *f, unsigned intf, unsigned alt) if (acm->notify->driver_data) { VDBG(cdev, "reset acm control interface %d\n", intf); usb_ep_disable(acm->notify); - } - - if (!acm->notify->desc) + } else { + VDBG(cdev, "init acm ctrl interface %d\n", intf); if (config_ep_by_speed(cdev->gadget, f, acm->notify)) return -EINVAL; - + } usb_ep_enable(acm->notify); acm->notify->driver_data = acm; diff --git a/drivers/usb/gadget/f_fs.c b/drivers/usb/gadget/f_fs.c index 6294a79dbe7..8b400f00f84 100644 --- a/drivers/usb/gadget/f_fs.c +++ b/drivers/usb/gadget/f_fs.c @@ -1397,13 +1397,11 @@ static int functionfs_bind(struct ffs_data *ffs, struct usb_composite_dev *cdev) ffs->ep0req->context = ffs; lang = ffs->stringtabs; - if (lang) { - for (; *lang; ++lang) { - struct usb_string *str = (*lang)->strings; - int id = first_id; - for (; str->s; ++id, ++str) - str->id = id; - } + for (lang = ffs->stringtabs; *lang; ++lang) { + struct usb_string *str = (*lang)->strings; + int id = first_id; + for (; str->s; ++id, ++str) + str->id = id; } ffs->gadget = cdev->gadget; diff --git a/drivers/usb/gadget/inode.c b/drivers/usb/gadget/inode.c index 42a30903d4f..570c005062a 100644 --- a/drivers/usb/gadget/inode.c +++ b/drivers/usb/gadget/inode.c @@ -1509,7 +1509,7 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl) } break; -#ifndef CONFIG_USB_PXA25X +#ifndef CONFIG_USB_GADGET_PXA25X /* PXA automagically handles this request too */ case USB_REQ_GET_CONFIGURATION: if (ctrl->bRequestType != 0x80) diff --git a/drivers/usb/gadget/tcm_usb_gadget.c b/drivers/usb/gadget/tcm_usb_gadget.c index e4d8d79a499..7cacd6ae818 100644 --- a/drivers/usb/gadget/tcm_usb_gadget.c +++ b/drivers/usb/gadget/tcm_usb_gadget.c @@ -1614,7 +1614,7 @@ static struct se_wwn *usbg_make_tport( return ERR_PTR(-ENOMEM); } tport->tport_wwpn = wwpn; - snprintf(tport->tport_name, sizeof(tport->tport_name), "%s", wnn_name); + snprintf(tport->tport_name, sizeof(tport->tport_name), wnn_name); return &tport->tport_wwn; } diff --git a/drivers/usb/gadget/udc-core.c b/drivers/usb/gadget/udc-core.c index 58a861395ea..afe9b9e50cc 100644 --- a/drivers/usb/gadget/udc-core.c +++ b/drivers/usb/gadget/udc-core.c @@ -447,11 +447,6 @@ static ssize_t usb_udc_softconn_store(struct device *dev, { struct usb_udc *udc = container_of(dev, struct usb_udc, dev); - if (!udc->driver) { - dev_err(dev, "soft-connect without a gadget driver\n"); - return -EOPNOTSUPP; - } - if (sysfs_streq(buf, "connect")) { usb_gadget_udc_start(udc->gadget, udc->driver); usb_gadget_connect(udc->gadget); diff --git a/drivers/usb/gadget/zero.c b/drivers/usb/gadget/zero.c index d31814c7238..0deb9d6cde2 100644 --- a/drivers/usb/gadget/zero.c +++ b/drivers/usb/gadget/zero.c @@ -280,7 +280,7 @@ static int __init zero_bind(struct usb_composite_dev *cdev) ss_opts->isoc_interval = gzero_options.isoc_interval; ss_opts->isoc_maxpacket = gzero_options.isoc_maxpacket; ss_opts->isoc_mult = 
gzero_options.isoc_mult; - ss_opts->isoc_maxburst = gzero_options.isoc_maxburst; + ss_opts->isoc_maxburst = gzero_options.isoc_maxpacket; ss_opts->bulk_buflen = gzero_options.bulk_buflen; func_ss = usb_get_function(func_inst_ss); diff --git a/drivers/usb/host/ehci-fsl.c b/drivers/usb/host/ehci-fsl.c index bfcf38383f7..3c0a49a298d 100644 --- a/drivers/usb/host/ehci-fsl.c +++ b/drivers/usb/host/ehci-fsl.c @@ -261,8 +261,7 @@ static int ehci_fsl_setup_phy(struct usb_hcd *hcd, break; } - if (pdata->have_sysif_regs && - pdata->controller_ver > FSL_USB_VER_1_6 && + if (pdata->have_sysif_regs && pdata->controller_ver && (phy_mode == FSL_USB2_PHY_ULPI)) { /* check PHY_CLK_VALID to get phy clk valid */ if (!spin_event_timeout(in_be32(non_ehci + FSL_SOC_USB_CTRL) & diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c index 0bbd8a5ee57..8864548a374 100644 --- a/drivers/usb/host/ehci-hcd.c +++ b/drivers/usb/host/ehci-hcd.c @@ -980,6 +980,8 @@ rescan: } qh->exception = 1; + if (ehci->rh_state < EHCI_RH_RUNNING) + qh->qh_state = QH_STATE_IDLE; switch (qh->qh_state) { case QH_STATE_LINKED: case QH_STATE_COMPLETING: diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c index fe131565d09..8fe401c7d15 100644 --- a/drivers/usb/host/ehci-pci.c +++ b/drivers/usb/host/ehci-pci.c @@ -35,21 +35,6 @@ static const char hcd_name[] = "ehci-pci"; #define PCI_DEVICE_ID_INTEL_CE4100_USB 0x2e70 /*-------------------------------------------------------------------------*/ -#define PCI_DEVICE_ID_INTEL_QUARK_X1000_SOC 0x0939 -static inline bool is_intel_quark_x1000(struct pci_dev *pdev) -{ - return pdev->vendor == PCI_VENDOR_ID_INTEL && - pdev->device == PCI_DEVICE_ID_INTEL_QUARK_X1000_SOC; -} - -/* - * 0x84 is the offset of in/out threshold register, - * and it is the same offset as the register of 'hostpc'. - */ -#define intel_quark_x1000_insnreg01 hostpc - -/* Maximum usable threshold value is 0x7f dwords for both IN and OUT */ -#define INTEL_QUARK_X1000_EHCI_MAX_THRESHOLD 0x007f007f /* called after powerup, by probe or system-pm "wakeup" */ static int ehci_pci_reinit(struct ehci_hcd *ehci, struct pci_dev *pdev) @@ -65,16 +50,6 @@ static int ehci_pci_reinit(struct ehci_hcd *ehci, struct pci_dev *pdev) if (!retval) ehci_dbg(ehci, "MWI active\n"); - /* Reset the threshold limit */ - if (is_intel_quark_x1000(pdev)) { - /* - * For the Intel QUARK X1000, raise the I/O threshold to the - * maximum usable value in order to improve performance. - */ - ehci_writel(ehci, INTEL_QUARK_X1000_EHCI_MAX_THRESHOLD, - ehci->regs->intel_quark_x1000_insnreg01); - } - return 0; } diff --git a/drivers/usb/host/ohci-hub.c b/drivers/usb/host/ohci-hub.c index cd908066fde..60ff4220e8b 100644 --- a/drivers/usb/host/ohci-hub.c +++ b/drivers/usb/host/ohci-hub.c @@ -90,24 +90,6 @@ __acquires(ohci->lock) dl_done_list (ohci); finish_unlinks (ohci, ohci_frame_no(ohci)); - /* - * Some controllers don't handle "global" suspend properly if - * there are unsuspended ports. For these controllers, put all - * the enabled ports into suspend before suspending the root hub. 
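/*
 * Illustrative sketch only (not part of the diff): the ohci-hub.c hunk above
 * drops the OHCI_QUIRK_GLOBAL_SUSPEND workaround, whose comment explains that
 * some controllers mishandle a global suspend while ports are still awake, so
 * every enabled-but-not-suspended port is suspended individually first.  The
 * loop below models that walk over simulated port-status words; PES/PSS stand
 * in for RH_PS_PES/RH_PS_PSS.
 */
#define PES 0x1u    /* port enable status bit */
#define PSS 0x2u    /* port suspend status bit */

static void suspend_enabled_ports(unsigned int *portstat, int num_ports)
{
	for (int i = 0; i < num_ports; i++) {
		if ((portstat[i] & (PES | PSS)) == PES)
			portstat[i] |= PSS;            /* request a per-port suspend */
	}
}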
- */ - if (ohci->flags & OHCI_QUIRK_GLOBAL_SUSPEND) { - __hc32 __iomem *portstat = ohci->regs->roothub.portstatus; - int i; - unsigned temp; - - for (i = 0; i < ohci->num_ports; (++i, ++portstat)) { - temp = ohci_readl(ohci, portstat); - if ((temp & (RH_PS_PES | RH_PS_PSS)) == - RH_PS_PES) - ohci_writel(ohci, RH_PS_PSS, portstat); - } - } - /* maybe resume can wake root hub */ if (ohci_to_hcd(ohci)->self.root_hub->do_remote_wakeup || autostop) { ohci->hc_control |= OHCI_CTRL_RWE; diff --git a/drivers/usb/host/ohci-pci.c b/drivers/usb/host/ohci-pci.c index 67af8eef653..ef6782bd1fa 100644 --- a/drivers/usb/host/ohci-pci.c +++ b/drivers/usb/host/ohci-pci.c @@ -172,7 +172,6 @@ static int ohci_quirk_amd700(struct usb_hcd *hcd) pci_dev_put(amd_smbus_dev); amd_smbus_dev = NULL; - ohci->flags |= OHCI_QUIRK_GLOBAL_SUSPEND; return 0; } diff --git a/drivers/usb/host/ohci-q.c b/drivers/usb/host/ohci-q.c index 1e1563da181..37dc8373200 100644 --- a/drivers/usb/host/ohci-q.c +++ b/drivers/usb/host/ohci-q.c @@ -314,7 +314,8 @@ static void periodic_unlink (struct ohci_hcd *ohci, struct ed *ed) * - ED_OPER: when there's any request queued, the ED gets rescheduled * immediately. HC should be working on them. * - * - ED_IDLE: when there's no TD queue or the HC isn't running. + * - ED_IDLE: when there's no TD queue. there's no reason for the HC + * to care about this ED; safe to disable the endpoint. * * When finish_unlinks() runs later, after SOF interrupt, it will often * complete one or more URB unlinks before making that state change. @@ -927,10 +928,6 @@ rescan_all: int completed, modified; __hc32 *prev; - /* Is this ED already invisible to the hardware? */ - if (ed->state == ED_IDLE) - goto ed_idle; - /* only take off EDs that the HC isn't using, accounting for * frame counter wraps and EDs with partially retired TDs */ @@ -960,20 +957,12 @@ skip_ed: } } - /* ED's now officially unlinked, hc doesn't see */ - ed->state = ED_IDLE; - if (quirk_zfmicro(ohci) && ed->type == PIPE_INTERRUPT) - ohci->eds_scheduled--; - ed->hwHeadP &= ~cpu_to_hc32(ohci, ED_H); - ed->hwNextED = 0; - wmb(); - ed->hwINFO &= ~cpu_to_hc32(ohci, ED_SKIP | ED_DEQUEUE); -ed_idle: - /* reentrancy: if we drop the schedule lock, someone might * have modified this list. normally it's just prepending * entries (which we'd ignore), but paranoia won't hurt. */ + *last = ed->ed_next; + ed->ed_next = NULL; modified = 0; /* unlink urbs as requested, but rescan the list after @@ -1031,20 +1020,19 @@ rescan_this: if (completed && !list_empty (&ed->td_list)) goto rescan_this; - /* - * If no TDs are queued, take ED off the ed_rm_list. - * Otherwise, if the HC is running, reschedule. - * If not, leave it on the list for further dequeues. 
- */ - if (list_empty(&ed->td_list)) { - *last = ed->ed_next; - ed->ed_next = NULL; - } else if (ohci->rh_state == OHCI_RH_RUNNING) { - *last = ed->ed_next; - ed->ed_next = NULL; - ed_schedule(ohci, ed); - } else { - last = &ed->ed_next; + /* ED's now officially unlinked, hc doesn't see */ + ed->state = ED_IDLE; + if (quirk_zfmicro(ohci) && ed->type == PIPE_INTERRUPT) + ohci->eds_scheduled--; + ed->hwHeadP &= ~cpu_to_hc32(ohci, ED_H); + ed->hwNextED = 0; + wmb (); + ed->hwINFO &= ~cpu_to_hc32 (ohci, ED_SKIP | ED_DEQUEUE); + + /* but if there's work queued, reschedule */ + if (!list_empty (&ed->td_list)) { + if (ohci->rh_state == OHCI_RH_RUNNING) + ed_schedule (ohci, ed); } if (modified) diff --git a/drivers/usb/host/ohci.h b/drivers/usb/host/ohci.h index f2521f3185d..d3299143d9e 100644 --- a/drivers/usb/host/ohci.h +++ b/drivers/usb/host/ohci.h @@ -405,8 +405,6 @@ struct ohci_hcd { #define OHCI_QUIRK_HUB_POWER 0x100 /* distrust firmware power/oc setup */ #define OHCI_QUIRK_AMD_PLL 0x200 /* AMD PLL quirk*/ #define OHCI_QUIRK_AMD_PREFETCH 0x400 /* pre-fetch for ISO transfer */ -#define OHCI_QUIRK_GLOBAL_SUSPEND 0x800 /* must suspend ports */ - // there are also chip quirks/bugs in init logic struct work_struct nec_work; /* Worker for NEC quirk */ diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c index 9cfe3af3101..4c338ec03a0 100644 --- a/drivers/usb/host/pci-quirks.c +++ b/drivers/usb/host/pci-quirks.c @@ -555,14 +555,6 @@ static const struct dmi_system_id ehci_dmi_nohandoff_table[] = { DMI_MATCH(DMI_BIOS_VERSION, "Lucid-"), }, }, - { - /* HASEE E200 */ - .matches = { - DMI_MATCH(DMI_BOARD_VENDOR, "HASEE"), - DMI_MATCH(DMI_BOARD_NAME, "E210"), - DMI_MATCH(DMI_BIOS_VERSION, "6.00"), - }, - }, { } }; @@ -572,14 +564,9 @@ static void ehci_bios_handoff(struct pci_dev *pdev, { int try_handoff = 1, tried_handoff = 0; - /* - * The Pegatron Lucid tablet sporadically waits for 98 seconds trying - * the handoff on its unused controller. Skip it. - * - * The HASEE E200 hangs when the semaphore is set (bugzilla #77021). - */ - if (pdev->vendor == 0x8086 && (pdev->device == 0x283a || - pdev->device == 0x27cc)) { + /* The Pegatron Lucid tablet sporadically waits for 98 seconds trying + * the handoff on its unused controller. Skip it. */ + if (pdev->vendor == 0x8086 && pdev->device == 0x283a) { if (dmi_check_system(ehci_dmi_nohandoff_table)) try_handoff = 0; } diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c index 8c2c8ba91f6..eff8b13831e 100644 --- a/drivers/usb/host/xhci-hub.c +++ b/drivers/usb/host/xhci-hub.c @@ -462,8 +462,7 @@ void xhci_test_and_clear_bit(struct xhci_hcd *xhci, __le32 __iomem **port_array, } /* Updates Link Status for super Speed port */ -static void xhci_hub_report_link_state(struct xhci_hcd *xhci, - u32 *status, u32 status_reg) +static void xhci_hub_report_link_state(u32 *status, u32 status_reg) { u32 pls = status_reg & PORT_PLS_MASK; @@ -502,8 +501,7 @@ static void xhci_hub_report_link_state(struct xhci_hcd *xhci, * in which sometimes the port enters compliance mode * caused by a delay on the host-device negotiation. 
*/ - if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) && - (pls == USB_SS_PORT_LS_COMP_MOD)) + if (pls == USB_SS_PORT_LS_COMP_MOD) pls |= USB_PORT_STAT_CONNECTION; } @@ -696,7 +694,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, } /* Update Port Link State for super speed ports*/ if (hcd->speed == HCD_USB3) { - xhci_hub_report_link_state(xhci, &status, temp); + xhci_hub_report_link_state(&status, temp); /* * Verify if all USB3 Ports Have entered U0 already. * Delete Compliance Mode Timer if so. diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index 677f032482f..f2e57a1112c 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -1794,16 +1794,6 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci) kfree(cur_cd); } - num_ports = HCS_MAX_PORTS(xhci->hcs_params1); - for (i = 0; i < num_ports && xhci->rh_bw; i++) { - struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table; - for (j = 0; j < XHCI_MAX_INTERVAL; j++) { - struct list_head *ep = &bwt->interval_bw[j].endpoints; - while (!list_empty(ep)) - list_del_init(ep->next); - } - } - for (i = 1; i < MAX_HC_SLOTS; ++i) xhci_free_virt_device(xhci, i); @@ -1844,6 +1834,16 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci) if (!xhci->rh_bw) goto no_bw; + num_ports = HCS_MAX_PORTS(xhci->hcs_params1); + for (i = 0; i < num_ports; i++) { + struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table; + for (j = 0; j < XHCI_MAX_INTERVAL; j++) { + struct list_head *ep = &bwt->interval_bw[j].endpoints; + while (!list_empty(ep)) + list_del_init(ep->next); + } + } + for (i = 0; i < num_ports; i++) { struct xhci_tt_bw_info *tt, *n; list_for_each_entry_safe(tt, n, &xhci->rh_bw[i].tts, tt_list) { diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index 0e57bcb8e3f..159e3c6d92b 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -87,10 +87,6 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) /* AMD PLL quirk */ if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info()) xhci->quirks |= XHCI_AMD_PLL_FIX; - - if (pdev->vendor == PCI_VENDOR_ID_AMD) - xhci->quirks |= XHCI_TRUST_TX_LENGTH; - if (pdev->vendor == PCI_VENDOR_ID_INTEL) { xhci->quirks |= XHCI_LPM_SUPPORT; xhci->quirks |= XHCI_INTEL_HOST; @@ -117,9 +113,6 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) xhci_dbg(xhci, "QUIRK: Resetting on resume\n"); xhci->quirks |= XHCI_TRUST_TX_LENGTH; } - if (pdev->vendor == PCI_VENDOR_ID_RENESAS && - pdev->device == 0x0015) - xhci->quirks |= XHCI_RESET_ON_RESUME; if (pdev->vendor == PCI_VENDOR_ID_VIA) xhci->quirks |= XHCI_RESET_ON_RESUME; } @@ -163,10 +156,6 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) struct usb_hcd *hcd; driver = (struct hc_driver *)id->driver_data; - - /* Prevent runtime suspending between USB-2 and USB-3 initialization */ - pm_runtime_get_noresume(&dev->dev); - /* Register the USB 2.0 roothub. * FIXME: USB core must know to register the USB 2.0 roothub first. * This is sort of silly, because we could just set the HCD driver flags @@ -176,7 +165,7 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) retval = usb_hcd_pci_probe(dev, id); if (retval) - goto put_runtime_pm; + return retval; /* USB 2.0 roothub is stored in the PCI device now. 
*/ hcd = dev_get_drvdata(&dev->dev); @@ -205,17 +194,12 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) if (xhci->quirks & XHCI_LPM_SUPPORT) hcd_to_bus(xhci->shared_hcd)->root_hub->lpm_capable = 1; - /* USB-2 and USB-3 roothubs initialized, allow runtime pm suspend */ - pm_runtime_put_noidle(&dev->dev); - return 0; put_usb3_hcd: usb_put_hcd(xhci->shared_hcd); dealloc_usb2_hcd: usb_hcd_pci_remove(dev); -put_runtime_pm: - pm_runtime_put_noidle(&dev->dev); return retval; } diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index e067fd1d2de..15c76fced8f 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -2551,8 +2551,7 @@ static int handle_tx_event(struct xhci_hcd *xhci, * last TRB of the previous TD. The command completion handle * will take care the rest. */ - if (!event_seg && (trb_comp_code == COMP_STOP || - trb_comp_code == COMP_STOP_INVAL)) { + if (!event_seg && trb_comp_code == COMP_STOP_INVAL) { ret = 0; goto cleanup; } @@ -3610,7 +3609,7 @@ static unsigned int xhci_get_burst_count(struct xhci_hcd *xhci, return 0; max_burst = urb->ep->ss_ep_comp.bMaxBurst; - return DIV_ROUND_UP(total_packet_count, max_burst + 1) - 1; + return roundup(total_packet_count, max_burst + 1) - 1; } /* diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index 3d92b016f8b..1b62ee32000 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -394,16 +394,16 @@ static int xhci_try_enable_msi(struct usb_hcd *hcd) #else -static inline int xhci_try_enable_msi(struct usb_hcd *hcd) +static int xhci_try_enable_msi(struct usb_hcd *hcd) { return 0; } -static inline void xhci_cleanup_msix(struct xhci_hcd *xhci) +static void xhci_cleanup_msix(struct xhci_hcd *xhci) { } -static inline void xhci_msix_sync_irqs(struct xhci_hcd *xhci) +static void xhci_msix_sync_irqs(struct xhci_hcd *xhci) { } @@ -960,7 +960,7 @@ int xhci_suspend(struct xhci_hcd *xhci) */ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) { - u32 command, temp = 0, status; + u32 command, temp = 0; struct usb_hcd *hcd = xhci_to_hcd(xhci); struct usb_hcd *secondary_hcd; int retval = 0; @@ -1084,12 +1084,8 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) done: if (retval == 0) { - /* Resume root hubs only when have pending events. */ - status = readl(&xhci->op_regs->status); - if (status & STS_EINT) { - usb_hcd_resume_root_hub(hcd); - usb_hcd_resume_root_hub(xhci->shared_hcd); - } + usb_hcd_resume_root_hub(hcd); + usb_hcd_resume_root_hub(xhci->shared_hcd); } /* @@ -4437,21 +4433,13 @@ static int xhci_change_max_exit_latency(struct xhci_hcd *xhci, int ret; spin_lock_irqsave(&xhci->lock, flags); - - virt_dev = xhci->devs[udev->slot_id]; - - /* - * virt_dev might not exists yet if xHC resumed from hibernate (S4) and - * xHC was re-initialized. Exit latency will be set later after - * hub_port_finish_reset() is done and xhci->devs[] are re-allocated - */ - - if (!virt_dev || max_exit_latency == virt_dev->current_mel) { + if (max_exit_latency == xhci->devs[udev->slot_id]->current_mel) { spin_unlock_irqrestore(&xhci->lock, flags); return 0; } /* Attempt to issue an Evaluate Context command to change the MEL. 
*/ + virt_dev = xhci->devs[udev->slot_id]; command = xhci->lpm_command; xhci_slot_copy(xhci, command->in_ctx, virt_dev->out_ctx); spin_unlock_irqrestore(&xhci->lock, flags); diff --git a/drivers/usb/misc/sisusbvga/sisusb.c b/drivers/usb/misc/sisusbvga/sisusb.c index 0aef801edbc..de98906f786 100644 --- a/drivers/usb/misc/sisusbvga/sisusb.c +++ b/drivers/usb/misc/sisusbvga/sisusb.c @@ -3248,7 +3248,6 @@ static const struct usb_device_id sisusb_table[] = { { USB_DEVICE(0x0711, 0x0918) }, { USB_DEVICE(0x0711, 0x0920) }, { USB_DEVICE(0x0711, 0x0950) }, - { USB_DEVICE(0x0711, 0x5200) }, { USB_DEVICE(0x182d, 0x021c) }, { USB_DEVICE(0x182d, 0x0269) }, { } diff --git a/drivers/usb/misc/usbtest.c b/drivers/usb/misc/usbtest.c index 98438b90838..8b4ca1cb450 100644 --- a/drivers/usb/misc/usbtest.c +++ b/drivers/usb/misc/usbtest.c @@ -7,10 +7,9 @@ #include <linux/moduleparam.h> #include <linux/scatterlist.h> #include <linux/mutex.h> -#include <linux/timer.h> + #include <linux/usb.h> -#define SIMPLE_IO_TIMEOUT 10000 /* in milliseconds */ /*-------------------------------------------------------------------------*/ @@ -367,7 +366,6 @@ static int simple_io( int max = urb->transfer_buffer_length; struct completion completion; int retval = 0; - unsigned long expire; urb->context = &completion; while (retval == 0 && iterations-- > 0) { @@ -380,15 +378,9 @@ static int simple_io( if (retval != 0) break; - expire = msecs_to_jiffies(SIMPLE_IO_TIMEOUT); - if (!wait_for_completion_timeout(&completion, expire)) { - usb_kill_urb(urb); - retval = (urb->status == -ENOENT ? - -ETIMEDOUT : urb->status); - } else { - retval = urb->status; - } - + /* NOTE: no timeouts; can't be broken out of by interrupt */ + wait_for_completion(&completion); + retval = urb->status; urb->dev = udev; if (retval == 0 && usb_pipein(urb->pipe)) retval = simple_check_buf(tdev, urb); @@ -484,14 +476,6 @@ alloc_sglist(int nents, int max, int vary) return sg; } -static void sg_timeout(unsigned long _req) -{ - struct usb_sg_request *req = (struct usb_sg_request *) _req; - - req->status = -ETIMEDOUT; - usb_sg_cancel(req); -} - static int perform_sglist( struct usbtest_dev *tdev, unsigned iterations, @@ -503,9 +487,6 @@ static int perform_sglist( { struct usb_device *udev = testdev_to_usbdev(tdev); int retval = 0; - struct timer_list sg_timer; - - setup_timer_on_stack(&sg_timer, sg_timeout, (unsigned long) req); while (retval == 0 && iterations-- > 0) { retval = usb_sg_init(req, udev, pipe, @@ -516,10 +497,7 @@ static int perform_sglist( if (retval) break; - mod_timer(&sg_timer, jiffies + - msecs_to_jiffies(SIMPLE_IO_TIMEOUT)); usb_sg_wait(req); - del_timer_sync(&sg_timer); retval = req->status; /* FIXME check resulting data pattern */ @@ -1171,11 +1149,6 @@ static int unlink1(struct usbtest_dev *dev, int pipe, int size, int async) urb->context = &completion; urb->complete = unlink1_callback; - if (usb_pipeout(urb->pipe)) { - simple_fill_buf(urb); - urb->transfer_flags |= URB_ZERO_PACKET; - } - /* keep the endpoint busy. there are lots of hc/hcd-internal * states, and testing should get to all of them over time. * @@ -1306,11 +1279,6 @@ static int unlink_queued(struct usbtest_dev *dev, int pipe, unsigned num, unlink_queued_callback, &ctx); ctx.urbs[i]->transfer_dma = buf_dma; ctx.urbs[i]->transfer_flags = URB_NO_TRANSFER_DMA_MAP; - - if (usb_pipeout(ctx.urbs[i]->pipe)) { - simple_fill_buf(ctx.urbs[i]); - ctx.urbs[i]->transfer_flags |= URB_ZERO_PACKET; - } } /* Submit all the URBs and then unlink URBs num - 4 and num - 2. 
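/*
 * Illustrative sketch only (not part of the diff): the usbtest.c hunk above
 * removes the SIMPLE_IO_TIMEOUT logic that waits up to 10 seconds for an URB
 * to complete and kills it when the deadline passes, reporting -ETIMEDOUT.
 * The plain-C loop below shows the same bounded-wait-then-cancel pattern;
 * io_done() and io_cancel() are hypothetical stand-ins for
 * wait_for_completion_timeout() and usb_kill_urb().
 */
#include <stdbool.h>
#include <unistd.h>

#define SIMPLE_IO_TIMEOUT_MS 10000

static bool io_done(void)   { return false; }   /* placeholder completion check */
static void io_cancel(void) { }                 /* placeholder cancellation */

static int wait_or_cancel(void)
{
	for (int waited = 0; waited < SIMPLE_IO_TIMEOUT_MS; waited += 10) {
		if (io_done())
			return 0;                       /* completed in time */
		usleep(10 * 1000);                      /* poll every 10 ms */
	}
	io_cancel();                                    /* deadline hit: cancel the I/O */
	return -1;                                      /* caller reports a timeout */
}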
*/ diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c index da0caf3f4b2..37a261a6bb6 100644 --- a/drivers/usb/musb/musb_core.c +++ b/drivers/usb/musb/musb_core.c @@ -440,6 +440,7 @@ void musb_hnp_stop(struct musb *musb) static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb, u8 devctl) { + struct usb_otg *otg = musb->xceiv->otg; irqreturn_t handled = IRQ_NONE; dev_dbg(musb->controller, "<== DevCtl=%02x, int_usb=0x%x\n", devctl, @@ -654,7 +655,7 @@ static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb, break; case OTG_STATE_B_PERIPHERAL: musb_g_suspend(musb); - musb->is_active = musb->g.b_hnp_enable; + musb->is_active = otg->gadget->b_hnp_enable; if (musb->is_active) { musb->xceiv->state = OTG_STATE_B_WAIT_ACON; dev_dbg(musb->controller, "HNP: Setting timer for b_ase0_brst\n"); @@ -670,7 +671,7 @@ static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb, break; case OTG_STATE_A_HOST: musb->xceiv->state = OTG_STATE_A_SUSPEND; - musb->is_active = musb_to_hcd(musb)->self.b_hnp_enable; + musb->is_active = otg->host->b_hnp_enable; break; case OTG_STATE_B_HOST: /* Transition to B_PERIPHERAL, see 6.8.2.6 p 44 */ diff --git a/drivers/usb/phy/phy-isp1301-omap.c b/drivers/usb/phy/phy-isp1301-omap.c index 9201feb97e9..ae481afcb3e 100644 --- a/drivers/usb/phy/phy-isp1301-omap.c +++ b/drivers/usb/phy/phy-isp1301-omap.c @@ -1299,7 +1299,7 @@ isp1301_set_host(struct usb_otg *otg, struct usb_bus *host) return isp1301_otg_enable(isp); return 0; -#elif !IS_ENABLED(CONFIG_USB_OMAP) +#elif !defined(CONFIG_USB_GADGET_OMAP) // FIXME update its refcount otg->host = host; diff --git a/drivers/usb/phy/phy-ulpi.c b/drivers/usb/phy/phy-ulpi.c index 17ea3f271bd..217339dd7a9 100644 --- a/drivers/usb/phy/phy-ulpi.c +++ b/drivers/usb/phy/phy-ulpi.c @@ -47,8 +47,6 @@ struct ulpi_info { static struct ulpi_info ulpi_ids[] = { ULPI_INFO(ULPI_ID(0x04cc, 0x1504), "NXP ISP1504"), ULPI_INFO(ULPI_ID(0x0424, 0x0006), "SMSC USB331x"), - ULPI_INFO(ULPI_ID(0x0424, 0x0007), "SMSC USB3320"), - ULPI_INFO(ULPI_ID(0x0451, 0x1507), "TI TUSB1210"), }; static int ulpi_set_otg_flags(struct usb_phy *phy) diff --git a/drivers/usb/serial/bus.c b/drivers/usb/serial/bus.c index 7229b265870..3c4db6d196c 100644 --- a/drivers/usb/serial/bus.c +++ b/drivers/usb/serial/bus.c @@ -98,19 +98,13 @@ static int usb_serial_device_remove(struct device *dev) struct usb_serial_port *port; int retval = 0; int minor; - int autopm_err; port = to_usb_serial_port(dev); if (!port) return -ENODEV; - /* - * Make sure suspend/resume doesn't race against port_remove. - * - * Note that no further runtime PM callbacks will be made if - * autopm_get fails. 
- */ - autopm_err = usb_autopm_get_interface(port->serial->interface); + /* make sure suspend/resume doesn't race against port_remove */ + usb_autopm_get_interface(port->serial->interface); minor = port->number; tty_unregister_device(usb_serial_tty_driver, minor); @@ -124,9 +118,7 @@ static int usb_serial_device_remove(struct device *dev) dev_info(dev, "%s converter now disconnected from ttyUSB%d\n", driver->description, minor); - if (!autopm_err) - usb_autopm_put_interface(port->serial->interface); - + usb_autopm_put_interface(port->serial->interface); return retval; } diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c index e9183eda39e..c90d960e091 100644 --- a/drivers/usb/serial/cp210x.c +++ b/drivers/usb/serial/cp210x.c @@ -104,7 +104,6 @@ static const struct usb_device_id id_table[] = { { USB_DEVICE(0x10C4, 0x8218) }, /* Lipowsky Industrie Elektronik GmbH, HARP-1 */ { USB_DEVICE(0x10C4, 0x822B) }, /* Modem EDGE(GSM) Comander 2 */ { USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */ - { USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */ { USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */ { USB_DEVICE(0x10C4, 0x82F9) }, /* Procyon AVS */ { USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */ @@ -122,7 +121,6 @@ static const struct usb_device_id id_table[] = { { USB_DEVICE(0x10C4, 0x8665) }, /* AC-Services OBD-IF */ { USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */ { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */ - { USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */ { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */ { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */ { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */ @@ -154,10 +152,7 @@ static const struct usb_device_id id_table[] = { { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */ { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ - { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ - { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */ { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ - { USB_DEVICE(0x1D6F, 0x0010) }, /* Seluxit ApS RF Dongle */ { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */ { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */ { USB_DEVICE(0x1FB9, 0x0100) }, /* Lake Shore Model 121 Current Source */ diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c index 768c2b4722d..b83da38bc91 100644 --- a/drivers/usb/serial/ftdi_sio.c +++ b/drivers/usb/serial/ftdi_sio.c @@ -148,14 +148,12 @@ static struct ftdi_sio_quirk ftdi_8u2232c_quirk = { * /sys/bus/usb/ftdi_sio/new_id, then send patch/report! 
*/ static struct usb_device_id id_table_combined [] = { - { USB_DEVICE(FTDI_VID, FTDI_BRICK_PID) }, { USB_DEVICE(FTDI_VID, FTDI_ZEITCONTROL_TAGTRACE_MIFARE_PID) }, { USB_DEVICE(FTDI_VID, FTDI_CTI_MINI_PID) }, { USB_DEVICE(FTDI_VID, FTDI_CTI_NANO_PID) }, { USB_DEVICE(FTDI_VID, FTDI_AMC232_PID) }, { USB_DEVICE(FTDI_VID, FTDI_CANUSB_PID) }, { USB_DEVICE(FTDI_VID, FTDI_CANDAPTER_PID) }, - { USB_DEVICE(FTDI_VID, FTDI_BM_ATOM_NANO_PID) }, { USB_DEVICE(FTDI_VID, FTDI_NXTCAM_PID) }, { USB_DEVICE(FTDI_VID, FTDI_EV3CON_PID) }, { USB_DEVICE(FTDI_VID, FTDI_SCS_DEVICE_0_PID) }, @@ -585,8 +583,6 @@ static struct usb_device_id id_table_combined [] = { { USB_DEVICE(FTDI_VID, FTDI_TAVIR_STK500_PID) }, { USB_DEVICE(FTDI_VID, FTDI_TIAO_UMPA_PID), .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, - { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID), - .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, /* * ELV devices: */ @@ -678,10 +674,6 @@ static struct usb_device_id id_table_combined [] = { { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_5_PID) }, { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_6_PID) }, { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_7_PID) }, - { USB_DEVICE(XSENS_VID, XSENS_AWINDA_DONGLE_PID) }, - { USB_DEVICE(XSENS_VID, XSENS_AWINDA_STATION_PID) }, - { USB_DEVICE(XSENS_VID, XSENS_CONVERTER_PID) }, - { USB_DEVICE(XSENS_VID, XSENS_MTW_PID) }, { USB_DEVICE(FTDI_VID, FTDI_OMNI1509) }, { USB_DEVICE(MOBILITY_VID, MOBILITY_USB_SERIAL_PID) }, { USB_DEVICE(FTDI_VID, FTDI_ACTIVE_ROBOTS_PID) }, @@ -729,8 +721,7 @@ static struct usb_device_id id_table_combined [] = { { USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) }, { USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) }, { USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) }, - { USB_DEVICE(TESTO_VID, TESTO_1_PID) }, - { USB_DEVICE(TESTO_VID, TESTO_3_PID) }, + { USB_DEVICE(TESTO_VID, TESTO_USB_INTERFACE_PID) }, { USB_DEVICE(FTDI_VID, FTDI_GAMMA_SCOUT_PID) }, { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13M_PID) }, { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13S_PID) }, @@ -747,7 +738,6 @@ static struct usb_device_id id_table_combined [] = { { USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID), .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk }, { USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) }, - { USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) }, { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_S03_PID) }, { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_59_PID) }, { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_57A_PID) }, @@ -922,43 +912,6 @@ static struct usb_device_id id_table_combined [] = { { USB_DEVICE(FTDI_VID, FTDI_Z3X_PID) }, /* Cressi Devices */ { USB_DEVICE(FTDI_VID, FTDI_CRESSI_PID) }, - /* Brainboxes Devices */ - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_001_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_012_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_023_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_034_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_101_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_1_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_2_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_3_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_4_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_5_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_6_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_7_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_8_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_257_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_1_PID) }, - { USB_DEVICE(BRAINBOXES_VID, 
BRAINBOXES_US_279_2_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_3_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_4_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_313_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_324_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_346_1_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_346_2_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_357_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_606_1_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_606_2_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_606_3_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_701_1_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_701_2_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_1_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) }, - { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) }, - /* ekey Devices */ - { USB_DEVICE(FTDI_VID, FTDI_EKEY_CONV_USB_PID) }, - /* Infineon Devices */ - { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, { }, /* Optional parameter entry */ { } /* Terminating entry */ }; @@ -1589,17 +1542,14 @@ static void ftdi_set_max_packet_size(struct usb_serial_port *port) struct usb_device *udev = serial->dev; struct usb_interface *interface = serial->interface; - struct usb_endpoint_descriptor *ep_desc; + struct usb_endpoint_descriptor *ep_desc = &interface->cur_altsetting->endpoint[1].desc; unsigned num_endpoints; - unsigned i; + int i; num_endpoints = interface->cur_altsetting->desc.bNumEndpoints; dev_info(&udev->dev, "Number of endpoints %d\n", num_endpoints); - if (!num_endpoints) - return; - /* NOTE: some customers have programmed FT232R/FT245R devices * with an endpoint size of 0 - not good. In this case, we * want to override the endpoint descriptor setting and use a diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h index 302ab9a71f0..e599fbfcde5 100644 --- a/drivers/usb/serial/ftdi_sio_ids.h +++ b/drivers/usb/serial/ftdi_sio_ids.h @@ -30,12 +30,6 @@ /*** third-party PIDs (using FTDI_VID) ***/ -/* - * Certain versions of the official Windows FTDI driver reprogrammed - * counterfeit FTDI devices to PID 0. Support these devices anyway. - */ -#define FTDI_BRICK_PID 0x0000 - #define FTDI_LUMEL_PD12_PID 0x6002 /* @@ -48,8 +42,6 @@ /* www.candapter.com Ewert Energy Systems CANdapter device */ #define FTDI_CANDAPTER_PID 0x9F80 /* Product Id */ -#define FTDI_BM_ATOM_NANO_PID 0xa559 /* Basic Micro ATOM Nano USB2Serial */ - /* * Texas Instruments XDS100v2 JTAG / BeagleBone A3 * http://processors.wiki.ti.com/index.php/XDS100 @@ -148,19 +140,12 @@ /* * Xsens Technologies BV products (http://www.xsens.com). 
*/ -#define XSENS_VID 0x2639 -#define XSENS_AWINDA_STATION_PID 0x0101 -#define XSENS_AWINDA_DONGLE_PID 0x0102 -#define XSENS_MTW_PID 0x0200 /* Xsens MTw */ -#define XSENS_CONVERTER_PID 0xD00D /* Xsens USB-serial converter */ - -/* Xsens devices using FTDI VID */ -#define XSENS_CONVERTER_0_PID 0xD388 /* Xsens USB converter */ -#define XSENS_CONVERTER_1_PID 0xD389 /* Xsens Wireless Receiver */ +#define XSENS_CONVERTER_0_PID 0xD388 +#define XSENS_CONVERTER_1_PID 0xD389 #define XSENS_CONVERTER_2_PID 0xD38A -#define XSENS_CONVERTER_3_PID 0xD38B /* Xsens USB-serial converter */ -#define XSENS_CONVERTER_4_PID 0xD38C /* Xsens Wireless Receiver */ -#define XSENS_CONVERTER_5_PID 0xD38D /* Xsens Awinda Station */ +#define XSENS_CONVERTER_3_PID 0xD38B +#define XSENS_CONVERTER_4_PID 0xD38C +#define XSENS_CONVERTER_5_PID 0xD38D #define XSENS_CONVERTER_6_PID 0xD38E #define XSENS_CONVERTER_7_PID 0xD38F @@ -553,11 +538,6 @@ */ #define FTDI_TIAO_UMPA_PID 0x8a98 /* TIAO/DIYGADGET USB Multi-Protocol Adapter */ -/* - * NovaTech product ids (FTDI_VID) - */ -#define FTDI_NT_ORIONLXM_PID 0x7c90 /* OrionLXm Substation Automation Platform */ - /********************************/ /** third-party VID/PID combos **/ @@ -599,12 +579,6 @@ #define RATOC_PRODUCT_ID_USB60F 0xb020 /* - * Infineon Technologies - */ -#define INFINEON_VID 0x058b -#define INFINEON_TRIBOARD_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */ - -/* * Acton Research Corp. */ #define ACTON_VID 0x0647 /* Vendor ID */ @@ -819,8 +793,7 @@ * Submitted by Colin Leroy */ #define TESTO_VID 0x128D -#define TESTO_1_PID 0x0001 -#define TESTO_3_PID 0x0003 +#define TESTO_USB_INTERFACE_PID 0x0001 /* * Mobility Electronics products. @@ -847,12 +820,6 @@ #define TELLDUS_TELLSTICK_PID 0x0C30 /* RF control dongle 433 MHz using FT232RL */ /* - * NOVITUS printers - */ -#define NOVITUS_VID 0x1a28 -#define NOVITUS_BONO_E_PID 0x6010 - -/* * RT Systems programming cables for various ham radios */ #define RTSYSTEMS_VID 0x2100 /* Vendor ID */ @@ -1359,45 +1326,3 @@ * Manufacturer: Cressi */ #define FTDI_CRESSI_PID 0x87d0 - -/* - * Brainboxes devices - */ -#define BRAINBOXES_VID 0x05d1 -#define BRAINBOXES_VX_001_PID 0x1001 /* VX-001 ExpressCard 1 Port RS232 */ -#define BRAINBOXES_VX_012_PID 0x1002 /* VX-012 ExpressCard 2 Port RS232 */ -#define BRAINBOXES_VX_023_PID 0x1003 /* VX-023 ExpressCard 1 Port RS422/485 */ -#define BRAINBOXES_VX_034_PID 0x1004 /* VX-034 ExpressCard 2 Port RS422/485 */ -#define BRAINBOXES_US_101_PID 0x1011 /* US-101 1xRS232 */ -#define BRAINBOXES_US_324_PID 0x1013 /* US-324 1xRS422/485 1Mbaud */ -#define BRAINBOXES_US_606_1_PID 0x2001 /* US-606 6 Port RS232 Serial Port 1 and 2 */ -#define BRAINBOXES_US_606_2_PID 0x2002 /* US-606 6 Port RS232 Serial Port 3 and 4 */ -#define BRAINBOXES_US_606_3_PID 0x2003 /* US-606 6 Port RS232 Serial Port 4 and 6 */ -#define BRAINBOXES_US_701_1_PID 0x2011 /* US-701 4xRS232 1Mbaud Port 1 and 2 */ -#define BRAINBOXES_US_701_2_PID 0x2012 /* US-701 4xRS422 1Mbaud Port 3 and 4 */ -#define BRAINBOXES_US_279_1_PID 0x2021 /* US-279 8xRS422 1Mbaud Port 1 and 2 */ -#define BRAINBOXES_US_279_2_PID 0x2022 /* US-279 8xRS422 1Mbaud Port 3 and 4 */ -#define BRAINBOXES_US_279_3_PID 0x2023 /* US-279 8xRS422 1Mbaud Port 5 and 6 */ -#define BRAINBOXES_US_279_4_PID 0x2024 /* US-279 8xRS422 1Mbaud Port 7 and 8 */ -#define BRAINBOXES_US_346_1_PID 0x3011 /* US-346 4xRS422/485 1Mbaud Port 1 and 2 */ -#define BRAINBOXES_US_346_2_PID 0x3012 /* US-346 4xRS422/485 1Mbaud Port 3 and 4 */ -#define BRAINBOXES_US_257_PID 0x5001 /* US-257 2xRS232 1Mbaud */ 
-#define BRAINBOXES_US_313_PID 0x6001 /* US-313 2xRS422/485 1Mbaud */ -#define BRAINBOXES_US_357_PID 0x7001 /* US_357 1xRS232/422/485 */ -#define BRAINBOXES_US_842_1_PID 0x8001 /* US-842 8xRS422/485 1Mbaud Port 1 and 2 */ -#define BRAINBOXES_US_842_2_PID 0x8002 /* US-842 8xRS422/485 1Mbaud Port 3 and 4 */ -#define BRAINBOXES_US_842_3_PID 0x8003 /* US-842 8xRS422/485 1Mbaud Port 5 and 6 */ -#define BRAINBOXES_US_842_4_PID 0x8004 /* US-842 8xRS422/485 1Mbaud Port 7 and 8 */ -#define BRAINBOXES_US_160_1_PID 0x9001 /* US-160 16xRS232 1Mbaud Port 1 and 2 */ -#define BRAINBOXES_US_160_2_PID 0x9002 /* US-160 16xRS232 1Mbaud Port 3 and 4 */ -#define BRAINBOXES_US_160_3_PID 0x9003 /* US-160 16xRS232 1Mbaud Port 5 and 6 */ -#define BRAINBOXES_US_160_4_PID 0x9004 /* US-160 16xRS232 1Mbaud Port 7 and 8 */ -#define BRAINBOXES_US_160_5_PID 0x9005 /* US-160 16xRS232 1Mbaud Port 9 and 10 */ -#define BRAINBOXES_US_160_6_PID 0x9006 /* US-160 16xRS232 1Mbaud Port 11 and 12 */ -#define BRAINBOXES_US_160_7_PID 0x9007 /* US-160 16xRS232 1Mbaud Port 13 and 14 */ -#define BRAINBOXES_US_160_8_PID 0x9008 /* US-160 16xRS232 1Mbaud Port 15 and 16 */ - -/* - * ekey biometric systems GmbH (http://ekey.net/) - */ -#define FTDI_EKEY_CONV_USB_PID 0xCB08 /* Converter USB */ diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c index 8cd6479a8b4..1be6ba7bee2 100644 --- a/drivers/usb/serial/io_ti.c +++ b/drivers/usb/serial/io_ti.c @@ -29,7 +29,6 @@ #include <linux/spinlock.h> #include <linux/mutex.h> #include <linux/serial.h> -#include <linux/swab.h> #include <linux/kfifo.h> #include <linux/ioctl.h> #include <linux/firmware.h> @@ -285,7 +284,7 @@ static int read_download_mem(struct usb_device *dev, int start_address, { int status = 0; __u8 read_length; - u16 be_start_address; + __be16 be_start_address; dev_dbg(&dev->dev, "%s - @ %x for %d\n", __func__, start_address, length); @@ -301,14 +300,10 @@ static int read_download_mem(struct usb_device *dev, int start_address, if (read_length > 1) { dev_dbg(&dev->dev, "%s - @ %x for %d\n", __func__, start_address, read_length); } - /* - * NOTE: Must use swab as wIndex is sent in little-endian - * byte order regardless of host byte order. - */ - be_start_address = swab16((u16)start_address); + be_start_address = cpu_to_be16(start_address); status = ti_vread_sync(dev, UMPC_MEMORY_READ, (__u16)address_type, - be_start_address, + (__force __u16)be_start_address, buffer, read_length); if (status) { @@ -405,7 +400,7 @@ static int write_i2c_mem(struct edgeport_serial *serial, struct device *dev = &serial->serial->dev->dev; int status = 0; int write_length; - u16 be_start_address; + __be16 be_start_address; /* We can only send a maximum of 1 aligned byte page at a time */ @@ -420,16 +415,11 @@ static int write_i2c_mem(struct edgeport_serial *serial, __func__, start_address, write_length); usb_serial_debug_data(dev, __func__, write_length, buffer); - /* - * Write first page. - * - * NOTE: Must use swab as wIndex is sent in little-endian byte order - * regardless of host byte order. 
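The io_ti.c hunk above reverts read_download_mem() and write_i2c_mem() from an unconditional swab16() on the start address back to cpu_to_be16(), and drops the NOTE explaining why swab16() was chosen: wIndex goes out on the wire in little-endian byte order regardless of host byte order, so only an unconditional swap behaves the same on little- and big-endian hosts. A minimal userspace sketch of the difference (separate from the patch; htons() and __builtin_bswap16() stand in for the kernel's cpu_to_be16() and swab16(), and the sample value is invented):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
	uint16_t start_address = 0x1234;	/* invented sample value */

	/* cpu_to_be16()-style: swaps only on little-endian hosts */
	uint16_t conditional = htons(start_address);

	/* swab16()-style: swaps unconditionally on every host */
	uint16_t unconditional = (uint16_t)__builtin_bswap16(start_address);

	printf("cpu_to_be16-style: 0x%04x\n", (unsigned)conditional);
	printf("swab16-style:      0x%04x\n", (unsigned)unconditional);
	return 0;
}

On a little-endian host the two calls agree; on a big-endian host the conditional conversion is a no-op while the unconditional swap still reorders the bytes, which is exactly the case the removed NOTE is about.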
- */ - be_start_address = swab16((u16)start_address); + /* Write first page */ + be_start_address = cpu_to_be16(start_address); status = ti_vsend_sync(serial->serial->dev, UMPC_MEMORY_WRITE, (__u16)address_type, - be_start_address, + (__force __u16)be_start_address, buffer, write_length); if (status) { dev_dbg(dev, "%s - ERROR %d\n", __func__, status); @@ -452,16 +442,11 @@ static int write_i2c_mem(struct edgeport_serial *serial, __func__, start_address, write_length); usb_serial_debug_data(dev, __func__, write_length, buffer); - /* - * Write next page. - * - * NOTE: Must use swab as wIndex is sent in little-endian byte - * order regardless of host byte order. - */ - be_start_address = swab16((u16)start_address); + /* Write next page */ + be_start_address = cpu_to_be16(start_address); status = ti_vsend_sync(serial->serial->dev, UMPC_MEMORY_WRITE, (__u16)address_type, - be_start_address, + (__force __u16)be_start_address, buffer, write_length); if (status) { dev_err(dev, "%s - ERROR %d\n", __func__, status); @@ -608,8 +593,8 @@ static int get_descriptor_addr(struct edgeport_serial *serial, if (rom_desc->Type == desc_type) return start_address; - start_address = start_address + sizeof(struct ti_i2c_desc) + - le16_to_cpu(rom_desc->Size); + start_address = start_address + sizeof(struct ti_i2c_desc) + + rom_desc->Size; } while ((start_address < TI_MAX_I2C_SIZE) && rom_desc->Type); @@ -622,7 +607,7 @@ static int valid_csum(struct ti_i2c_desc *rom_desc, __u8 *buffer) __u16 i; __u8 cs = 0; - for (i = 0; i < le16_to_cpu(rom_desc->Size); i++) + for (i = 0; i < rom_desc->Size; i++) cs = (__u8)(cs + buffer[i]); if (cs != rom_desc->CheckSum) { @@ -676,7 +661,7 @@ static int check_i2c_image(struct edgeport_serial *serial) break; if ((start_address + sizeof(struct ti_i2c_desc) + - le16_to_cpu(rom_desc->Size)) > TI_MAX_I2C_SIZE) { + rom_desc->Size) > TI_MAX_I2C_SIZE) { status = -ENODEV; dev_dbg(dev, "%s - structure too big, erroring out.\n", __func__); break; @@ -691,8 +676,7 @@ static int check_i2c_image(struct edgeport_serial *serial) /* Read the descriptor data */ status = read_rom(serial, start_address + sizeof(struct ti_i2c_desc), - le16_to_cpu(rom_desc->Size), - buffer); + rom_desc->Size, buffer); if (status) break; @@ -701,7 +685,7 @@ static int check_i2c_image(struct edgeport_serial *serial) break; } start_address = start_address + sizeof(struct ti_i2c_desc) + - le16_to_cpu(rom_desc->Size); + rom_desc->Size; } while ((rom_desc->Type != I2C_DESC_TYPE_ION) && (start_address < TI_MAX_I2C_SIZE)); @@ -740,7 +724,7 @@ static int get_manuf_info(struct edgeport_serial *serial, __u8 *buffer) /* Read the descriptor data */ status = read_rom(serial, start_address+sizeof(struct ti_i2c_desc), - le16_to_cpu(rom_desc->Size), buffer); + rom_desc->Size, buffer); if (status) goto exit; @@ -835,7 +819,7 @@ static int build_i2c_fw_hdr(__u8 *header, struct device *dev) firmware_rec = (struct ti_i2c_firmware_rec*)i2c_header->Data; i2c_header->Type = I2C_DESC_TYPE_FIRMWARE_BLANK; - i2c_header->Size = cpu_to_le16(buffer_size); + i2c_header->Size = (__u16)buffer_size; i2c_header->CheckSum = cs; firmware_rec->Ver_Major = OperationalMajorVersion; firmware_rec->Ver_Minor = OperationalMinorVersion; diff --git a/drivers/usb/serial/io_usbvend.h b/drivers/usb/serial/io_usbvend.h index 6f6a856bc37..51f83fbb73b 100644 --- a/drivers/usb/serial/io_usbvend.h +++ b/drivers/usb/serial/io_usbvend.h @@ -594,7 +594,7 @@ struct edge_boot_descriptor { struct ti_i2c_desc { __u8 Type; // Type of descriptor - __le16 Size; // Size of data only 
not including header + __u16 Size; // Size of data only not including header __u8 CheckSum; // Checksum (8 bit sum of data only) __u8 Data[0]; // Data starts here } __attribute__((packed)); diff --git a/drivers/usb/serial/opticon.c b/drivers/usb/serial/opticon.c index b0eb1dfc601..5f4b0cd0f6e 100644 --- a/drivers/usb/serial/opticon.c +++ b/drivers/usb/serial/opticon.c @@ -219,7 +219,7 @@ static int opticon_write(struct tty_struct *tty, struct usb_serial_port *port, /* The conncected devices do not have a bulk write endpoint, * to transmit data to de barcode device the control endpoint is used */ - dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC); + dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_NOIO); if (!dr) { dev_err(&port->dev, "out of memory\n"); count = -ENOMEM; diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 8b3484134ab..68fc9fe6593 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -161,7 +161,6 @@ static void option_instat_callback(struct urb *urb); #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_FULLSPEED 0x9000 #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_HIGHSPEED 0x9001 #define NOVATELWIRELESS_PRODUCT_E362 0x9010 -#define NOVATELWIRELESS_PRODUCT_E371 0x9011 #define NOVATELWIRELESS_PRODUCT_G2 0xA010 #define NOVATELWIRELESS_PRODUCT_MC551 0xB001 @@ -235,31 +234,8 @@ static void option_instat_callback(struct urb *urb); #define QUALCOMM_VENDOR_ID 0x05C6 #define CMOTECH_VENDOR_ID 0x16d8 -#define CMOTECH_PRODUCT_6001 0x6001 -#define CMOTECH_PRODUCT_CMU_300 0x6002 -#define CMOTECH_PRODUCT_6003 0x6003 -#define CMOTECH_PRODUCT_6004 0x6004 -#define CMOTECH_PRODUCT_6005 0x6005 -#define CMOTECH_PRODUCT_CGU_628A 0x6006 -#define CMOTECH_PRODUCT_CHE_628S 0x6007 -#define CMOTECH_PRODUCT_CMU_301 0x6008 -#define CMOTECH_PRODUCT_CHU_628 0x6280 -#define CMOTECH_PRODUCT_CHU_628S 0x6281 -#define CMOTECH_PRODUCT_CDU_680 0x6803 -#define CMOTECH_PRODUCT_CDU_685A 0x6804 -#define CMOTECH_PRODUCT_CHU_720S 0x7001 -#define CMOTECH_PRODUCT_7002 0x7002 -#define CMOTECH_PRODUCT_CHU_629K 0x7003 -#define CMOTECH_PRODUCT_7004 0x7004 -#define CMOTECH_PRODUCT_7005 0x7005 -#define CMOTECH_PRODUCT_CGU_629 0x7006 -#define CMOTECH_PRODUCT_CHU_629S 0x700a -#define CMOTECH_PRODUCT_CHU_720I 0x7211 -#define CMOTECH_PRODUCT_7212 0x7212 -#define CMOTECH_PRODUCT_7213 0x7213 -#define CMOTECH_PRODUCT_7251 0x7251 -#define CMOTECH_PRODUCT_7252 0x7252 -#define CMOTECH_PRODUCT_7253 0x7253 +#define CMOTECH_PRODUCT_6008 0x6008 +#define CMOTECH_PRODUCT_6280 0x6280 #define TELIT_VENDOR_ID 0x1bc7 #define TELIT_PRODUCT_UC864E 0x1003 @@ -267,21 +243,15 @@ static void option_instat_callback(struct urb *urb); #define TELIT_PRODUCT_CC864_DUAL 0x1005 #define TELIT_PRODUCT_CC864_SINGLE 0x1006 #define TELIT_PRODUCT_DE910_DUAL 0x1010 -#define TELIT_PRODUCT_UE910_V2 0x1012 #define TELIT_PRODUCT_LE920 0x1200 -#define TELIT_PRODUCT_LE910 0x1201 /* ZTE PRODUCTS */ #define ZTE_VENDOR_ID 0x19d2 #define ZTE_PRODUCT_MF622 0x0001 #define ZTE_PRODUCT_MF628 0x0015 #define ZTE_PRODUCT_MF626 0x0031 -#define ZTE_PRODUCT_AC2726 0xfff1 -#define ZTE_PRODUCT_CDMA_TECH 0xfffe -#define ZTE_PRODUCT_AC8710T 0xffff #define ZTE_PRODUCT_MC2718 0xffe8 -#define ZTE_PRODUCT_AD3812 0xffeb -#define ZTE_PRODUCT_MC2716 0xffed +#define ZTE_PRODUCT_AC2726 0xfff1 #define BENQ_VENDOR_ID 0x04a5 #define BENQ_PRODUCT_H10 0x4068 @@ -316,7 +286,6 @@ static void option_instat_callback(struct urb *urb); #define ALCATEL_PRODUCT_X060S_X200 0x0000 #define ALCATEL_PRODUCT_X220_X500D 0x0017 #define ALCATEL_PRODUCT_L100V 0x011e 
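Related to the io_usbvend.h and io_ti.c hunks above: the Size field of struct ti_i2c_desc goes back from __le16 to a plain __u16 and the matching le16_to_cpu() conversions are dropped, so a length that arrives little-endian from the device is once again read in host byte order. A small standalone sketch of reading such a field portably (separate from the patch; the three descriptor bytes are invented purely for illustration):

#include <stdio.h>
#include <stdint.h>

/* Assemble a little-endian u16 from raw bytes; correct on any host. */
static uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)(p[0] | (p[1] << 8));
}

int main(void)
{
	/* Type byte, then Size low byte and high byte, as sent by a device */
	const uint8_t desc[] = { 0x02, 0x34, 0x12 };

	uint16_t size = get_le16(&desc[1]);

	/* Prints 0x1234 on little- and big-endian hosts alike */
	printf("descriptor size = 0x%04x\n", (unsigned)size);
	return 0;
}

Assembling the value byte by byte gives the same answer on either host endianness, which is what the __le16 annotation plus le16_to_cpu() expressed in the code being reverted here.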
-#define ALCATEL_PRODUCT_L800MA 0x0203 #define PIRELLI_VENDOR_ID 0x1266 #define PIRELLI_PRODUCT_C100_1 0x1002 @@ -357,12 +326,8 @@ static void option_instat_callback(struct urb *urb); /* Zoom */ #define ZOOM_PRODUCT_4597 0x9607 -/* SpeedUp SU9800 usb 3g modem */ -#define SPEEDUP_PRODUCT_SU9800 0x9800 - /* Haier products */ #define HAIER_VENDOR_ID 0x201e -#define HAIER_PRODUCT_CE81B 0x10f8 #define HAIER_PRODUCT_CE100 0x2009 /* Cinterion (formerly Siemens) products */ @@ -381,13 +346,8 @@ static void option_instat_callback(struct urb *urb); /* Olivetti products */ #define OLIVETTI_VENDOR_ID 0x0b3c #define OLIVETTI_PRODUCT_OLICARD100 0xc000 -#define OLIVETTI_PRODUCT_OLICARD120 0xc001 -#define OLIVETTI_PRODUCT_OLICARD140 0xc002 #define OLIVETTI_PRODUCT_OLICARD145 0xc003 -#define OLIVETTI_PRODUCT_OLICARD155 0xc004 #define OLIVETTI_PRODUCT_OLICARD200 0xc005 -#define OLIVETTI_PRODUCT_OLICARD160 0xc00a -#define OLIVETTI_PRODUCT_OLICARD500 0xc00b /* Celot products */ #define CELOT_VENDOR_ID 0x211f @@ -500,10 +460,6 @@ static void option_instat_callback(struct urb *urb); #define INOVIA_VENDOR_ID 0x20a6 #define INOVIA_SEW858 0x1105 -/* VIA Telecom */ -#define VIATELECOM_VENDOR_ID 0x15eb -#define VIATELECOM_PRODUCT_CDS7 0x0001 - /* some devices interfaces need special handling due to a number of reasons */ enum option_blacklist_reason { OPTION_BLACKLIST_NONE = 0, @@ -537,26 +493,14 @@ static const struct option_blacklist_info zte_k3765_z_blacklist = { .reserved = BIT(4), }; -static const struct option_blacklist_info zte_ad3812_z_blacklist = { - .sendsetup = BIT(0) | BIT(1) | BIT(2), -}; - static const struct option_blacklist_info zte_mc2718_z_blacklist = { .sendsetup = BIT(1) | BIT(2) | BIT(3) | BIT(4), }; -static const struct option_blacklist_info zte_mc2716_z_blacklist = { - .sendsetup = BIT(1) | BIT(2) | BIT(3), -}; - static const struct option_blacklist_info huawei_cdc12_blacklist = { .reserved = BIT(1) | BIT(2), }; -static const struct option_blacklist_info net_intf0_blacklist = { - .reserved = BIT(0), -}; - static const struct option_blacklist_info net_intf1_blacklist = { .reserved = BIT(1), }; @@ -590,11 +534,6 @@ static const struct option_blacklist_info zte_1255_blacklist = { .reserved = BIT(3) | BIT(4), }; -static const struct option_blacklist_info telit_le910_blacklist = { - .sendsetup = BIT(0), - .reserved = BIT(1) | BIT(2), -}; - static const struct option_blacklist_info telit_le920_blacklist = { .sendsetup = BIT(0), .reserved = BIT(1) | BIT(5), @@ -1043,7 +982,6 @@ static const struct usb_device_id option_ids[] = { /* Novatel Ovation MC551 a.k.a. 
Verizon USB551L */ { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E362, 0xff, 0xff, 0xff) }, - { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E371, 0xff, 0xff, 0xff) }, { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) }, { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) }, @@ -1093,59 +1031,16 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_INTERFACE_CLASS(BANDRICH_VENDOR_ID, BANDRICH_PRODUCT_1012, 0xff) }, { USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC650) }, { USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC680) }, - { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6000)}, /* ZTE AC8700 */ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */ - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6004) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6005) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CGU_628A) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHE_628S), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_301), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_628), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_628S) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CDU_680) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CDU_685A) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_720S), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7002), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_629K), - .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7004), - .driver_info = (kernel_ulong_t)&net_intf3_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7005) }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CGU_629), - .driver_info = (kernel_ulong_t)&net_intf5_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_629S), - .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_720I), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7212), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7213), - .driver_info = (kernel_ulong_t)&net_intf0_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7251), - .driver_info = (kernel_ulong_t)&net_intf1_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7252), - .driver_info = (kernel_ulong_t)&net_intf1_blacklist }, - { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7253), - .driver_info = (kernel_ulong_t)&net_intf1_blacklist }, + { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6280) }, /* BP3-USB & BP3-EXT HSDPA */ + { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6008) }, { 
USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864E) }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864G) }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_DUAL) }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) }, - { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) }, - { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910), - .driver_info = (kernel_ulong_t)&telit_le910_blacklist }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920), .driver_info = (kernel_ulong_t)&telit_le920_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */ @@ -1513,8 +1408,6 @@ static const struct usb_device_id option_ids[] = { .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */ .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, - { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */ - .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) }, @@ -1570,18 +1463,13 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff93, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff94, 0xff, 0xff, 0xff) }, - { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_CDMA_TECH, 0xff, 0xff, 0xff) }, - { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) }, - { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC8710T, 0xff, 0xff, 0xff) }, + /* NOTE: most ZTE CDMA devices should be driven by zte_ev, not option */ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2718, 0xff, 0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_mc2718_z_blacklist }, - { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AD3812, 0xff, 0xff, 0xff), - .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist }, - { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff), - .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist }, { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) }, { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) }, { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) }, { USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) }, { USB_DEVICE(DLINK_VENDOR_ID, DLINK_PRODUCT_DWM_652) }, @@ -1610,18 +1498,14 @@ static const struct usb_device_id option_ids[] = { .driver_info = (kernel_ulong_t)&net_intf5_blacklist }, { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_L100V), .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, - { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_L800MA), - .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, { USB_DEVICE(AIRPLUS_VENDOR_ID, AIRPLUS_PRODUCT_MCD650) }, { USB_DEVICE(TLAYTECH_VENDOR_ID, TLAYTECH_PRODUCT_TEU800) }, { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14), .driver_info = (kernel_ulong_t)&four_g_w14_blacklist }, - { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, SPEEDUP_PRODUCT_SU9800, 0xff) }, { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) }, { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) }, { 
USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) }, - { USB_DEVICE_AND_INTERFACE_INFO(HAIER_VENDOR_ID, HAIER_PRODUCT_CE81B, 0xff, 0xff, 0xff) }, /* Pirelli */ { USB_DEVICE_INTERFACE_CLASS(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_1, 0xff) }, { USB_DEVICE_INTERFACE_CLASS(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_2, 0xff) }, @@ -1653,21 +1537,12 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDMNET) }, { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) }, - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100), - .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120), - .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD140), - .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, + + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) }, { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD145) }, - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD155), - .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200), - .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD160), - .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD500), - .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, + .driver_info = (kernel_ulong_t)&net_intf6_blacklist + }, { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */ { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) }, @@ -1756,7 +1631,6 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */ { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */ { USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) }, - { USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) }, { } /* Terminating entry */ }; MODULE_DEVICE_TABLE(usb, option_ids); @@ -1950,8 +1824,6 @@ static void option_instat_callback(struct urb *urb) dev_dbg(dev, "%s: type %x req %x\n", __func__, req_pkt->bRequestType, req_pkt->bRequest); } - } else if (status == -ENOENT || status == -ESHUTDOWN) { - dev_dbg(dev, "%s: urb stopped: %d\n", __func__, status); } else dev_err(dev, "%s: error %d\n", __func__, status); @@ -1976,7 +1848,6 @@ static int option_send_setup(struct usb_serial_port *port) struct option_private *priv = intfdata->private; struct usb_wwan_port_private *portdata; int val = 0; - int res; portdata = usb_get_serial_port_data(port); @@ -1985,17 +1856,9 @@ static int option_send_setup(struct usb_serial_port *port) if (portdata->rts_state) val |= 0x02; - res = usb_autopm_get_interface(serial->interface); - if (res) - return res; - - res = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0), + return usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0), 0x22, 0x21, val, priv->bInterfaceNumber, NULL, 0, USB_CTRL_SET_TIMEOUT); - - usb_autopm_put_interface(serial->interface); - - return res; } MODULE_AUTHOR(DRIVER_AUTHOR); diff --git 
a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c index de3e15d8eb1..4471f464ca2 100644 --- a/drivers/usb/serial/pl2303.c +++ b/drivers/usb/serial/pl2303.c @@ -47,7 +47,6 @@ static const struct usb_device_id id_table[] = { { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_GPRS) }, { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_HCR331) }, { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_MOTOROLA) }, - { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_ZTEK) }, { USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID) }, { USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID_RSAQ5) }, { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID) }, @@ -83,9 +82,6 @@ static const struct usb_device_id id_table[] = { { USB_DEVICE(YCCABLE_VENDOR_ID, YCCABLE_PRODUCT_ID) }, { USB_DEVICE(SUPERIAL_VENDOR_ID, SUPERIAL_PRODUCT_ID) }, { USB_DEVICE(HP_VENDOR_ID, HP_LD220_PRODUCT_ID) }, - { USB_DEVICE(HP_VENDOR_ID, HP_LD960_PRODUCT_ID) }, - { USB_DEVICE(HP_VENDOR_ID, HP_LCM220_PRODUCT_ID) }, - { USB_DEVICE(HP_VENDOR_ID, HP_LCM960_PRODUCT_ID) }, { USB_DEVICE(CRESSI_VENDOR_ID, CRESSI_EDY_PRODUCT_ID) }, { USB_DEVICE(ZEAGLE_VENDOR_ID, ZEAGLE_N2ITION3_PRODUCT_ID) }, { USB_DEVICE(SONY_VENDOR_ID, SONY_QN3USB_PRODUCT_ID) }, diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h index 71fd9da1d6e..c38b8c00c06 100644 --- a/drivers/usb/serial/pl2303.h +++ b/drivers/usb/serial/pl2303.h @@ -22,7 +22,6 @@ #define PL2303_PRODUCT_ID_GPRS 0x0609 #define PL2303_PRODUCT_ID_HCR331 0x331a #define PL2303_PRODUCT_ID_MOTOROLA 0x0307 -#define PL2303_PRODUCT_ID_ZTEK 0xe1f1 #define ATEN_VENDOR_ID 0x0557 #define ATEN_VENDOR_ID2 0x0547 @@ -122,11 +121,8 @@ #define SUPERIAL_VENDOR_ID 0x5372 #define SUPERIAL_PRODUCT_ID 0x2303 -/* Hewlett-Packard POS Pole Displays */ +/* Hewlett-Packard LD220-HP POS Pole Display */ #define HP_VENDOR_ID 0x03f0 -#define HP_LD960_PRODUCT_ID 0x0b39 -#define HP_LCM220_PRODUCT_ID 0x3139 -#define HP_LCM960_PRODUCT_ID 0x3239 #define HP_LD220_PRODUCT_ID 0x3524 /* Cressi Edy (diving computer) PC interface */ diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c index 43d93dbf7d7..968a40201e5 100644 --- a/drivers/usb/serial/qcserial.c +++ b/drivers/usb/serial/qcserial.c @@ -136,57 +136,12 @@ static const struct usb_device_id id_table[] = { {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x68a2, 0)}, /* Sierra Wireless MC7710 Device Management */ {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x68a2, 2)}, /* Sierra Wireless MC7710 NMEA */ {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x68a2, 3)}, /* Sierra Wireless MC7710 Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x68c0, 0)}, /* Sierra Wireless MC73xx Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x68c0, 2)}, /* Sierra Wireless MC73xx NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x68c0, 3)}, /* Sierra Wireless MC73xx Modem */ {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x901c, 0)}, /* Sierra Wireless EM7700 Device Management */ {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x901c, 2)}, /* Sierra Wireless EM7700 NMEA */ {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x901c, 3)}, /* Sierra Wireless EM7700 Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x901f, 0)}, /* Sierra Wireless EM7355 Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x901f, 2)}, /* Sierra Wireless EM7355 NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x901f, 3)}, /* Sierra Wireless EM7355 Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9040, 0)}, /* Sierra Wireless Modem Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9040, 2)}, /* Sierra Wireless Modem NMEA */ - 
{USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9040, 3)}, /* Sierra Wireless Modem Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9041, 0)}, /* Sierra Wireless MC7305/MC7355 Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9041, 2)}, /* Sierra Wireless MC7305/MC7355 NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9041, 3)}, /* Sierra Wireless MC7305/MC7355 Modem */ {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9051, 0)}, /* Netgear AirCard 340U Device Management */ {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9051, 2)}, /* Netgear AirCard 340U NMEA */ {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9051, 3)}, /* Netgear AirCard 340U Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9053, 0)}, /* Sierra Wireless Modem Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9053, 2)}, /* Sierra Wireless Modem NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9053, 3)}, /* Sierra Wireless Modem Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9054, 0)}, /* Sierra Wireless Modem Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9054, 2)}, /* Sierra Wireless Modem NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9054, 3)}, /* Sierra Wireless Modem Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9055, 0)}, /* Netgear AirCard 341U Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9055, 2)}, /* Netgear AirCard 341U NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9055, 3)}, /* Netgear AirCard 341U Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9056, 0)}, /* Sierra Wireless Modem Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9056, 2)}, /* Sierra Wireless Modem NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9056, 3)}, /* Sierra Wireless Modem Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9060, 0)}, /* Sierra Wireless Modem Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9060, 2)}, /* Sierra Wireless Modem NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9060, 3)}, /* Sierra Wireless Modem Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9061, 0)}, /* Sierra Wireless Modem Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9061, 2)}, /* Sierra Wireless Modem NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9061, 3)}, /* Sierra Wireless Modem Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a2, 0)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a2, 2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a2, 3)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a3, 0)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a3, 2)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a3, 3)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a4, 0)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a4, 2)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a4, 3)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a8, 0)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a8, 2)}, /* 
Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a8, 3)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card Modem */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a9, 0)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card Device Management */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a9, 2)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card NMEA */ - {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a9, 3)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card Modem */ { } /* Terminating entry */ }; diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c index 5aaa2b67511..8894665cd61 100644 --- a/drivers/usb/serial/sierra.c +++ b/drivers/usb/serial/sierra.c @@ -58,7 +58,6 @@ struct sierra_intf_private { spinlock_t susp_lock; unsigned int suspended:1; int in_flight; - unsigned int open_ports; }; static int sierra_set_power_state(struct usb_device *udev, __u16 swiState) @@ -282,21 +281,17 @@ static const struct usb_device_id id_table[] = { /* Sierra Wireless HSPA Non-Composite Device */ { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x6892, 0xFF, 0xFF, 0xFF)}, { USB_DEVICE(0x1199, 0x6893) }, /* Sierra Wireless Device */ - /* Sierra Wireless Direct IP modems */ - { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68A3, 0xFF, 0xFF, 0xFF), - .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist - }, - { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68AA, 0xFF, 0xFF, 0xFF), + { USB_DEVICE(0x1199, 0x68A3), /* Sierra Wireless Direct IP modems */ .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist }, /* AT&T Direct IP LTE modems */ { USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68AA, 0xFF, 0xFF, 0xFF), .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist }, - /* Airprime/Sierra Wireless Direct IP modems */ - { USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68A3, 0xFF, 0xFF, 0xFF), + { USB_DEVICE(0x0f3d, 0x68A3), /* Airprime/Sierra Wireless Direct IP modems */ .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist }, + { USB_DEVICE(0x413C, 0x08133) }, /* Dell Computer Corp. 
Wireless 5720 VZW Mobile Broadband (EVDO Rev-A) Minicard GPS Port */ { } }; @@ -773,7 +768,6 @@ static void sierra_close(struct usb_serial_port *port) struct usb_serial *serial = port->serial; struct sierra_port_private *portdata; struct sierra_intf_private *intfdata = port->serial->private; - struct urb *urb; portdata = usb_get_serial_port_data(port); @@ -782,6 +776,7 @@ static void sierra_close(struct usb_serial_port *port) mutex_lock(&serial->disc_mutex); if (!serial->disconnected) { + serial->interface->needs_remote_wakeup = 0; /* odd error handling due to pm counters */ if (!usb_autopm_get_interface(serial->interface)) sierra_send_setup(port); @@ -792,22 +787,8 @@ static void sierra_close(struct usb_serial_port *port) mutex_unlock(&serial->disc_mutex); spin_lock_irq(&intfdata->susp_lock); portdata->opened = 0; - if (--intfdata->open_ports == 0) - serial->interface->needs_remote_wakeup = 0; spin_unlock_irq(&intfdata->susp_lock); - for (;;) { - urb = usb_get_from_anchor(&portdata->delayed); - if (!urb) - break; - kfree(urb->transfer_buffer); - usb_free_urb(urb); - usb_autopm_put_interface_async(serial->interface); - spin_lock(&portdata->lock); - portdata->outstanding_urbs--; - spin_unlock(&portdata->lock); - } - sierra_stop_rx_urbs(port); for (i = 0; i < portdata->num_in_urbs; i++) { sierra_release_urb(portdata->in_urbs[i]); @@ -844,29 +825,23 @@ static int sierra_open(struct tty_struct *tty, struct usb_serial_port *port) usb_sndbulkpipe(serial->dev, endpoint) | USB_DIR_IN); err = sierra_submit_rx_urbs(port, GFP_KERNEL); - if (err) - goto err_submit; - + if (err) { + /* get rid of everything as in close */ + sierra_close(port); + /* restore balance for autopm */ + if (!serial->disconnected) + usb_autopm_put_interface(serial->interface); + return err; + } sierra_send_setup(port); + serial->interface->needs_remote_wakeup = 1; spin_lock_irq(&intfdata->susp_lock); portdata->opened = 1; - if (++intfdata->open_ports == 1) - serial->interface->needs_remote_wakeup = 1; spin_unlock_irq(&intfdata->susp_lock); usb_autopm_put_interface(serial->interface); return 0; - -err_submit: - sierra_stop_rx_urbs(port); - - for (i = 0; i < portdata->num_in_urbs; i++) { - sierra_release_urb(portdata->in_urbs[i]); - portdata->in_urbs[i] = NULL; - } - - return err; } @@ -962,7 +937,6 @@ static int sierra_port_remove(struct usb_serial_port *port) struct sierra_port_private *portdata; portdata = usb_get_serial_port_data(port); - usb_set_serial_port_data(port, NULL); kfree(portdata); return 0; @@ -979,8 +953,6 @@ static void stop_read_write_urbs(struct usb_serial *serial) for (i = 0; i < serial->num_ports; ++i) { port = serial->port[i]; portdata = usb_get_serial_port_data(port); - if (!portdata) - continue; sierra_stop_rx_urbs(port); usb_kill_anchored_urbs(&portdata->active); } @@ -1023,9 +995,6 @@ static int sierra_resume(struct usb_serial *serial) port = serial->port[i]; portdata = usb_get_serial_port_data(port); - if (!portdata) - continue; - while ((urb = usb_get_from_anchor(&portdata->delayed))) { usb_anchor_urb(urb, &portdata->active); intfdata->in_flight++; @@ -1033,12 +1002,8 @@ static int sierra_resume(struct usb_serial *serial) if (err < 0) { intfdata->in_flight--; usb_unanchor_urb(urb); - kfree(urb->transfer_buffer); - usb_free_urb(urb); - spin_lock(&portdata->lock); - portdata->outstanding_urbs--; - spin_unlock(&portdata->lock); - continue; + usb_scuttle_anchored_urbs(&portdata->delayed); + break; } } diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c index 
80d689f0fda..5f6b1ff9d29 100644 --- a/drivers/usb/serial/usb-serial.c +++ b/drivers/usb/serial/usb-serial.c @@ -778,39 +778,29 @@ static int usb_serial_probe(struct usb_interface *interface, if (usb_endpoint_is_bulk_in(endpoint)) { /* we found a bulk in endpoint */ dev_dbg(ddev, "found bulk in on endpoint %d\n", i); - if (num_bulk_in < MAX_NUM_PORTS) { - bulk_in_endpoint[num_bulk_in] = endpoint; - ++num_bulk_in; - } + bulk_in_endpoint[num_bulk_in] = endpoint; + ++num_bulk_in; } if (usb_endpoint_is_bulk_out(endpoint)) { /* we found a bulk out endpoint */ dev_dbg(ddev, "found bulk out on endpoint %d\n", i); - if (num_bulk_out < MAX_NUM_PORTS) { - bulk_out_endpoint[num_bulk_out] = endpoint; - ++num_bulk_out; - } + bulk_out_endpoint[num_bulk_out] = endpoint; + ++num_bulk_out; } if (usb_endpoint_is_int_in(endpoint)) { /* we found a interrupt in endpoint */ dev_dbg(ddev, "found interrupt in on endpoint %d\n", i); - if (num_interrupt_in < MAX_NUM_PORTS) { - interrupt_in_endpoint[num_interrupt_in] = - endpoint; - ++num_interrupt_in; - } + interrupt_in_endpoint[num_interrupt_in] = endpoint; + ++num_interrupt_in; } if (usb_endpoint_is_int_out(endpoint)) { /* we found an interrupt out endpoint */ dev_dbg(ddev, "found interrupt out on endpoint %d\n", i); - if (num_interrupt_out < MAX_NUM_PORTS) { - interrupt_out_endpoint[num_interrupt_out] = - endpoint; - ++num_interrupt_out; - } + interrupt_out_endpoint[num_interrupt_out] = endpoint; + ++num_interrupt_out; } } @@ -833,10 +823,8 @@ static int usb_serial_probe(struct usb_interface *interface, if (usb_endpoint_is_int_in(endpoint)) { /* we found a interrupt in endpoint */ dev_dbg(ddev, "found interrupt in for Prolific device on separate interface\n"); - if (num_interrupt_in < MAX_NUM_PORTS) { - interrupt_in_endpoint[num_interrupt_in] = endpoint; - ++num_interrupt_in; - } + interrupt_in_endpoint[num_interrupt_in] = endpoint; + ++num_interrupt_in; } } } @@ -876,11 +864,6 @@ static int usb_serial_probe(struct usb_interface *interface, num_ports = type->num_ports; } - if (num_ports > MAX_NUM_PORTS) { - dev_warn(ddev, "too many ports requested: %d\n", num_ports); - num_ports = MAX_NUM_PORTS; - } - serial->num_ports = num_ports; serial->num_bulk_in = num_bulk_in; serial->num_bulk_out = num_bulk_out; @@ -1384,12 +1367,10 @@ static int usb_serial_register(struct usb_serial_driver *driver) static void usb_serial_deregister(struct usb_serial_driver *device) { pr_info("USB Serial deregistering driver %s\n", device->description); - mutex_lock(&table_lock); list_del(&device->driver_list); - mutex_unlock(&table_lock); - usb_serial_bus_deregister(device); + mutex_unlock(&table_lock); } /** diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c index 36f6b6a5690..db0cf536de1 100644 --- a/drivers/usb/serial/usb_wwan.c +++ b/drivers/usb/serial/usb_wwan.c @@ -228,10 +228,8 @@ int usb_wwan_write(struct tty_struct *tty, struct usb_serial_port *port, usb_pipeendpoint(this_urb->pipe), i); err = usb_autopm_get_interface_async(port->serial->interface); - if (err < 0) { - clear_bit(i, &portdata->out_busy); + if (err < 0) break; - } /* send the data */ memcpy(this_urb->transfer_buffer, buf, todo); @@ -388,14 +386,6 @@ int usb_wwan_open(struct tty_struct *tty, struct usb_serial_port *port) portdata = usb_get_serial_port_data(port); intfdata = serial->private; - if (port->interrupt_in_urb) { - err = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL); - if (err) { - dev_dbg(&port->dev, "%s: submit int urb failed: %d\n", - __func__, err); - } - } - /* Start reading 
from the IN endpoint */ for (i = 0; i < N_IN_URB; i++) { urb = portdata->in_urbs[i]; @@ -422,26 +412,12 @@ int usb_wwan_open(struct tty_struct *tty, struct usb_serial_port *port) } EXPORT_SYMBOL(usb_wwan_open); -static void unbusy_queued_urb(struct urb *urb, - struct usb_wwan_port_private *portdata) -{ - int i; - - for (i = 0; i < N_OUT_URB; i++) { - if (urb == portdata->out_urbs[i]) { - clear_bit(i, &portdata->out_busy); - break; - } - } -} - void usb_wwan_close(struct usb_serial_port *port) { int i; struct usb_serial *serial = port->serial; struct usb_wwan_port_private *portdata; struct usb_wwan_intf_private *intfdata = port->serial->private; - struct urb *urb; portdata = usb_get_serial_port_data(port); @@ -450,19 +426,10 @@ void usb_wwan_close(struct usb_serial_port *port) portdata->opened = 0; spin_unlock_irq(&intfdata->susp_lock); - for (;;) { - urb = usb_get_from_anchor(&portdata->delayed); - if (!urb) - break; - unbusy_queued_urb(urb, portdata); - usb_autopm_put_interface_async(serial->interface); - } - for (i = 0; i < N_IN_URB; i++) usb_kill_urb(portdata->in_urbs[i]); for (i = 0; i < N_OUT_URB; i++) usb_kill_urb(portdata->out_urbs[i]); - usb_kill_urb(port->interrupt_in_urb); /* balancing - important as an error cannot be handled*/ usb_autopm_get_interface_no_resume(serial->interface); @@ -500,11 +467,9 @@ int usb_wwan_port_probe(struct usb_serial_port *port) struct usb_wwan_port_private *portdata; struct urb *urb; u8 *buffer; + int err; int i; - if (!port->bulk_in_size || !port->bulk_out_size) - return -ENODEV; - portdata = kzalloc(sizeof(*portdata), GFP_KERNEL); if (!portdata) return -ENOMEM; @@ -512,6 +477,9 @@ int usb_wwan_port_probe(struct usb_serial_port *port) init_usb_anchor(&portdata->delayed); for (i = 0; i < N_IN_URB; i++) { + if (!port->bulk_in_size) + break; + buffer = (u8 *)__get_free_page(GFP_KERNEL); if (!buffer) goto bail_out_error; @@ -525,6 +493,9 @@ int usb_wwan_port_probe(struct usb_serial_port *port) } for (i = 0; i < N_OUT_URB; i++) { + if (!port->bulk_out_size) + break; + buffer = kmalloc(OUT_BUFLEN, GFP_KERNEL); if (!buffer) goto bail_out_error2; @@ -539,6 +510,13 @@ int usb_wwan_port_probe(struct usb_serial_port *port) usb_set_serial_port_data(port, portdata); + if (port->interrupt_in_urb) { + err = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL); + if (err) + dev_dbg(&port->dev, "%s: submit irq_in urb failed %d\n", + __func__, err); + } + return 0; bail_out_error2: @@ -606,29 +584,44 @@ static void stop_read_write_urbs(struct usb_serial *serial) int usb_wwan_suspend(struct usb_serial *serial, pm_message_t message) { struct usb_wwan_intf_private *intfdata = serial->private; + int b; - spin_lock_irq(&intfdata->susp_lock); if (PMSG_IS_AUTO(message)) { - if (intfdata->in_flight) { - spin_unlock_irq(&intfdata->susp_lock); + spin_lock_irq(&intfdata->susp_lock); + b = intfdata->in_flight; + spin_unlock_irq(&intfdata->susp_lock); + + if (b) return -EBUSY; - } } + + spin_lock_irq(&intfdata->susp_lock); intfdata->suspended = 1; spin_unlock_irq(&intfdata->susp_lock); - stop_read_write_urbs(serial); return 0; } EXPORT_SYMBOL(usb_wwan_suspend); -static int play_delayed(struct usb_serial_port *port) +static void unbusy_queued_urb(struct urb *urb, struct usb_wwan_port_private *portdata) +{ + int i; + + for (i = 0; i < N_OUT_URB; i++) { + if (urb == portdata->out_urbs[i]) { + clear_bit(i, &portdata->out_busy); + break; + } + } +} + +static void play_delayed(struct usb_serial_port *port) { struct usb_wwan_intf_private *data; struct usb_wwan_port_private *portdata; 
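The usb_wwan.c hunk above restores a usb_wwan_suspend() that samples intfdata->in_flight in one locked section, drops the lock, and re-takes it to set intfdata->suspended, instead of keeping the busy check and the suspend transition inside a single spin_lock_irq() section as in the code being reverted. A standalone sketch of the two shapes (separate from the patch; a pthread mutex stands in for the spinlock and all names are invented for illustration):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int in_flight;		/* work items currently submitted */
static bool suspended;

/* Check under the lock, then act under a second lock section. */
static int try_suspend_split(void)
{
	int busy;

	pthread_mutex_lock(&lock);
	busy = in_flight;
	pthread_mutex_unlock(&lock);
	if (busy)
		return -1;	/* -EBUSY */

	/* Window: another thread may submit new work right here. */

	pthread_mutex_lock(&lock);
	suspended = true;
	pthread_mutex_unlock(&lock);
	return 0;
}

/* One critical section covers both the check and the transition. */
static int try_suspend_single(void)
{
	pthread_mutex_lock(&lock);
	if (in_flight) {
		pthread_mutex_unlock(&lock);
		return -1;	/* -EBUSY */
	}
	suspended = true;
	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	printf("split: %d, single: %d\n",
	       try_suspend_split(), try_suspend_single());
	return 0;
}

With the single critical section there is no window between the busy check and the state change in which another thread can slip new work in while the flag is still clear.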
struct urb *urb; - int err = 0; + int err; portdata = usb_get_serial_port_data(port); data = port->serial->private; @@ -645,8 +638,6 @@ static int play_delayed(struct usb_serial_port *port) break; } } - - return err; } int usb_wwan_resume(struct usb_serial *serial) @@ -656,51 +647,54 @@ int usb_wwan_resume(struct usb_serial *serial) struct usb_wwan_intf_private *intfdata = serial->private; struct usb_wwan_port_private *portdata; struct urb *urb; - int err; - int err_count = 0; + int err = 0; + + /* get the interrupt URBs resubmitted unconditionally */ + for (i = 0; i < serial->num_ports; i++) { + port = serial->port[i]; + if (!port->interrupt_in_urb) { + dev_dbg(&port->dev, "%s: No interrupt URB for port\n", __func__); + continue; + } + err = usb_submit_urb(port->interrupt_in_urb, GFP_NOIO); + dev_dbg(&port->dev, "Submitted interrupt URB for port (result %d)\n", err); + if (err < 0) { + dev_err(&port->dev, "%s: Error %d for interrupt URB\n", + __func__, err); + goto err_out; + } + } - spin_lock_irq(&intfdata->susp_lock); for (i = 0; i < serial->num_ports; i++) { /* walk all ports */ port = serial->port[i]; portdata = usb_get_serial_port_data(port); /* skip closed ports */ - if (!portdata || !portdata->opened) + spin_lock_irq(&intfdata->susp_lock); + if (!portdata || !portdata->opened) { + spin_unlock_irq(&intfdata->susp_lock); continue; - - if (port->interrupt_in_urb) { - err = usb_submit_urb(port->interrupt_in_urb, - GFP_ATOMIC); - if (err) { - dev_err(&port->dev, - "%s: submit int urb failed: %d\n", - __func__, err); - err_count++; - } } - err = play_delayed(port); - if (err) - err_count++; - for (j = 0; j < N_IN_URB; j++) { urb = portdata->in_urbs[j]; err = usb_submit_urb(urb, GFP_ATOMIC); if (err < 0) { dev_err(&port->dev, "%s: Error %d for bulk URB %d\n", __func__, err, i); - err_count++; + spin_unlock_irq(&intfdata->susp_lock); + goto err_out; } } + play_delayed(port); + spin_unlock_irq(&intfdata->susp_lock); } + spin_lock_irq(&intfdata->susp_lock); intfdata->suspended = 0; spin_unlock_irq(&intfdata->susp_lock); - - if (err_count) - return -EIO; - - return 0; +err_out: + return err; } EXPORT_SYMBOL(usb_wwan_resume); #endif diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c index 5e3dd9f87ff..347caad47a1 100644 --- a/drivers/usb/serial/whiteheat.c +++ b/drivers/usb/serial/whiteheat.c @@ -521,10 +521,6 @@ static void command_port_read_callback(struct urb *urb) dev_dbg(&urb->dev->dev, "%s - command_info is NULL, exiting.\n", __func__); return; } - if (!urb->actual_length) { - dev_dbg(&urb->dev->dev, "%s - empty response, exiting.\n", __func__); - return; - } if (status) { dev_dbg(&urb->dev->dev, "%s - nonzero urb status: %d\n", __func__, status); if (status != -ENOENT) @@ -545,8 +541,7 @@ static void command_port_read_callback(struct urb *urb) /* These are unsolicited reports from the firmware, hence no waiting command to wakeup */ dev_dbg(&urb->dev->dev, "%s - event received\n", __func__); - } else if ((data[0] == WHITEHEAT_GET_DTR_RTS) && - (urb->actual_length - 1 <= sizeof(command_info->result_buffer))) { + } else if (data[0] == WHITEHEAT_GET_DTR_RTS) { memcpy(command_info->result_buffer, &data[1], urb->actual_length - 1); command_info->command_finished = WHITEHEAT_CMD_COMPLETE; diff --git a/drivers/usb/serial/zte_ev.c b/drivers/usb/serial/zte_ev.c index d6a3fbd029b..eae2c873b39 100644 --- a/drivers/usb/serial/zte_ev.c +++ b/drivers/usb/serial/zte_ev.c @@ -273,16 +273,28 @@ static void zte_ev_usb_serial_close(struct usb_serial_port *port) } static const 
struct usb_device_id id_table[] = { - { USB_DEVICE(0x19d2, 0xffec) }, - { USB_DEVICE(0x19d2, 0xffee) }, + /* AC8710, AC8710T */ + { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xffff, 0xff, 0xff, 0xff) }, + /* AC8700 */ + { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xfffe, 0xff, 0xff, 0xff) }, + /* MG880 */ + { USB_DEVICE(0x19d2, 0xfffd) }, + { USB_DEVICE(0x19d2, 0xfffc) }, + { USB_DEVICE(0x19d2, 0xfffb) }, + /* AC8710_V3 */ { USB_DEVICE(0x19d2, 0xfff6) }, { USB_DEVICE(0x19d2, 0xfff7) }, { USB_DEVICE(0x19d2, 0xfff8) }, { USB_DEVICE(0x19d2, 0xfff9) }, - { USB_DEVICE(0x19d2, 0xfffb) }, - { USB_DEVICE(0x19d2, 0xfffc) }, - /* MG880 */ - { USB_DEVICE(0x19d2, 0xfffd) }, + { USB_DEVICE(0x19d2, 0xffee) }, + /* AC2716, MC2716 */ + { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xffed, 0xff, 0xff, 0xff) }, + /* AD3812 */ + { USB_DEVICE_AND_INTERFACE_INFO(0x19d2, 0xffeb, 0xff, 0xff, 0xff) }, + { USB_DEVICE(0x19d2, 0xffec) }, + { USB_DEVICE(0x05C6, 0x3197) }, + { USB_DEVICE(0x05C6, 0x6000) }, + { USB_DEVICE(0x05C6, 0x9008) }, { }, }; MODULE_DEVICE_TABLE(usb, id_table); diff --git a/drivers/usb/storage/shuttle_usbat.c b/drivers/usb/storage/shuttle_usbat.c index 008d805c3d2..4ef2a80728f 100644 --- a/drivers/usb/storage/shuttle_usbat.c +++ b/drivers/usb/storage/shuttle_usbat.c @@ -1851,7 +1851,7 @@ static int usbat_probe(struct usb_interface *intf, us->transport_name = "Shuttle USBAT"; us->transport = usbat_flash_transport; us->transport_reset = usb_stor_CB_reset; - us->max_lun = 0; + us->max_lun = 1; result = usb_stor_probe2(us); return result; diff --git a/drivers/usb/storage/transport.c b/drivers/usb/storage/transport.c index b1d815eb6d0..22c7d4360fa 100644 --- a/drivers/usb/storage/transport.c +++ b/drivers/usb/storage/transport.c @@ -1118,31 +1118,6 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us) */ if (result == USB_STOR_XFER_LONG) fake_sense = 1; - - /* - * Sometimes a device will mistakenly skip the data phase - * and go directly to the status phase without sending a - * zero-length packet. If we get a 13-byte response here, - * check whether it really is a CSW. 
- */ - if (result == USB_STOR_XFER_SHORT && - srb->sc_data_direction == DMA_FROM_DEVICE && - transfer_length - scsi_get_resid(srb) == - US_BULK_CS_WRAP_LEN) { - struct scatterlist *sg = NULL; - unsigned int offset = 0; - - if (usb_stor_access_xfer_buf((unsigned char *) bcs, - US_BULK_CS_WRAP_LEN, srb, &sg, - &offset, FROM_XFER_BUF) == - US_BULK_CS_WRAP_LEN && - bcs->Signature == - cpu_to_le32(US_BULK_CS_SIGN)) { - usb_stor_dbg(us, "Device skipped data phase\n"); - scsi_set_resid(srb, transfer_length); - goto skipped_data_phase; - } - } } /* See flow chart on pg 15 of the Bulk Only Transport spec for @@ -1178,7 +1153,6 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us) if (result != USB_STOR_XFER_GOOD) return USB_STOR_TRANSPORT_ERROR; - skipped_data_phase: /* check bulk status */ residue = le32_to_cpu(bcs->Residue); usb_stor_dbg(us, "Bulk Status S 0x%x T 0x%x R %u Stat 0x%x\n", diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h index 7f625306ea8..adbeb255616 100644 --- a/drivers/usb/storage/unusual_devs.h +++ b/drivers/usb/storage/unusual_devs.h @@ -101,12 +101,6 @@ UNUSUAL_DEV( 0x03f0, 0x4002, 0x0001, 0x0001, "PhotoSmart R707", USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_FIX_CAPACITY), -UNUSUAL_DEV( 0x03f3, 0x0001, 0x0000, 0x9999, - "Adaptec", - "USBConnect 2000", - USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init, - US_FL_SCM_MULT_TARG ), - /* Reported by Sebastian Kapfer <sebastian_kapfer@gmx.net> * and Olaf Hering <olh@suse.de> (different bcd's, same vendor/product) * for USB floppies that need the SINGLE_LUN enforcement. @@ -240,20 +234,6 @@ UNUSUAL_DEV( 0x0421, 0x0495, 0x0370, 0x0370, USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_MAX_SECTORS_64 ), -/* Reported by Daniele Forsi <dforsi@gmail.com> */ -UNUSUAL_DEV( 0x0421, 0x04b9, 0x0350, 0x0350, - "Nokia", - "5300", - USB_SC_DEVICE, USB_PR_DEVICE, NULL, - US_FL_MAX_SECTORS_64 ), - -/* Patch submitted by Victor A. 
Santos <victoraur.santos@gmail.com> */ -UNUSUAL_DEV( 0x0421, 0x05af, 0x0742, 0x0742, - "Nokia", - "305", - USB_SC_DEVICE, USB_PR_DEVICE, NULL, - US_FL_MAX_SECTORS_64), - /* Patch submitted by Mikhail Zolotaryov <lebon@lebon.org.ua> */ UNUSUAL_DEV( 0x0421, 0x06aa, 0x1110, 0x1110, "Nokia", @@ -747,12 +727,6 @@ UNUSUAL_DEV( 0x059b, 0x0001, 0x0100, 0x0100, USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_SINGLE_LUN ), -UNUSUAL_DEV( 0x059b, 0x0040, 0x0100, 0x0100, - "Iomega", - "Jaz USB Adapter", - USB_SC_DEVICE, USB_PR_DEVICE, NULL, - US_FL_SINGLE_LUN ), - /* Reported by <Hendryk.Pfeiffer@gmx.de> */ UNUSUAL_DEV( 0x059f, 0x0643, 0x0000, 0x0000, "LaCie", @@ -1125,18 +1099,6 @@ UNUSUAL_DEV( 0x0851, 0x1543, 0x0200, 0x0200, USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_NOT_LOCKABLE), -UNUSUAL_DEV( 0x085a, 0x0026, 0x0100, 0x0133, - "Xircom", - "PortGear USB-SCSI (Mac USB Dock)", - USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init, - US_FL_SCM_MULT_TARG ), - -UNUSUAL_DEV( 0x085a, 0x0028, 0x0100, 0x0133, - "Xircom", - "PortGear USB to SCSI Converter", - USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init, - US_FL_SCM_MULT_TARG ), - /* Submitted by Jan De Luyck <lkml@kcore.org> */ UNUSUAL_DEV( 0x08bd, 0x1100, 0x0000, 0x0000, "CITIZEN", @@ -1969,14 +1931,6 @@ UNUSUAL_DEV( 0x152d, 0x2329, 0x0100, 0x0100, USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_IGNORE_RESIDUE | US_FL_SANE_SENSE ), -/* Entrega Technologies U1-SC25 (later Xircom PortGear PGSCSI) - * and Mac USB Dock USB-SCSI */ -UNUSUAL_DEV( 0x1645, 0x0007, 0x0100, 0x0133, - "Entrega Technologies", - "USB to SCSI Converter", - USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init, - US_FL_SCM_MULT_TARG ), - /* Reported by Robert Schedel <r.schedel@yahoo.de> * Note: this is a 'super top' device like the above 14cd/6600 device */ UNUSUAL_DEV( 0x1652, 0x6600, 0x0201, 0x0201, @@ -1999,12 +1953,6 @@ UNUSUAL_DEV( 0x177f, 0x0400, 0x0000, 0x0000, USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_BULK_IGNORE_TAG | US_FL_MAX_SECTORS_64 ), -UNUSUAL_DEV( 0x1822, 0x0001, 0x0000, 0x9999, - "Ariston Technologies", - "iConnect USB to SCSI adapter", - USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init, - US_FL_SCM_MULT_TARG ), - /* Reported by Hans de Goede <hdegoede@redhat.com> * These Appotech controllers are found in Picture Frames, they provide a * (buggy) emulation of a cdrom drive which contains the windows software diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 22080eb6aff..6f3fbc48a6c 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -138,12 +138,12 @@ static bool is_invalid_reserved_pfn(unsigned long pfn) if (pfn_valid(pfn)) { bool reserved; struct page *tail = pfn_to_page(pfn); - struct page *head = compound_head(tail); + struct page *head = compound_trans_head(tail); reserved = !!(PageReserved(head)); if (head != tail) { /* * "head" is not a dangling pointer - * (compound_head takes care of that) + * (compound_trans_head takes care of that) * but the hugepage may have been split * from under us (and we may not hold a * reference count on the head page so it can diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index c7fdabd0e5d..d6a518ce4d6 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -513,13 +513,9 @@ static int get_rx_bufs(struct vhost_virtqueue *vq, r = -ENOBUFS; goto err; } - r = vhost_get_vq_desc(vq->dev, vq, vq->iov + seg, + d = vhost_get_vq_desc(vq->dev, vq, vq->iov + seg, ARRAY_SIZE(vq->iov) - seg, &out, &in, log, log_num); - if (unlikely(r < 0)) - goto err; - - d = r; if (d == 
vq->num) { r = 0; goto err; @@ -544,12 +540,6 @@ static int get_rx_bufs(struct vhost_virtqueue *vq, *iovcount = seg; if (unlikely(log)) *log_num = nlogs; - - /* Detect overrun */ - if (unlikely(datalen > 0)) { - r = UIO_MAXIOV + 1; - goto err; - } return headcount; err: vhost_discard_vq_desc(vq, headcount); @@ -605,14 +595,6 @@ static void handle_rx(struct vhost_net *net) /* On error, stop handling until the next kick. */ if (unlikely(headcount < 0)) break; - /* On overrun, truncate and discard */ - if (unlikely(headcount > UIO_MAXIOV)) { - msg.msg_iovlen = 1; - err = sock->ops->recvmsg(NULL, sock, &msg, - 1, MSG_DONTWAIT | MSG_TRUNC); - pr_debug("Discarded rx packet: len %zd\n", sock_len); - continue; - } /* OK, now we need to know about added descriptors. */ if (!headcount) { if (unlikely(vhost_enable_notify(&net->dev, vq))) { diff --git a/drivers/video/aty/mach64_accel.c b/drivers/video/aty/mach64_accel.c index 182bd680141..e45833ce975 100644 --- a/drivers/video/aty/mach64_accel.c +++ b/drivers/video/aty/mach64_accel.c @@ -4,7 +4,6 @@ */ #include <linux/delay.h> -#include <asm/unaligned.h> #include <linux/fb.h> #include <video/mach64.h> #include "atyfb.h" @@ -420,7 +419,7 @@ void atyfb_imageblit(struct fb_info *info, const struct fb_image *image) u32 *pbitmap, dwords = (src_bytes + 3) / 4; for (pbitmap = (u32*)(image->data); dwords; dwords--, pbitmap++) { wait_for_fifo(1, par); - aty_st_le32(HOST_DATA0, get_unaligned_le32(pbitmap), par); + aty_st_le32(HOST_DATA0, le32_to_cpup(pbitmap), par); } } diff --git a/drivers/video/aty/mach64_cursor.c b/drivers/video/aty/mach64_cursor.c index 0fe02e22d9a..95ec042ddbf 100644 --- a/drivers/video/aty/mach64_cursor.c +++ b/drivers/video/aty/mach64_cursor.c @@ -5,7 +5,6 @@ #include <linux/fb.h> #include <linux/init.h> #include <linux/string.h> -#include "../fb_draw.h" #include <asm/io.h> @@ -158,33 +157,24 @@ static int atyfb_cursor(struct fb_info *info, struct fb_cursor *cursor) for (i = 0; i < height; i++) { for (j = 0; j < width; j++) { - u16 l = 0xaaaa; b = *src++; m = *msk++; switch (cursor->rop) { case ROP_XOR: // Upper 4 bits of mask data - l = cursor_bits_lookup[(b ^ m) >> 4] | + fb_writeb(cursor_bits_lookup[(b ^ m) >> 4], dst++); // Lower 4 bits of mask - (cursor_bits_lookup[(b ^ m) & 0x0f] << 8); + fb_writeb(cursor_bits_lookup[(b ^ m) & 0x0f], + dst++); break; case ROP_COPY: // Upper 4 bits of mask data - l = cursor_bits_lookup[(b & m) >> 4] | + fb_writeb(cursor_bits_lookup[(b & m) >> 4], dst++); // Lower 4 bits of mask - (cursor_bits_lookup[(b & m) & 0x0f] << 8); + fb_writeb(cursor_bits_lookup[(b & m) & 0x0f], + dst++); break; } - /* - * If cursor size is not a multiple of 8 characters - * we must pad it with transparent pattern (0xaaaa). 
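/*
 * Illustrative sketch, not from this patch: the deleted lines above pad
 * the mach64 hardware cursor with the "transparent" 2bpp pattern 0xaaaa
 * whenever the cursor image width is not a multiple of 8 pixels.  Each
 * source byte expands to 16 bits of cursor data (2 bits per pixel), so
 * only the low (width % 8) * 2 bits of the last word carry real pixels.
 * The helper below restates that masking; comp() mirrors the fbdev helper
 * of the same name, the function name is hypothetical.
 */
#include <stdint.h>

static uint16_t comp(uint16_t set, uint16_t unset, uint16_t mask)
{
	return (set & mask) | (unset & ~mask);
}

/* Pad the last 2bpp cursor word when 'width' is not a multiple of 8. */
static uint16_t pad_cursor_word(uint16_t word, unsigned int width)
{
	const uint16_t transparent = 0xaaaa;
	uint16_t mask = (uint16_t)((1u << ((width & 7) * 2)) - 1);

	if ((width & 7) == 0)
		return word;                 /* fully populated, nothing to pad */
	return comp(word, transparent, mask);
}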
- */ - if ((j + 1) * 8 > cursor->image.width) { - l = comp(l, 0xaaaa, - (1 << ((cursor->image.width & 7) * 2)) - 1); - } - fb_writeb(l & 0xff, dst++); - fb_writeb(l >> 8, dst++); } dst += offset; } diff --git a/drivers/video/cfbcopyarea.c b/drivers/video/cfbcopyarea.c index bcb57235fcc..bb5a96b1645 100644 --- a/drivers/video/cfbcopyarea.c +++ b/drivers/video/cfbcopyarea.c @@ -43,22 +43,13 @@ */ static void -bitcpy(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, - const unsigned long __iomem *src, unsigned src_idx, int bits, +bitcpy(struct fb_info *p, unsigned long __iomem *dst, int dst_idx, + const unsigned long __iomem *src, int src_idx, int bits, unsigned n, u32 bswapmask) { unsigned long first, last; int const shift = dst_idx-src_idx; - -#if 0 - /* - * If you suspect bug in this function, compare it with this simple - * memmove implementation. - */ - fb_memmove((char *)dst + ((dst_idx & (bits - 1))) / 8, - (char *)src + ((src_idx & (bits - 1))) / 8, n / 8); - return; -#endif + int left, right; first = fb_shifted_pixels_mask_long(p, dst_idx, bswapmask); last = ~fb_shifted_pixels_mask_long(p, (dst_idx+n) % bits, bswapmask); @@ -107,8 +98,9 @@ bitcpy(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, unsigned long d0, d1; int m; - int const left = shift & (bits - 1); - int const right = -shift & (bits - 1); + right = shift & (bits - 1); + left = -shift & (bits - 1); + bswapmask &= shift; if (dst_idx+n <= bits) { // Single destination word @@ -118,15 +110,15 @@ bitcpy(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, d0 = fb_rev_pixels_in_long(d0, bswapmask); if (shift > 0) { // Single source word - d0 <<= left; + d0 >>= right; } else if (src_idx+n <= bits) { // Single source word - d0 >>= right; + d0 <<= left; } else { // 2 source words d1 = FB_READL(src + 1); d1 = fb_rev_pixels_in_long(d1, bswapmask); - d0 = d0 >> right | d1 << left; + d0 = d0<<left | d1>>right; } d0 = fb_rev_pixels_in_long(d0, bswapmask); FB_WRITEL(comp(d0, FB_READL(dst), first), dst); @@ -143,59 +135,60 @@ bitcpy(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, if (shift > 0) { // Single source word d1 = d0; - d0 <<= left; + d0 >>= right; + dst++; n -= bits - dst_idx; } else { // 2 source words d1 = FB_READL(src++); d1 = fb_rev_pixels_in_long(d1, bswapmask); - d0 = d0 >> right | d1 << left; + d0 = d0<<left | d1>>right; + dst++; n -= bits - dst_idx; } d0 = fb_rev_pixels_in_long(d0, bswapmask); FB_WRITEL(comp(d0, FB_READL(dst), first), dst); d0 = d1; - dst++; // Main chunk m = n % bits; n /= bits; while ((n >= 4) && !bswapmask) { d1 = FB_READL(src++); - FB_WRITEL(d0 >> right | d1 << left, dst++); + FB_WRITEL(d0 << left | d1 >> right, dst++); d0 = d1; d1 = FB_READL(src++); - FB_WRITEL(d0 >> right | d1 << left, dst++); + FB_WRITEL(d0 << left | d1 >> right, dst++); d0 = d1; d1 = FB_READL(src++); - FB_WRITEL(d0 >> right | d1 << left, dst++); + FB_WRITEL(d0 << left | d1 >> right, dst++); d0 = d1; d1 = FB_READL(src++); - FB_WRITEL(d0 >> right | d1 << left, dst++); + FB_WRITEL(d0 << left | d1 >> right, dst++); d0 = d1; n -= 4; } while (n--) { d1 = FB_READL(src++); d1 = fb_rev_pixels_in_long(d1, bswapmask); - d0 = d0 >> right | d1 << left; + d0 = d0 << left | d1 >> right; d0 = fb_rev_pixels_in_long(d0, bswapmask); FB_WRITEL(d0, dst++); d0 = d1; } // Trailing bits - if (m) { - if (m <= bits - right) { + if (last) { + if (m <= right) { // Single source word - d0 >>= right; + d0 <<= left; } else { // 2 source words d1 = FB_READL(src); d1 = fb_rev_pixels_in_long(d1, 
bswapmask); - d0 = d0 >> right | d1 << left; + d0 = d0<<left | d1>>right; } d0 = fb_rev_pixels_in_long(d0, bswapmask); FB_WRITEL(comp(d0, FB_READL(dst), last), dst); @@ -209,46 +202,43 @@ bitcpy(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, */ static void -bitcpy_rev(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, - const unsigned long __iomem *src, unsigned src_idx, int bits, +bitcpy_rev(struct fb_info *p, unsigned long __iomem *dst, int dst_idx, + const unsigned long __iomem *src, int src_idx, int bits, unsigned n, u32 bswapmask) { unsigned long first, last; int shift; -#if 0 - /* - * If you suspect bug in this function, compare it with this simple - * memmove implementation. - */ - fb_memmove((char *)dst + ((dst_idx & (bits - 1))) / 8, - (char *)src + ((src_idx & (bits - 1))) / 8, n / 8); - return; -#endif - - dst += (dst_idx + n - 1) / bits; - src += (src_idx + n - 1) / bits; - dst_idx = (dst_idx + n - 1) % bits; - src_idx = (src_idx + n - 1) % bits; + dst += (n-1)/bits; + src += (n-1)/bits; + if ((n-1) % bits) { + dst_idx += (n-1) % bits; + dst += dst_idx >> (ffs(bits) - 1); + dst_idx &= bits - 1; + src_idx += (n-1) % bits; + src += src_idx >> (ffs(bits) - 1); + src_idx &= bits - 1; + } shift = dst_idx-src_idx; - first = ~fb_shifted_pixels_mask_long(p, (dst_idx + 1) % bits, bswapmask); - last = fb_shifted_pixels_mask_long(p, (bits + dst_idx + 1 - n) % bits, bswapmask); + first = fb_shifted_pixels_mask_long(p, bits - 1 - dst_idx, bswapmask); + last = ~fb_shifted_pixels_mask_long(p, bits - 1 - ((dst_idx-n) % bits), + bswapmask); if (!shift) { // Same alignment for source and dest if ((unsigned long)dst_idx+1 >= n) { // Single word - if (first) - last &= first; - FB_WRITEL( comp( FB_READL(src), FB_READL(dst), last), dst); + if (last) + first &= last; + FB_WRITEL( comp( FB_READL(src), FB_READL(dst), first), dst); } else { // Multiple destination words // Leading bits - if (first) { + if (first != ~0UL) { FB_WRITEL( comp( FB_READL(src), FB_READL(dst), first), dst); dst--; src--; @@ -272,7 +262,7 @@ bitcpy_rev(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, FB_WRITEL(FB_READL(src--), dst--); // Trailing bits - if (last != -1UL) + if (last) FB_WRITEL( comp( FB_READL(src), FB_READL(dst), last), dst); } } else { @@ -280,28 +270,29 @@ bitcpy_rev(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, unsigned long d0, d1; int m; - int const left = shift & (bits-1); - int const right = -shift & (bits-1); + int const left = -shift & (bits-1); + int const right = shift & (bits-1); + bswapmask &= shift; if ((unsigned long)dst_idx+1 >= n) { // Single destination word - if (first) - last &= first; + if (last) + first &= last; d0 = FB_READL(src); if (shift < 0) { // Single source word - d0 >>= right; + d0 <<= left; } else if (1+(unsigned long)src_idx >= n) { // Single source word - d0 <<= left; + d0 >>= right; } else { // 2 source words d1 = FB_READL(src - 1); d1 = fb_rev_pixels_in_long(d1, bswapmask); - d0 = d0 << left | d1 >> right; + d0 = d0>>right | d1<<left; } d0 = fb_rev_pixels_in_long(d0, bswapmask); - FB_WRITEL(comp(d0, FB_READL(dst), last), dst); + FB_WRITEL(comp(d0, FB_READL(dst), first), dst); } else { // Multiple destination words /** We must always remember the last value read, because in case @@ -316,12 +307,12 @@ bitcpy_rev(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, if (shift < 0) { // Single source word d1 = d0; - d0 >>= right; + d0 <<= left; } else { // 2 source words d1 = FB_READL(src--); d1 = 
fb_rev_pixels_in_long(d1, bswapmask); - d0 = d0 << left | d1 >> right; + d0 = d0>>right | d1<<left; } d0 = fb_rev_pixels_in_long(d0, bswapmask); FB_WRITEL(comp(d0, FB_READL(dst), first), dst); @@ -334,39 +325,39 @@ bitcpy_rev(struct fb_info *p, unsigned long __iomem *dst, unsigned dst_idx, n /= bits; while ((n >= 4) && !bswapmask) { d1 = FB_READL(src--); - FB_WRITEL(d0 << left | d1 >> right, dst--); + FB_WRITEL(d0 >> right | d1 << left, dst--); d0 = d1; d1 = FB_READL(src--); - FB_WRITEL(d0 << left | d1 >> right, dst--); + FB_WRITEL(d0 >> right | d1 << left, dst--); d0 = d1; d1 = FB_READL(src--); - FB_WRITEL(d0 << left | d1 >> right, dst--); + FB_WRITEL(d0 >> right | d1 << left, dst--); d0 = d1; d1 = FB_READL(src--); - FB_WRITEL(d0 << left | d1 >> right, dst--); + FB_WRITEL(d0 >> right | d1 << left, dst--); d0 = d1; n -= 4; } while (n--) { d1 = FB_READL(src--); d1 = fb_rev_pixels_in_long(d1, bswapmask); - d0 = d0 << left | d1 >> right; + d0 = d0 >> right | d1 << left; d0 = fb_rev_pixels_in_long(d0, bswapmask); FB_WRITEL(d0, dst--); d0 = d1; } // Trailing bits - if (m) { - if (m <= bits - left) { + if (last) { + if (m <= left) { // Single source word - d0 <<= left; + d0 >>= right; } else { // 2 source words d1 = FB_READL(src); d1 = fb_rev_pixels_in_long(d1, bswapmask); - d0 = d0 << left | d1 >> right; + d0 = d0>>right | d1<<left; } d0 = fb_rev_pixels_in_long(d0, bswapmask); FB_WRITEL(comp(d0, FB_READL(dst), last), dst); @@ -380,9 +371,9 @@ void cfb_copyarea(struct fb_info *p, const struct fb_copyarea *area) u32 dx = area->dx, dy = area->dy, sx = area->sx, sy = area->sy; u32 height = area->height, width = area->width; unsigned long const bits_per_line = p->fix.line_length*8u; - unsigned long __iomem *base = NULL; + unsigned long __iomem *dst = NULL, *src = NULL; int bits = BITS_PER_LONG, bytes = bits >> 3; - unsigned dst_idx = 0, src_idx = 0, rev_copy = 0; + int dst_idx = 0, src_idx = 0, rev_copy = 0; u32 bswapmask = fb_compute_bswapmask(p); if (p->state != FBINFO_STATE_RUNNING) @@ -398,7 +389,7 @@ void cfb_copyarea(struct fb_info *p, const struct fb_copyarea *area) // split the base of the framebuffer into a long-aligned address and the // index of the first bit - base = (unsigned long __iomem *)((unsigned long)p->screen_base & ~(bytes-1)); + dst = src = (unsigned long __iomem *)((unsigned long)p->screen_base & ~(bytes-1)); dst_idx = src_idx = 8*((unsigned long)p->screen_base & (bytes-1)); // add offset of source and target area dst_idx += dy*bits_per_line + dx*p->var.bits_per_pixel; @@ -411,14 +402,20 @@ void cfb_copyarea(struct fb_info *p, const struct fb_copyarea *area) while (height--) { dst_idx -= bits_per_line; src_idx -= bits_per_line; - bitcpy_rev(p, base + (dst_idx / bits), dst_idx % bits, - base + (src_idx / bits), src_idx % bits, bits, + dst += dst_idx >> (ffs(bits) - 1); + dst_idx &= (bytes - 1); + src += src_idx >> (ffs(bits) - 1); + src_idx &= (bytes - 1); + bitcpy_rev(p, dst, dst_idx, src, src_idx, bits, width*p->var.bits_per_pixel, bswapmask); } } else { while (height--) { - bitcpy(p, base + (dst_idx / bits), dst_idx % bits, - base + (src_idx / bits), src_idx % bits, bits, + dst += dst_idx >> (ffs(bits) - 1); + dst_idx &= (bytes - 1); + src += src_idx >> (ffs(bits) - 1); + src_idx &= (bytes - 1); + bitcpy(p, dst, dst_idx, src, src_idx, bits, width*p->var.bits_per_pixel, bswapmask); dst_idx += bits_per_line; src_idx += bits_per_line; diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c index dbfe4eecf12..61b182bf32a 100644 --- 
a/drivers/video/console/bitblit.c +++ b/drivers/video/console/bitblit.c @@ -205,6 +205,7 @@ static void bit_putcs(struct vc_data *vc, struct fb_info *info, static void bit_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) { + int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; unsigned int cw = vc->vc_font.width; unsigned int ch = vc->vc_font.height; unsigned int rw = info->var.xres - (vc->vc_cols*cw); @@ -213,7 +214,7 @@ static void bit_clear_margins(struct vc_data *vc, struct fb_info *info, unsigned int bs = info->var.yres - bh; struct fb_fillrect region; - region.color = 0; + region.color = attr_bgcol_ec(bgshift, vc, info); region.rop = ROP_COPY; if (rw && !bottom_only) { diff --git a/drivers/video/console/fbcon_ccw.c b/drivers/video/console/fbcon_ccw.c index 5a3cbf6dff4..41b32ae23da 100644 --- a/drivers/video/console/fbcon_ccw.c +++ b/drivers/video/console/fbcon_ccw.c @@ -197,8 +197,9 @@ static void ccw_clear_margins(struct vc_data *vc, struct fb_info *info, unsigned int bh = info->var.xres - (vc->vc_rows*ch); unsigned int bs = vc->vc_rows*ch; struct fb_fillrect region; + int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; - region.color = 0; + region.color = attr_bgcol_ec(bgshift,vc,info); region.rop = ROP_COPY; if (rw && !bottom_only) { diff --git a/drivers/video/console/fbcon_cw.c b/drivers/video/console/fbcon_cw.c index e7ee44db4e9..a93670ef7f8 100644 --- a/drivers/video/console/fbcon_cw.c +++ b/drivers/video/console/fbcon_cw.c @@ -180,8 +180,9 @@ static void cw_clear_margins(struct vc_data *vc, struct fb_info *info, unsigned int bh = info->var.xres - (vc->vc_rows*ch); unsigned int rs = info->var.yres - rw; struct fb_fillrect region; + int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; - region.color = 0; + region.color = attr_bgcol_ec(bgshift,vc,info); region.rop = ROP_COPY; if (rw && !bottom_only) { diff --git a/drivers/video/console/fbcon_ud.c b/drivers/video/console/fbcon_ud.c index 19e3714abfe..ff0872c0498 100644 --- a/drivers/video/console/fbcon_ud.c +++ b/drivers/video/console/fbcon_ud.c @@ -227,8 +227,9 @@ static void ud_clear_margins(struct vc_data *vc, struct fb_info *info, unsigned int rw = info->var.xres - (vc->vc_cols*cw); unsigned int bh = info->var.yres - (vc->vc_rows*ch); struct fb_fillrect region; + int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; - region.color = 0; + region.color = attr_bgcol_ec(bgshift,vc,info); region.rop = ROP_COPY; if (rw && !bottom_only) { diff --git a/drivers/video/fb-puv3.c b/drivers/video/fb-puv3.c index 520112531eb..27fc956166f 100644 --- a/drivers/video/fb-puv3.c +++ b/drivers/video/fb-puv3.c @@ -18,10 +18,8 @@ #include <linux/fb.h> #include <linux/init.h> #include <linux/console.h> -#include <linux/mm.h> #include <asm/sizes.h> -#include <asm/pgtable.h> #include <mach/hardware.h> /* Platform_data reserved for unifb registers. 
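/*
 * Illustrative sketch, not from this patch: the fbcon hunks above restore
 * clearing of console margins with the real background colour instead of
 * hard-coded 0.  On a VGA-style attribute word the background index sits
 * above the foreground; with a 512-glyph font (vc_hi_font_mask set) one
 * attribute bit is taken for the ninth glyph bit, so the background field
 * starts one bit higher (shift 13 instead of 12).  The helper below is a
 * simplification of attr_bgcol_ec() and skips the colour-depth remapping
 * the real macro performs.
 */
#include <stdint.h>

static unsigned int erase_bgcol(uint16_t erase_char, int hi_font)
{
	int bgshift = hi_font ? 13 : 12;

	return (erase_char >> bgshift) & 0x0f;
}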
*/ diff --git a/drivers/video/matrox/matroxfb_accel.c b/drivers/video/matrox/matroxfb_accel.c index 0d5cb85d071..8335a6fe303 100644 --- a/drivers/video/matrox/matroxfb_accel.c +++ b/drivers/video/matrox/matroxfb_accel.c @@ -192,18 +192,10 @@ void matrox_cfbX_init(struct matrox_fb_info *minfo) minfo->accel.m_dwg_rect = M_DWG_TRAP | M_DWG_SOLID | M_DWG_ARZERO | M_DWG_SGNZERO | M_DWG_SHIFTZERO; if (isMilleniumII(minfo)) minfo->accel.m_dwg_rect |= M_DWG_TRANSC; minfo->accel.m_opmode = mopmode; - minfo->accel.m_access = maccess; - minfo->accel.m_pitch = mpitch; } EXPORT_SYMBOL(matrox_cfbX_init); -static void matrox_accel_restore_maccess(struct matrox_fb_info *minfo) -{ - mga_outl(M_MACCESS, minfo->accel.m_access); - mga_outl(M_PITCH, minfo->accel.m_pitch); -} - static void matrox_accel_bmove(struct matrox_fb_info *minfo, int vxres, int sy, int sx, int dy, int dx, int height, int width) { @@ -215,8 +207,7 @@ static void matrox_accel_bmove(struct matrox_fb_info *minfo, int vxres, int sy, CRITBEGIN if ((dy < sy) || ((dy == sy) && (dx <= sx))) { - mga_fifo(4); - matrox_accel_restore_maccess(minfo); + mga_fifo(2); mga_outl(M_DWGCTL, M_DWG_BITBLT | M_DWG_SHIFTZERO | M_DWG_SGNZERO | M_DWG_BFCOL | M_DWG_REPLACE); mga_outl(M_AR5, vxres); @@ -224,8 +215,7 @@ static void matrox_accel_bmove(struct matrox_fb_info *minfo, int vxres, int sy, start = sy*vxres+sx+curr_ydstorg(minfo); end = start+width; } else { - mga_fifo(5); - matrox_accel_restore_maccess(minfo); + mga_fifo(3); mga_outl(M_DWGCTL, M_DWG_BITBLT | M_DWG_SHIFTZERO | M_DWG_BFCOL | M_DWG_REPLACE); mga_outl(M_SGN, 5); mga_outl(M_AR5, -vxres); @@ -234,8 +224,7 @@ static void matrox_accel_bmove(struct matrox_fb_info *minfo, int vxres, int sy, start = end+width; dy += height-1; } - mga_fifo(6); - matrox_accel_restore_maccess(minfo); + mga_fifo(4); mga_outl(M_AR0, end); mga_outl(M_AR3, start); mga_outl(M_FXBNDRY, ((dx+width)<<16) | dx); @@ -257,8 +246,7 @@ static void matrox_accel_bmove_lin(struct matrox_fb_info *minfo, int vxres, CRITBEGIN if ((dy < sy) || ((dy == sy) && (dx <= sx))) { - mga_fifo(4); - matrox_accel_restore_maccess(minfo); + mga_fifo(2); mga_outl(M_DWGCTL, M_DWG_BITBLT | M_DWG_SHIFTZERO | M_DWG_SGNZERO | M_DWG_BFCOL | M_DWG_REPLACE); mga_outl(M_AR5, vxres); @@ -266,8 +254,7 @@ static void matrox_accel_bmove_lin(struct matrox_fb_info *minfo, int vxres, start = sy*vxres+sx+curr_ydstorg(minfo); end = start+width; } else { - mga_fifo(5); - matrox_accel_restore_maccess(minfo); + mga_fifo(3); mga_outl(M_DWGCTL, M_DWG_BITBLT | M_DWG_SHIFTZERO | M_DWG_BFCOL | M_DWG_REPLACE); mga_outl(M_SGN, 5); mga_outl(M_AR5, -vxres); @@ -276,8 +263,7 @@ static void matrox_accel_bmove_lin(struct matrox_fb_info *minfo, int vxres, start = end+width; dy += height-1; } - mga_fifo(7); - matrox_accel_restore_maccess(minfo); + mga_fifo(5); mga_outl(M_AR0, end); mga_outl(M_AR3, start); mga_outl(M_FXBNDRY, ((dx+width)<<16) | dx); @@ -312,8 +298,7 @@ static void matroxfb_accel_clear(struct matrox_fb_info *minfo, u_int32_t color, CRITBEGIN - mga_fifo(7); - matrox_accel_restore_maccess(minfo); + mga_fifo(5); mga_outl(M_DWGCTL, minfo->accel.m_dwg_rect | M_DWG_REPLACE); mga_outl(M_FCOL, color); mga_outl(M_FXBNDRY, ((sx + width) << 16) | sx); @@ -356,8 +341,7 @@ static void matroxfb_cfb4_clear(struct matrox_fb_info *minfo, u_int32_t bgx, width >>= 1; sx >>= 1; if (width) { - mga_fifo(7); - matrox_accel_restore_maccess(minfo); + mga_fifo(5); mga_outl(M_DWGCTL, minfo->accel.m_dwg_rect | M_DWG_REPLACE2); mga_outl(M_FCOL, bgx); mga_outl(M_FXBNDRY, ((sx + width) << 16) | sx); @@ 
-431,8 +415,7 @@ static void matroxfb_1bpp_imageblit(struct matrox_fb_info *minfo, u_int32_t fgx, CRITBEGIN - mga_fifo(5); - matrox_accel_restore_maccess(minfo); + mga_fifo(3); if (easy) mga_outl(M_DWGCTL, M_DWG_ILOAD | M_DWG_SGNZERO | M_DWG_SHIFTZERO | M_DWG_BMONOWF | M_DWG_LINEAR | M_DWG_REPLACE); else @@ -442,8 +425,7 @@ static void matroxfb_1bpp_imageblit(struct matrox_fb_info *minfo, u_int32_t fgx, fxbndry = ((xx + width - 1) << 16) | xx; mmio = minfo->mmio.vbase; - mga_fifo(8); - matrox_accel_restore_maccess(minfo); + mga_fifo(6); mga_writel(mmio, M_FXBNDRY, fxbndry); mga_writel(mmio, M_AR0, ar0); mga_writel(mmio, M_AR3, 0); diff --git a/drivers/video/matrox/matroxfb_base.h b/drivers/video/matrox/matroxfb_base.h index 89a8a89a5eb..11ed57bb704 100644 --- a/drivers/video/matrox/matroxfb_base.h +++ b/drivers/video/matrox/matroxfb_base.h @@ -307,8 +307,6 @@ struct matrox_accel_data { #endif u_int32_t m_dwg_rect; u_int32_t m_opmode; - u_int32_t m_access; - u_int32_t m_pitch; }; struct v4l2_queryctrl; @@ -698,7 +696,7 @@ void matroxfb_unregister_driver(struct matroxfb_driver* drv); #define mga_fifo(n) do {} while ((mga_inl(M_FIFOSTATUS) & 0xFF) < (n)) -#define WaitTillIdle() do { mga_inl(M_STATUS); do {} while (mga_inl(M_STATUS) & 0x10000); } while (0) +#define WaitTillIdle() do {} while (mga_inl(M_STATUS) & 0x10000) /* code speedup */ #ifdef CONFIG_FB_MATROX_MILLENIUM diff --git a/drivers/video/tgafb.c b/drivers/video/tgafb.c index a78ca6a0109..c9c8e5a1fde 100644 --- a/drivers/video/tgafb.c +++ b/drivers/video/tgafb.c @@ -188,8 +188,6 @@ tgafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info) if (var->xres_virtual != var->xres || var->yres_virtual != var->yres) return -EINVAL; - if (var->xres * var->yres * (var->bits_per_pixel >> 3) > info->fix.smem_len) - return -EINVAL; if (var->nonstd) return -EINVAL; if (1000000000 / var->pixclock > TGA_PLL_MAX_FREQ) @@ -270,7 +268,6 @@ tgafb_set_par(struct fb_info *info) par->yres = info->var.yres; par->pll_freq = pll_freq = 1000000000 / info->var.pixclock; par->bits_per_pixel = info->var.bits_per_pixel; - info->fix.line_length = par->xres * (par->bits_per_pixel >> 3); tga_type = par->tga_type; @@ -1145,57 +1142,222 @@ copyarea_line_32bpp(struct fb_info *info, u32 dy, u32 sy, __raw_writel(TGA_MODE_SBM_24BPP|TGA_MODE_SIMPLE, tga_regs+TGA_MODE_REG); } -/* The (almost) general case of backward copy in 8bpp mode. */ +/* The general case of forward copy in 8bpp mode. */ static inline void -copyarea_8bpp(struct fb_info *info, u32 dx, u32 dy, u32 sx, u32 sy, - u32 height, u32 width, u32 line_length, - const struct fb_copyarea *area) +copyarea_foreward_8bpp(struct fb_info *info, u32 dx, u32 dy, u32 sx, u32 sy, + u32 height, u32 width, u32 line_length) { struct tga_par *par = (struct tga_par *) info->par; - unsigned i, yincr; - int depos, sepos, backward, last_step, step; - u32 mask_last; - unsigned n32; + unsigned long i, copied, left; + unsigned long dpos, spos, dalign, salign, yincr; + u32 smask_first, dmask_first, dmask_last; + int pixel_shift, need_prime, need_second; + unsigned long n64, n32, xincr_first; void __iomem *tga_regs; void __iomem *tga_fb; - /* Do acceleration only if we are aligned on 8 pixels */ - if ((dx | sx | width) & 7) { - cfb_copyarea(info, area); - return; + yincr = line_length; + if (dy > sy) { + dy += height - 1; + sy += height - 1; + yincr = -yincr; } + /* Compute the offsets and alignments in the frame buffer. + More than anything else, these control how we do copies. 
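/*
 * Illustrative sketch, not from this patch: the code that follows derives
 * everything from byte offsets within the frame buffer.  dpos/spos are
 * rounded down to 8-byte boundaries, dalign/salign keep the discarded low
 * bits, and their difference programs the TGA PIXELSHIFT register (with a
 * priming step when the source alignment is larger than the destination
 * alignment).  The standalone function below reproduces just that
 * arithmetic; the struct and names are hypothetical.
 */
#include <stdint.h>

struct copy_align_sketch {
	unsigned long dpos, spos;       /* byte offsets, rounded to 8 */
	unsigned int  dalign, salign;   /* 0..7, position inside the octet */
	int           pixel_shift;      /* value for the PIXELSHIFT register */
	int           need_prime;       /* extra priming write needed? */
};

static struct copy_align_sketch
compute_copy_alignment(uint32_t dx, uint32_t dy, uint32_t sx, uint32_t sy,
		       uint32_t line_length)
{
	struct copy_align_sketch a;

	a.dpos = (unsigned long)dy * line_length + dx;
	a.spos = (unsigned long)sy * line_length + sx;
	a.dalign = a.dpos & 7;
	a.salign = a.spos & 7;
	a.dpos &= ~7ul;
	a.spos &= ~7ul;

	if (a.dalign >= a.salign)
		a.pixel_shift = (int)(a.dalign - a.salign);
	else
		a.pixel_shift = 8 - (int)(a.salign - a.dalign);

	a.need_prime = a.salign > a.dalign;
	if (a.need_prime)
		a.dpos -= 8;    /* one extra write primes the residue register */
	return a;
}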
*/ + dpos = dy * line_length + dx; + spos = sy * line_length + sx; + dalign = dpos & 7; + salign = spos & 7; + dpos &= -8; + spos &= -8; + + /* Compute the value for the PIXELSHIFT register. This controls + both non-co-aligned source and destination and copy direction. */ + if (dalign >= salign) + pixel_shift = dalign - salign; + else + pixel_shift = 8 - (salign - dalign); + + /* Figure out if we need an additional priming step for the + residue register. */ + need_prime = (salign > dalign); + if (need_prime) + dpos -= 8; + + /* Begin by copying the leading unaligned destination. Copy enough + to make the next destination address 32-byte aligned. */ + copied = 32 - (dalign + (dpos & 31)); + if (copied == 32) + copied = 0; + xincr_first = (copied + 7) & -8; + smask_first = dmask_first = (1ul << copied) - 1; + smask_first <<= salign; + dmask_first <<= dalign + need_prime*8; + if (need_prime && copied > 24) + copied -= 8; + left = width - copied; + + /* Care for small copies. */ + if (copied > width) { + u32 t; + t = (1ul << width) - 1; + t <<= dalign + need_prime*8; + dmask_first &= t; + left = 0; + } + + /* Attempt to use 64-byte copies. This is only possible if the + source and destination are co-aligned at 64 bytes. */ + n64 = need_second = 0; + if ((dpos & 63) == (spos & 63) + && (height == 1 || line_length % 64 == 0)) { + /* We may need a 32-byte copy to ensure 64 byte alignment. */ + need_second = (dpos + xincr_first) & 63; + if ((need_second & 32) != need_second) + printk(KERN_ERR "tgafb: need_second wrong\n"); + if (left >= need_second + 64) { + left -= need_second; + n64 = left / 64; + left %= 64; + } else + need_second = 0; + } + + /* Copy trailing full 32-byte sections. This will be the main + loop if the 64 byte loop can't be used. */ + n32 = left / 32; + left %= 32; + + /* Copy the trailing unaligned destination. */ + dmask_last = (1ul << left) - 1; + + tga_regs = par->tga_regs_base; + tga_fb = par->tga_fb_base; + + /* Set up the MODE and PIXELSHIFT registers. */ + __raw_writel(TGA_MODE_SBM_8BPP|TGA_MODE_COPY, tga_regs+TGA_MODE_REG); + __raw_writel(pixel_shift, tga_regs+TGA_PIXELSHIFT_REG); + wmb(); + + for (i = 0; i < height; ++i) { + unsigned long j; + void __iomem *sfb; + void __iomem *dfb; + + sfb = tga_fb + spos; + dfb = tga_fb + dpos; + if (dmask_first) { + __raw_writel(smask_first, sfb); + wmb(); + __raw_writel(dmask_first, dfb); + wmb(); + sfb += xincr_first; + dfb += xincr_first; + } + + if (need_second) { + __raw_writel(0xffffffff, sfb); + wmb(); + __raw_writel(0xffffffff, dfb); + wmb(); + sfb += 32; + dfb += 32; + } + + if (n64 && (((unsigned long)sfb | (unsigned long)dfb) & 63)) + printk(KERN_ERR + "tgafb: misaligned copy64 (s:%p, d:%p)\n", + sfb, dfb); + + for (j = 0; j < n64; ++j) { + __raw_writel(sfb - tga_fb, tga_regs+TGA_COPY64_SRC); + wmb(); + __raw_writel(dfb - tga_fb, tga_regs+TGA_COPY64_DST); + wmb(); + sfb += 64; + dfb += 64; + } + + for (j = 0; j < n32; ++j) { + __raw_writel(0xffffffff, sfb); + wmb(); + __raw_writel(0xffffffff, dfb); + wmb(); + sfb += 32; + dfb += 32; + } + + if (dmask_last) { + __raw_writel(0xffffffff, sfb); + wmb(); + __raw_writel(dmask_last, dfb); + wmb(); + } + + spos += yincr; + dpos += yincr; + } + + /* Reset the MODE register to normal. */ + __raw_writel(TGA_MODE_SBM_8BPP|TGA_MODE_SIMPLE, tga_regs+TGA_MODE_REG); +} + +/* The (almost) general case of backward copy in 8bpp mode. 
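/*
 * Illustrative sketch, not from this patch: as the loops above suggest,
 * TGA "copy" mode is driven by writing bit masks rather than pixel data.
 * A masked write at a source framebuffer address appears to latch the
 * selected pixels inside the chip, and a matching write at the
 * destination address flushes them out; the 64-byte variant goes through
 * the COPY64_SRC/COPY64_DST registers instead.  The fragment below models
 * one 32-pixel step of that pattern; writel32() and barrier() are
 * hypothetical stand-ins for __raw_writel() and wmb().
 */
#include <stdint.h>

static void writel32(volatile uint32_t *addr, uint32_t val) { *addr = val; }
static void barrier(void) { __asm__ __volatile__("" ::: "memory"); }

static void copy_step_32(volatile uint32_t *sfb, volatile uint32_t *dfb,
			 uint32_t mask)
{
	writel32(sfb, mask);    /* select up to 32 source pixels */
	barrier();
	writel32(dfb, mask);    /* write them at the destination, same mask */
	barrier();
}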
*/ +static inline void +copyarea_backward_8bpp(struct fb_info *info, u32 dx, u32 dy, u32 sx, u32 sy, + u32 height, u32 width, u32 line_length, + const struct fb_copyarea *area) +{ + struct tga_par *par = (struct tga_par *) info->par; + unsigned long i, left, yincr; + unsigned long depos, sepos, dealign, sealign; + u32 mask_first, mask_last; + unsigned long n32; + void __iomem *tga_regs; + void __iomem *tga_fb; + yincr = line_length; if (dy > sy) { dy += height - 1; sy += height - 1; yincr = -yincr; } - backward = dy == sy && dx > sx && dx < sx + width; /* Compute the offsets and alignments in the frame buffer. More than anything else, these control how we do copies. */ - depos = dy * line_length + dx; - sepos = sy * line_length + sx; - if (backward) - depos += width, sepos += width; + depos = dy * line_length + dx + width; + sepos = sy * line_length + sx + width; + dealign = depos & 7; + sealign = sepos & 7; + + /* ??? The documentation appears to be incorrect (or very + misleading) wrt how pixel shifting works in backward copy + mode, i.e. when PIXELSHIFT is negative. I give up for now. + Do handle the common case of co-aligned backward copies, + but frob everything else back on generic code. */ + if (dealign != sealign) { + cfb_copyarea(info, area); + return; + } + + /* We begin the copy with the trailing pixels of the + unaligned destination. */ + mask_first = (1ul << dealign) - 1; + left = width - dealign; + + /* Care for small copies. */ + if (dealign > width) { + mask_first ^= (1ul << (dealign - width)) - 1; + left = 0; + } /* Next copy full words at a time. */ - n32 = width / 32; - last_step = width % 32; + n32 = left / 32; + left %= 32; /* Finally copy the unaligned head of the span. */ - mask_last = (1ul << last_step) - 1; - - if (!backward) { - step = 32; - last_step = 32; - } else { - step = -32; - last_step = -last_step; - sepos -= 32; - depos -= 32; - } + mask_last = -1 << (32 - left); tga_regs = par->tga_regs_base; tga_fb = par->tga_fb_base; @@ -1212,33 +1374,25 @@ copyarea_8bpp(struct fb_info *info, u32 dx, u32 dy, u32 sx, u32 sy, sfb = tga_fb + sepos; dfb = tga_fb + depos; + if (mask_first) { + __raw_writel(mask_first, sfb); + wmb(); + __raw_writel(mask_first, dfb); + wmb(); + } - for (j = 0; j < n32; j++) { - if (j < 2 && j + 1 < n32 && !backward && - !(((unsigned long)sfb | (unsigned long)dfb) & 63)) { - do { - __raw_writel(sfb - tga_fb, tga_regs+TGA_COPY64_SRC); - wmb(); - __raw_writel(dfb - tga_fb, tga_regs+TGA_COPY64_DST); - wmb(); - sfb += 64; - dfb += 64; - j += 2; - } while (j + 1 < n32); - j--; - continue; - } + for (j = 0; j < n32; ++j) { + sfb -= 32; + dfb -= 32; __raw_writel(0xffffffff, sfb); wmb(); __raw_writel(0xffffffff, dfb); wmb(); - sfb += step; - dfb += step; } if (mask_last) { - sfb += last_step - step; - dfb += last_step - step; + sfb -= 32; + dfb -= 32; __raw_writel(mask_last, sfb); wmb(); __raw_writel(mask_last, dfb); @@ -1299,9 +1453,14 @@ tgafb_copyarea(struct fb_info *info, const struct fb_copyarea *area) else if (bpp == 32) cfb_copyarea(info, area); + /* Detect overlapping source and destination that requires + a backward copy. 
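/*
 * Illustrative sketch, not from this patch: the revert goes back to
 * dispatching on overlap at the tgafb_copyarea() level.  A backward copy
 * is only needed when source and destination share a scan line and the
 * destination starts inside the source span, exactly the condition in the
 * else-if right below.  Standalone restatement (function name is
 * hypothetical):
 */
#include <stdint.h>

static int needs_backward_copy(uint32_t dx, uint32_t dy,
			       uint32_t sx, uint32_t sy, uint32_t width)
{
	/* Overlap on one line with the destination to the right of the
	 * source: a forward copy would overwrite pixels it has yet to read. */
	return dy == sy && dx > sx && dx < sx + width;
}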
*/ + else if (dy == sy && dx > sx && dx < sx + width) + copyarea_backward_8bpp(info, dx, dy, sx, sy, height, + width, line_length, area); else - copyarea_8bpp(info, dx, dy, sx, sy, height, - width, line_length, area); + copyarea_foreward_8bpp(info, dx, dy, sx, sy, height, + width, line_length); } @@ -1317,7 +1476,6 @@ tgafb_init_fix(struct fb_info *info) int tga_bus_tc = TGA_BUS_TC(par->dev); u8 tga_type = par->tga_type; const char *tga_type_name = NULL; - unsigned memory_size; switch (tga_type) { case TGA_TYPE_8PLANE: @@ -1325,25 +1483,21 @@ tgafb_init_fix(struct fb_info *info) tga_type_name = "Digital ZLXp-E1"; if (tga_bus_tc) tga_type_name = "Digital ZLX-E1"; - memory_size = 2097152; break; case TGA_TYPE_24PLANE: if (tga_bus_pci) tga_type_name = "Digital ZLXp-E2"; if (tga_bus_tc) tga_type_name = "Digital ZLX-E2"; - memory_size = 8388608; break; case TGA_TYPE_24PLUSZ: if (tga_bus_pci) tga_type_name = "Digital ZLXp-E3"; if (tga_bus_tc) tga_type_name = "Digital ZLX-E3"; - memory_size = 16777216; break; default: tga_type_name = "Unknown"; - memory_size = 16777216; break; } @@ -1355,8 +1509,9 @@ tgafb_init_fix(struct fb_info *info) ? FB_VISUAL_PSEUDOCOLOR : FB_VISUAL_DIRECTCOLOR); + info->fix.line_length = par->xres * (par->bits_per_pixel >> 3); info->fix.smem_start = (size_t) par->tga_fb_base; - info->fix.smem_len = memory_size; + info->fix.smem_len = info->fix.line_length * par->yres; info->fix.mmio_start = (size_t) par->tga_regs_base; info->fix.mmio_len = 512; @@ -1480,9 +1635,6 @@ static int tgafb_register(struct device *dev) modedb_tga = &modedb_tc; modedbsize_tga = 1; } - - tgafb_init_fix(info); - ret = fb_find_mode(&info->var, info, mode_option ? mode_option : mode_option_tga, modedb_tga, modedbsize_tga, NULL, @@ -1500,6 +1652,7 @@ static int tgafb_register(struct device *dev) } tgafb_set_par(info); + tgafb_init_fix(info); if (register_framebuffer(info) < 0) { printk(KERN_ERR "tgafb: Could not register framebuffer\n"); diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index 7d7add5ceba..71af7b5abe0 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -311,12 +311,6 @@ static int balloon(void *_vballoon) else if (diff < 0) leak_balloon(vb, -diff); update_balloon_size(vb); - - /* - * For large balloon changes, we could spend a lot of time - * and always have work to do. Be nice if preempt disabled. - */ - cond_resched(); } return 0; } diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c index 933241a6ab1..a7ce73029f5 100644 --- a/drivers/virtio/virtio_pci.c +++ b/drivers/virtio/virtio_pci.c @@ -791,7 +791,6 @@ static int virtio_pci_restore(struct device *dev) struct pci_dev *pci_dev = to_pci_dev(dev); struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev); struct virtio_driver *drv; - unsigned status = 0; int ret; drv = container_of(vp_dev->vdev.dev.driver, @@ -802,40 +801,14 @@ static int virtio_pci_restore(struct device *dev) return ret; pci_set_master(pci_dev); - /* We always start by resetting the device, in case a previous - * driver messed it up. */ - vp_reset(&vp_dev->vdev); - - /* Acknowledge that we've seen the device. */ - status |= VIRTIO_CONFIG_S_ACKNOWLEDGE; - vp_set_status(&vp_dev->vdev, status); - - /* Maybe driver failed before freeze. - * Restore the failed status, for debugging. */ - status |= vp_dev->saved_status & VIRTIO_CONFIG_S_FAILED; - vp_set_status(&vp_dev->vdev, status); - - if (!drv) - return 0; - - /* We have a driver! 
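/*
 * Illustrative sketch, not from this patch: the lines being removed here
 * walk the virtio status register through the normal bring-up sequence on
 * resume: reset, ACKNOWLEDGE, DRIVER, the driver's own restore hook, then
 * DRIVER_OK, with FAILED set on error.  The model below keeps that
 * ordering; the bit values match the virtio spec, while set_status() and
 * restore_fn are hypothetical stand-ins.
 */
#include <stdint.h>

#define VIRTIO_ACKNOWLEDGE 0x01u
#define VIRTIO_DRIVER      0x02u
#define VIRTIO_DRIVER_OK   0x04u
#define VIRTIO_FAILED      0x80u

static int restore_sketch(void (*set_status)(uint8_t),
			  int (*restore_fn)(void), int have_driver)
{
	uint8_t status = 0;

	set_status(status);                    /* writing 0 resets the device */
	status |= VIRTIO_ACKNOWLEDGE;          /* we have seen the device */
	set_status(status);
	if (!have_driver)
		return 0;
	status |= VIRTIO_DRIVER;               /* a driver is bound */
	set_status(status);
	if (restore_fn && restore_fn()) {
		set_status(status | VIRTIO_FAILED);
		return -1;
	}
	set_status(status | VIRTIO_DRIVER_OK); /* device may now be used */
	return 0;
}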
*/ - status |= VIRTIO_CONFIG_S_DRIVER; - vp_set_status(&vp_dev->vdev, status); - vp_finalize_features(&vp_dev->vdev); - if (drv->restore) { + if (drv && drv->restore) ret = drv->restore(&vp_dev->vdev); - if (ret) { - status |= VIRTIO_CONFIG_S_FAILED; - vp_set_status(&vp_dev->vdev, status); - return ret; - } - } /* Finally, tell the device we're all set */ - status |= VIRTIO_CONFIG_S_DRIVER_OK; - vp_set_status(&vp_dev->vdev, status); + if (!ret) + vp_set_status(&vp_dev->vdev, vp_dev->saved_status); return ret; } diff --git a/drivers/w1/w1_netlink.c b/drivers/w1/w1_netlink.c index 73705aff53c..40788c925d1 100644 --- a/drivers/w1/w1_netlink.c +++ b/drivers/w1/w1_netlink.c @@ -54,29 +54,28 @@ static void w1_send_slave(struct w1_master *dev, u64 rn) struct w1_netlink_msg *hdr = (struct w1_netlink_msg *)(msg + 1); struct w1_netlink_cmd *cmd = (struct w1_netlink_cmd *)(hdr + 1); int avail; - u64 *data; /* update kernel slave list */ w1_slave_found(dev, rn); avail = dev->priv_size - cmd->len; - if (avail < 8) { - msg->ack++; - cn_netlink_send(msg, 0, GFP_KERNEL); + if (avail > 8) { + u64 *data = (void *)(cmd + 1) + cmd->len; - msg->len = sizeof(struct w1_netlink_msg) + - sizeof(struct w1_netlink_cmd); - hdr->len = sizeof(struct w1_netlink_cmd); - cmd->len = 0; + *data = rn; + cmd->len += 8; + hdr->len += 8; + msg->len += 8; + return; } - data = (void *)(cmd + 1) + cmd->len; + msg->ack++; + cn_netlink_send(msg, 0, GFP_KERNEL); - *data = rn; - cmd->len += 8; - hdr->len += 8; - msg->len += 8; + msg->len = sizeof(struct w1_netlink_msg) + sizeof(struct w1_netlink_cmd); + hdr->len = sizeof(struct w1_netlink_cmd); + cmd->len = 0; } static int w1_process_search_command(struct w1_master *dev, struct cn_msg *msg, diff --git a/drivers/watchdog/ath79_wdt.c b/drivers/watchdog/ath79_wdt.c index c97a47ca897..37cb09b27b6 100644 --- a/drivers/watchdog/ath79_wdt.c +++ b/drivers/watchdog/ath79_wdt.c @@ -20,7 +20,6 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/bitops.h> -#include <linux/delay.h> #include <linux/errno.h> #include <linux/fs.h> #include <linux/init.h> @@ -92,15 +91,6 @@ static inline void ath79_wdt_keepalive(void) static inline void ath79_wdt_enable(void) { ath79_wdt_keepalive(); - - /* - * Updating the TIMER register requires a few microseconds - * on the AR934x SoCs at least. Use a small delay to ensure - * that the TIMER register is updated within the hardware - * before enabling the watchdog. 
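/*
 * Illustrative sketch, not from this patch: the w1_netlink hunk above
 * rearranges w1_send_slave(), but the underlying buffering idea is to
 * append each discovered 64-bit slave id to the pending netlink command
 * while room remains and to flush the message once it fills up.  The
 * model below restates that policy on a plain byte buffer; flush_msg()
 * and the buffer size are hypothetical.
 */
#include <stdint.h>
#include <string.h>

struct slave_buf_sketch {
	uint8_t      data[128];
	unsigned int len;           /* bytes of slave ids currently queued */
};

static void queue_slave(struct slave_buf_sketch *buf, uint64_t rn,
			void (*flush_msg)(struct slave_buf_sketch *))
{
	if (sizeof(buf->data) - buf->len < sizeof(rn)) {
		flush_msg(buf);     /* send what we have so far */
		buf->len = 0;       /* and start a fresh, empty command */
	}
	memcpy(buf->data + buf->len, &rn, sizeof(rn));
	buf->len += sizeof(rn);
}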
- */ - udelay(2); - ath79_wdt_wr(WDOG_REG_CTRL, WDOG_CTRL_ACTION_FCR); /* flush write */ ath79_wdt_rr(WDOG_REG_CTRL); diff --git a/drivers/watchdog/sp805_wdt.c b/drivers/watchdog/sp805_wdt.c index e42118213ba..8872642505c 100644 --- a/drivers/watchdog/sp805_wdt.c +++ b/drivers/watchdog/sp805_wdt.c @@ -60,6 +60,7 @@ * @adev: amba device structure of wdt * @status: current status of wdt * @load_val: load value to be set for current timeout + * @timeout: current programmed timeout */ struct sp805_wdt { struct watchdog_device wdd; @@ -68,6 +69,7 @@ struct sp805_wdt { struct clk *clk; struct amba_device *adev; unsigned int load_val; + unsigned int timeout; }; static bool nowayout = WATCHDOG_NOWAYOUT; @@ -97,7 +99,7 @@ static int wdt_setload(struct watchdog_device *wdd, unsigned int timeout) spin_lock(&wdt->lock); wdt->load_val = load; /* roundup timeout to closest positive integer value */ - wdd->timeout = div_u64((load + 1) * 2 + (rate / 2), rate); + wdt->timeout = div_u64((load + 1) * 2 + (rate / 2), rate); spin_unlock(&wdt->lock); return 0; @@ -310,6 +310,7 @@ static void free_ioctx(struct kioctx *ctx) avail = (head <= ctx->tail ? ctx->tail : ctx->nr_events) - head; + atomic_sub(avail, &ctx->reqs_active); head += avail; head %= ctx->nr_events; } @@ -677,7 +678,6 @@ void aio_complete(struct kiocb *iocb, long res, long res2) put_rq: /* everything turned out well, dispose of the aiocb. */ aio_put_req(iocb); - atomic_dec(&ctx->reqs_active); /* * We have to order our ring_info tail store above and test @@ -717,8 +717,6 @@ static long aio_read_events_ring(struct kioctx *ctx, if (head == ctx->tail) goto out; - head %= ctx->nr_events; - while (ret < nr) { long avail; struct io_event *ev; @@ -757,6 +755,8 @@ static long aio_read_events_ring(struct kioctx *ctx, flush_dcache_page(ctx->ring_pages[0]); pr_debug("%li h%u t%u\n", ret, head, ctx->tail); + + atomic_sub(ret, &ctx->reqs_active); out: mutex_unlock(&ctx->ring_lock); diff --git a/fs/attr.c b/fs/attr.c index 66fa6251c39..8dd5825ec70 100644 --- a/fs/attr.c +++ b/fs/attr.c @@ -50,14 +50,14 @@ int inode_change_ok(const struct inode *inode, struct iattr *attr) if ((ia_valid & ATTR_UID) && (!uid_eq(current_fsuid(), inode->i_uid) || !uid_eq(attr->ia_uid, inode->i_uid)) && - !capable_wrt_inode_uidgid(inode, CAP_CHOWN)) + !inode_capable(inode, CAP_CHOWN)) return -EPERM; /* Make sure caller can chgrp. */ if ((ia_valid & ATTR_GID) && (!uid_eq(current_fsuid(), inode->i_uid) || (!in_group_p(attr->ia_gid) && !gid_eq(attr->ia_gid, inode->i_gid))) && - !capable_wrt_inode_uidgid(inode, CAP_CHOWN)) + !inode_capable(inode, CAP_CHOWN)) return -EPERM; /* Make sure a caller can chmod. */ @@ -67,7 +67,7 @@ int inode_change_ok(const struct inode *inode, struct iattr *attr) /* Also check the setgid bit! */ if (!in_group_p((ia_valid & ATTR_GID) ? 
attr->ia_gid : inode->i_gid) && - !capable_wrt_inode_uidgid(inode, CAP_FSETID)) + !inode_capable(inode, CAP_FSETID)) attr->ia_mode &= ~S_ISGID; } @@ -160,7 +160,7 @@ void setattr_copy(struct inode *inode, const struct iattr *attr) umode_t mode = attr->ia_mode; if (!in_group_p(inode->i_gid) && - !capable_wrt_inode_uidgid(inode, CAP_FSETID)) + !inode_capable(inode, CAP_FSETID)) mode &= ~S_ISGID; inode->i_mode = mode; } diff --git a/fs/bio-integrity.c b/fs/bio-integrity.c index 433c3b828e1..8dccf73025b 100644 --- a/fs/bio-integrity.c +++ b/fs/bio-integrity.c @@ -458,7 +458,7 @@ static int bio_integrity_verify(struct bio *bio) bix.disk_name = bio->bi_bdev->bd_disk->disk_name; bix.sector_size = bi->sector_size; - bio_for_each_segment_all(bv, bio, i) { + bio_for_each_segment(bv, bio, i) { void *kaddr = kmap_atomic(bv->bv_page); bix.data_buf = kaddr + bv->bv_offset; bix.data_size = bv->bv_len; diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c index d85f90c92bb..290e347b6db 100644 --- a/fs/btrfs/backref.c +++ b/fs/btrfs/backref.c @@ -1347,10 +1347,9 @@ int extent_from_logical(struct btrfs_fs_info *fs_info, u64 logical, * returns <0 on error */ static int __get_extent_inline_ref(unsigned long *ptr, struct extent_buffer *eb, - struct btrfs_key *key, - struct btrfs_extent_item *ei, u32 item_size, - struct btrfs_extent_inline_ref **out_eiref, - int *out_type) + struct btrfs_extent_item *ei, u32 item_size, + struct btrfs_extent_inline_ref **out_eiref, + int *out_type) { unsigned long end; u64 flags; @@ -1360,26 +1359,19 @@ static int __get_extent_inline_ref(unsigned long *ptr, struct extent_buffer *eb, /* first call */ flags = btrfs_extent_flags(eb, ei); if (flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) { - if (key->type == BTRFS_METADATA_ITEM_KEY) { - /* a skinny metadata extent */ - *out_eiref = - (struct btrfs_extent_inline_ref *)(ei + 1); - } else { - WARN_ON(key->type != BTRFS_EXTENT_ITEM_KEY); - info = (struct btrfs_tree_block_info *)(ei + 1); - *out_eiref = - (struct btrfs_extent_inline_ref *)(info + 1); - } + info = (struct btrfs_tree_block_info *)(ei + 1); + *out_eiref = + (struct btrfs_extent_inline_ref *)(info + 1); } else { *out_eiref = (struct btrfs_extent_inline_ref *)(ei + 1); } *ptr = (unsigned long)*out_eiref; - if ((unsigned long)(*ptr) >= (unsigned long)ei + item_size) + if ((void *)*ptr >= (void *)ei + item_size) return -ENOENT; } end = (unsigned long)ei + item_size; - *out_eiref = (struct btrfs_extent_inline_ref *)(*ptr); + *out_eiref = (struct btrfs_extent_inline_ref *)*ptr; *out_type = btrfs_extent_inline_ref_type(eb, *out_eiref); *ptr += btrfs_extent_inline_ref_size(*out_type); @@ -1398,8 +1390,8 @@ static int __get_extent_inline_ref(unsigned long *ptr, struct extent_buffer *eb, * <0 on error. 
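/*
 * Illustrative sketch, not from this patch: the __get_extent_inline_ref()
 * hunk above drops the distinction between the two on-disk layouts of a
 * tree-block extent item.  With skinny metadata (BTRFS_METADATA_ITEM_KEY)
 * the inline refs start right after the btrfs_extent_item; with the
 * classic BTRFS_EXTENT_ITEM_KEY layout a btrfs_tree_block_info sits in
 * between.  The offset rule below restates that on opaque sizes; the
 * numeric values are placeholders, not the real on-disk sizes.
 */
#include <stddef.h>

enum key_type_sketch { METADATA_ITEM_KEY, EXTENT_ITEM_KEY };

#define EXTENT_ITEM_SIZE     24u  /* placeholder for sizeof(struct btrfs_extent_item) */
#define TREE_BLOCK_INFO_SIZE 40u  /* placeholder for sizeof(struct btrfs_tree_block_info) */

/* Byte offset of the first inline ref inside a tree-block extent item. */
static size_t first_inline_ref_offset(enum key_type_sketch type)
{
	if (type == METADATA_ITEM_KEY)
		return EXTENT_ITEM_SIZE;                   /* skinny: refs follow directly */
	return EXTENT_ITEM_SIZE + TREE_BLOCK_INFO_SIZE;    /* classic: skip tree_block_info */
}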
*/ int tree_backref_for_extent(unsigned long *ptr, struct extent_buffer *eb, - struct btrfs_key *key, struct btrfs_extent_item *ei, - u32 item_size, u64 *out_root, u8 *out_level) + struct btrfs_extent_item *ei, u32 item_size, + u64 *out_root, u8 *out_level) { int ret; int type; @@ -1410,8 +1402,8 @@ int tree_backref_for_extent(unsigned long *ptr, struct extent_buffer *eb, return 1; while (1) { - ret = __get_extent_inline_ref(ptr, eb, key, ei, item_size, - &eiref, &type); + ret = __get_extent_inline_ref(ptr, eb, ei, item_size, + &eiref, &type); if (ret < 0) return ret; diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h index 526d09e70c9..0f446d7ca2c 100644 --- a/fs/btrfs/backref.h +++ b/fs/btrfs/backref.h @@ -42,8 +42,8 @@ int extent_from_logical(struct btrfs_fs_info *fs_info, u64 logical, u64 *flags); int tree_backref_for_extent(unsigned long *ptr, struct extent_buffer *eb, - struct btrfs_key *key, struct btrfs_extent_item *ei, - u32 item_size, u64 *out_root, u8 *out_level); + struct btrfs_extent_item *ei, u32 item_size, + u64 *out_root, u8 *out_level); int iterate_extent_inodes(struct btrfs_fs_info *fs_info, u64 extent_item_objectid, diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index ce7067881d3..b189bd1e7a3 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -1009,8 +1009,6 @@ int btrfs_decompress_buf2page(char *buf, unsigned long buf_start, bytes = min(bytes, working_bytes); kaddr = kmap_atomic(page_out); memcpy(kaddr + *pg_offset, buf + buf_offset, bytes); - if (*pg_index == (vcnt - 1) && *pg_offset == 0) - memset(kaddr + bytes, 0, PAGE_CACHE_SIZE - bytes); kunmap_atomic(kaddr); flush_dcache_page(page_out); diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index abecce39935..b8b60b660c8 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -3161,8 +3161,6 @@ static int barrier_all_devices(struct btrfs_fs_info *info) /* send down all the barriers */ head = &info->fs_devices->devices; list_for_each_entry_rcu(dev, head, dev_list) { - if (dev->missing) - continue; if (!dev->bdev) { errors_send++; continue; @@ -3177,8 +3175,6 @@ static int barrier_all_devices(struct btrfs_fs_info *info) /* wait for all the barriers */ list_for_each_entry_rcu(dev, head, dev_list) { - if (dev->missing) - continue; if (!dev->bdev) { errors_wait++; continue; @@ -3518,11 +3514,6 @@ int close_ctree(struct btrfs_root *root) btrfs_free_block_groups(fs_info); - /* - * we must make sure there is not any read request to - * submit after we stopping all workers. - */ - invalidate_inode_pages2(fs_info->btree_inode->i_mapping); btrfs_stop_all_workers(fs_info); del_fs_roots(fs_info); diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 84ceff6abbc..e7e7afb4a87 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1624,7 +1624,6 @@ again: * shortening the size of the delalloc range we're searching */ free_extent_state(cached_state); - cached_state = NULL; if (!loops) { unsigned long offset = (*start) & (PAGE_CACHE_SIZE - 1); max_bytes = PAGE_CACHE_SIZE - offset; @@ -2357,7 +2356,7 @@ int end_extent_writepage(struct page *page, int err, u64 start, u64 end) { int uptodate = (err == 0); struct extent_io_tree *tree; - int ret = 0; + int ret; tree = &BTRFS_I(page->mapping->host)->io_tree; @@ -2371,8 +2370,6 @@ int end_extent_writepage(struct page *page, int err, u64 start, u64 end) if (!uptodate) { ClearPageUptodate(page); SetPageError(page); - ret = ret < 0 ? 
ret : -EIO; - mapping_set_error(page->mapping, ret); } return 0; } diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c index e4bcfec7787..b193bf324a4 100644 --- a/fs/btrfs/file-item.c +++ b/fs/btrfs/file-item.c @@ -403,7 +403,7 @@ int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end, ret = 0; fail: while (ret < 0 && !list_empty(&tmplist)) { - sums = list_entry(tmplist.next, struct btrfs_ordered_sum, list); + sums = list_entry(&tmplist, struct btrfs_ordered_sum, list); list_del(&sums->list); kfree(sums); } @@ -754,7 +754,7 @@ again: found_next = 1; if (ret != 0) goto insert; - slot = path->slots[0]; + slot = 0; } btrfs_item_key_to_cpu(path->nodes[0], &found_key, slot); if (found_key.objectid != BTRFS_EXTENT_CSUM_OBJECTID || diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c index 0cbe95dc811..e53009657f0 100644 --- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -835,7 +835,7 @@ int load_free_space_cache(struct btrfs_fs_info *fs_info, if (!matched) { __btrfs_remove_free_space_cache(ctl); - btrfs_warn(fs_info, "block group %llu has wrong amount of free space", + btrfs_err(fs_info, "block group %llu has wrong amount of free space", block_group->key.objectid); ret = -1; } @@ -847,7 +847,7 @@ out: spin_unlock(&block_group->lock); ret = 0; - btrfs_warn(fs_info, "failed to load free space cache for block group %llu, rebuild it now", + btrfs_err(fs_info, "failed to load free space cache for block group %llu", block_group->key.objectid); } diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 187911fbabc..8fcd2424e7f 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -3545,8 +3545,7 @@ noinline int btrfs_update_inode(struct btrfs_trans_handle *trans, * without delay */ if (!btrfs_is_free_space_inode(inode) - && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID - && !root->fs_info->log_root_recovering) { + && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID) { btrfs_update_root_times(trans, root); ret = btrfs_delayed_update_inode(trans, root, inode); diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index 0e7f7765b3b..b3896d5f233 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -967,11 +967,8 @@ again: need_check = false; list_add_tail(&edge->list[UPPER], &list); - } else { - if (upper->checked) - need_check = true; + } else INIT_LIST_HEAD(&edge->list[UPPER]); - } } else { upper = rb_entry(rb_node, struct backref_node, rb_node); diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index e4f69e3b78b..eb84c2db1ac 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -545,9 +545,8 @@ static void scrub_print_warning(const char *errstr, struct scrub_block *sblock) if (flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) { do { - ret = tree_backref_for_extent(&ptr, eb, &found_key, ei, - item_size, &ref_root, - &ref_level); + ret = tree_backref_for_extent(&ptr, eb, ei, item_size, + &ref_root, &ref_level); printk_in_rcu(KERN_WARNING "btrfs: %s at logical %llu on dev %s, " "sector %llu: metadata %s (level %d) in tree " diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index 414c1b9eb89..256a9a46d54 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -1550,10 +1550,6 @@ static int lookup_dir_item_inode(struct btrfs_root *root, goto out; } btrfs_dir_item_key_to_cpu(path->nodes[0], di, &key); - if (key.type == BTRFS_ROOT_ITEM_KEY) { - ret = -ENOENT; - goto out; - } *found_inode = key.objectid; *found_type = btrfs_dir_type(path->nodes[0], di); diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index 
1f214689fa5..0544587d74f 100644 --- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -524,6 +524,7 @@ int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid) if (transid <= root->fs_info->last_trans_committed) goto out; + ret = -EINVAL; /* find specified transaction */ spin_lock(&root->fs_info->trans_lock); list_for_each_entry(t, &root->fs_info->trans_list, list) { @@ -539,16 +540,9 @@ int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid) } } spin_unlock(&root->fs_info->trans_lock); - - /* - * The specified transaction doesn't exist, or we - * raced with btrfs_commit_transaction - */ - if (!cur_trans) { - if (transid > root->fs_info->last_trans_committed) - ret = -EINVAL; + /* The specified transaction doesn't exist */ + if (!cur_trans) goto out; - } } else { /* find newest transaction that is committing | committed */ spin_lock(&root->fs_info->trans_lock); diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index 7fc774639a7..b6c23c4abae 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -1384,22 +1384,6 @@ out: return ret; } -/* - * Function to update ctime/mtime for a given device path. - * Mainly used for ctime/mtime based probe like libblkid. - */ -static void update_dev_time(char *path_name) -{ - struct file *filp; - - filp = filp_open(path_name, O_RDWR, 0); - if (!filp) - return; - file_update_time(filp); - filp_close(filp, NULL); - return; -} - static int btrfs_rm_dev_item(struct btrfs_root *root, struct btrfs_device *device) { @@ -1628,12 +1612,11 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path) struct btrfs_fs_devices *fs_devices; fs_devices = root->fs_info->fs_devices; while (fs_devices) { - if (fs_devices->seed == cur_devices) { - fs_devices->seed = cur_devices->seed; + if (fs_devices->seed == cur_devices) break; - } fs_devices = fs_devices->seed; } + fs_devices->seed = cur_devices->seed; cur_devices->seed = NULL; lock_chunks(root); __btrfs_close_devices(cur_devices); @@ -1659,14 +1642,10 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path) ret = 0; - if (bdev) { - /* Notify udev that device has changed */ + /* Notify udev that device has changed */ + if (bdev) btrfs_kobject_uevent(bdev, KOBJ_CHANGE); - /* Update ctime/mtime for device path for libblkid */ - update_dev_time(device_path); - } - error_brelse: brelse(bh); if (bdev) @@ -1838,6 +1817,7 @@ static int btrfs_prepare_sprout(struct btrfs_root *root) fs_devices->seeding = 0; fs_devices->num_devices = 0; fs_devices->open_devices = 0; + fs_devices->total_devices = 0; fs_devices->seed = seed_devices; generate_random_uuid(fs_devices->fsid); @@ -2109,8 +2089,6 @@ int btrfs_init_new_device(struct btrfs_root *root, char *device_path) ret = btrfs_commit_transaction(trans, root); } - /* Update ctime/mtime for libblkid */ - update_dev_time(device_path); return ret; error_trans: diff --git a/fs/buffer.c b/fs/buffer.c index 6f7bc7196a7..25cd38378ca 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -985,8 +985,7 @@ grow_dev_page(struct block_device *bdev, sector_t block, bh = page_buffers(page); if (bh->b_size == size) { end_block = init_page_buffers(page, bdev, - (sector_t)index << sizebits, - size); + index << sizebits, size); goto done; } if (!try_to_free_buffers(page)) @@ -1007,8 +1006,7 @@ grow_dev_page(struct block_device *bdev, sector_t block, */ spin_lock(&inode->i_mapping->private_lock); link_dev_buffers(page, bh); - end_block = init_page_buffers(page, bdev, (sector_t)index << sizebits, - size); + end_block = init_page_buffers(page, bdev, index << sizebits, 
size); spin_unlock(&inode->i_mapping->private_lock); done: ret = (block < end_block) ? 1 : -ENXIO; @@ -2018,7 +2016,6 @@ int generic_write_end(struct file *file, struct address_space *mapping, struct page *page, void *fsdata) { struct inode *inode = mapping->host; - loff_t old_size = inode->i_size; int i_size_changed = 0; copied = block_write_end(file, mapping, pos, len, copied, page, fsdata); @@ -2038,8 +2035,6 @@ int generic_write_end(struct file *file, struct address_space *mapping, unlock_page(page); page_cache_release(page); - if (old_size < pos) - pagecache_isize_extended(inode, old_size, pos); /* * Don't mark the inode dirty under page lock. First, it unnecessarily * makes the holding time of page lock longer. Second, it forces lock @@ -2257,11 +2252,6 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping, err = 0; balance_dirty_pages_ratelimited(mapping); - - if (unlikely(fatal_signal_pending(current))) { - err = -EINTR; - goto out; - } } /* page covers the boundary, find the boundary offset */ diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c index 15e9505aa35..0227b45ef00 100644 --- a/fs/cifs/cifs_unicode.c +++ b/fs/cifs/cifs_unicode.c @@ -290,8 +290,7 @@ int cifsConvertToUTF16(__le16 *target, const char *source, int srclen, const struct nls_table *cp, int mapChars) { - int i, charlen; - int j = 0; + int i, j, charlen; char src_char; __le16 dst_char; wchar_t tmp; @@ -299,11 +298,12 @@ cifsConvertToUTF16(__le16 *target, const char *source, int srclen, if (!mapChars) return cifs_strtoUTF16(target, source, PATH_MAX, cp); - for (i = 0; i < srclen; j++) { + for (i = 0, j = 0; i < srclen; j++) { src_char = source[i]; charlen = 1; switch (src_char) { case 0: + put_unaligned(0, &target[j]); goto ctoUTF16_out; case ':': dst_char = cpu_to_le16(UNI_COLON); @@ -350,7 +350,6 @@ cifsConvertToUTF16(__le16 *target, const char *source, int srclen, } ctoUTF16_out: - put_unaligned(0, &target[j]); /* Null terminate target unicode string */ return j; } diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h index f74dfa89c4c..e2c2d96491f 100644 --- a/fs/cifs/cifsglob.h +++ b/fs/cifs/cifsglob.h @@ -74,6 +74,11 @@ #define SERVER_NAME_LENGTH 40 #define SERVER_NAME_LEN_WITH_NULL (SERVER_NAME_LENGTH + 1) +/* used to define string lengths for reversing unicode strings */ +/* (256+1)*2 = 514 */ +/* (max path length + 1 for null) * 2 for unicode */ +#define MAX_NAME 514 + /* SMB echo "timeout" -- FIXME: tunable? */ #define SMB_ECHO_INTERVAL (60 * HZ) @@ -375,8 +380,6 @@ struct smb_version_operations { const char *, u32 *); int (*set_acl)(struct cifs_ntsd *, __u32, struct inode *, const char *, int); - /* check if we need to issue closedir */ - bool (*dir_needs_close)(struct cifsFileInfo *); }; struct smb_version_values { diff --git a/fs/cifs/file.c b/fs/cifs/file.c index 5fcc10fa62b..8b0c656f2ab 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -735,7 +735,7 @@ int cifs_closedir(struct inode *inode, struct file *file) cifs_dbg(FYI, "Freeing private data in close dir\n"); spin_lock(&cifs_file_list_lock); - if (server->ops->dir_needs_close(cfile)) { + if (!cfile->srch_inf.endOfSearch && !cfile->invalidHandle) { cfile->invalidHandle = true; spin_unlock(&cifs_file_list_lock); if (server->ops->close_dir) @@ -2809,7 +2809,7 @@ cifs_uncached_read_into_pages(struct TCP_Server_Info *server, total_read += result; } - return total_read > 0 && result != -EAGAIN ? total_read : result; + return total_read > 0 ? 
total_read : result; } static ssize_t @@ -3232,7 +3232,7 @@ cifs_readpages_read_into_pages(struct TCP_Server_Info *server, total_read += result; } - return total_read > 0 && result != -EAGAIN ? total_read : result; + return total_read > 0 ? total_read : result; } static int cifs_readpages(struct file *file, struct address_space *mapping, diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c index 0dee93706c9..9d463501348 100644 --- a/fs/cifs/inode.c +++ b/fs/cifs/inode.c @@ -1640,22 +1640,13 @@ cifs_rename(struct inode *source_dir, struct dentry *source_dentry, unlink_target: /* Try unlinking the target dentry if it's not negative */ if (target_dentry->d_inode && (rc == -EACCES || rc == -EEXIST)) { - if (S_ISDIR(target_dentry->d_inode->i_mode)) - tmprc = cifs_rmdir(target_dir, target_dentry); - else - tmprc = cifs_unlink(target_dir, target_dentry); + tmprc = cifs_unlink(target_dir, target_dentry); if (tmprc) goto cifs_rename_exit; rc = cifs_do_rename(xid, source_dentry, from_name, target_dentry, to_name); } - /* force revalidate to go get info when needed */ - CIFS_I(source_dir)->time = CIFS_I(target_dir)->time = 0; - - source_dir->i_ctime = source_dir->i_mtime = target_dir->i_ctime = - target_dir->i_mtime = current_fs_time(source_dir->i_sb); - cifs_rename_exit: kfree(info_buf_source); kfree(from_name); diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c index 85ebdaa2101..036279c064f 100644 --- a/fs/cifs/readdir.c +++ b/fs/cifs/readdir.c @@ -582,11 +582,11 @@ find_cifs_entry(const unsigned int xid, struct cifs_tcon *tcon, /* close and restart search */ cifs_dbg(FYI, "search backing up - close and restart search\n"); spin_lock(&cifs_file_list_lock); - if (server->ops->dir_needs_close(cfile)) { + if (!cfile->srch_inf.endOfSearch && !cfile->invalidHandle) { cfile->invalidHandle = true; spin_unlock(&cifs_file_list_lock); - if (server->ops->close_dir) - server->ops->close_dir(xid, tcon, &cfile->fid); + if (server->ops->close) + server->ops->close(xid, tcon, &cfile->fid); } else spin_unlock(&cifs_file_list_lock); if (cfile->srch_inf.ntwrk_buf_start) { diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c index 610c6c24d41..4885a40f321 100644 --- a/fs/cifs/smb1ops.c +++ b/fs/cifs/smb1ops.c @@ -885,12 +885,6 @@ cifs_mand_lock(const unsigned int xid, struct cifsFileInfo *cfile, __u64 offset, (__u8)type, wait, 0); } -static bool -cifs_dir_needs_close(struct cifsFileInfo *cfile) -{ - return !cfile->srch_inf.endOfSearch && !cfile->invalidHandle; -} - struct smb_version_operations smb1_operations = { .send_cancel = send_nt_cancel, .compare_fids = cifs_compare_fids, @@ -954,7 +948,6 @@ struct smb_version_operations smb1_operations = { .mand_lock = cifs_mand_lock, .mand_unlock_range = cifs_unlock_range, .push_mand_locks = cifs_push_mandatory_locks, - .dir_needs_close = cifs_dir_needs_close, #ifdef CONFIG_CIFS_XATTR .query_all_EAs = CIFSSMBQAllEAs, .set_EA = CIFSSMBSetEA, diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c index d801f63cddd..5da1b55a225 100644 --- a/fs/cifs/smb2file.c +++ b/fs/cifs/smb2file.c @@ -73,7 +73,7 @@ smb2_open_file(const unsigned int xid, struct cifs_tcon *tcon, const char *path, goto out; } - smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2, + smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2, GFP_KERNEL); if (smb2_data == NULL) { rc = -ENOMEM; diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c index 6d535797ec7..fff6dfba620 100644 --- a/fs/cifs/smb2inode.c +++ b/fs/cifs/smb2inode.c @@ -123,7 +123,7 @@ smb2_query_path_info(const unsigned int 
xid, struct cifs_tcon *tcon, *adjust_tz = false; - smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2, + smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2, GFP_KERNEL); if (smb2_data == NULL) return -ENOMEM; diff --git a/fs/cifs/smb2maperror.c b/fs/cifs/smb2maperror.c index 4768cf8be6e..7c2f45c06fc 100644 --- a/fs/cifs/smb2maperror.c +++ b/fs/cifs/smb2maperror.c @@ -214,7 +214,7 @@ static const struct status_to_posix_error smb2_error_map_table[] = { {STATUS_BREAKPOINT, -EIO, "STATUS_BREAKPOINT"}, {STATUS_SINGLE_STEP, -EIO, "STATUS_SINGLE_STEP"}, {STATUS_BUFFER_OVERFLOW, -EIO, "STATUS_BUFFER_OVERFLOW"}, - {STATUS_NO_MORE_FILES, -ENODATA, "STATUS_NO_MORE_FILES"}, + {STATUS_NO_MORE_FILES, -EIO, "STATUS_NO_MORE_FILES"}, {STATUS_WAKE_SYSTEM_DEBUGGER, -EIO, "STATUS_WAKE_SYSTEM_DEBUGGER"}, {STATUS_HANDLES_CLOSED, -EIO, "STATUS_HANDLES_CLOSED"}, {STATUS_NO_INHERITANCE, -EIO, "STATUS_NO_INHERITANCE"}, @@ -605,7 +605,7 @@ static const struct status_to_posix_error smb2_error_map_table[] = { {STATUS_MAPPED_FILE_SIZE_ZERO, -EIO, "STATUS_MAPPED_FILE_SIZE_ZERO"}, {STATUS_TOO_MANY_OPENED_FILES, -EMFILE, "STATUS_TOO_MANY_OPENED_FILES"}, {STATUS_CANCELLED, -EIO, "STATUS_CANCELLED"}, - {STATUS_CANNOT_DELETE, -EACCES, "STATUS_CANNOT_DELETE"}, + {STATUS_CANNOT_DELETE, -EIO, "STATUS_CANNOT_DELETE"}, {STATUS_INVALID_COMPUTER_NAME, -EIO, "STATUS_INVALID_COMPUTER_NAME"}, {STATUS_FILE_DELETED, -EIO, "STATUS_FILE_DELETED"}, {STATUS_SPECIAL_ACCOUNT, -EIO, "STATUS_SPECIAL_ACCOUNT"}, diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index e12f258a5ff..e2756bb40b4 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -243,7 +243,7 @@ smb2_query_file_info(const unsigned int xid, struct cifs_tcon *tcon, int rc; struct smb2_file_all_info *smb2_data; - smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2, + smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2, GFP_KERNEL); if (smb2_data == NULL) return -ENOMEM; @@ -554,12 +554,6 @@ smb2_new_lease_key(struct cifs_fid *fid) get_random_bytes(fid->lease_key, SMB2_LEASE_KEY_SIZE); } -static bool -smb2_dir_needs_close(struct cifsFileInfo *cfile) -{ - return !cfile->invalidHandle; -} - struct smb_version_operations smb21_operations = { .compare_fids = smb2_compare_fids, .setup_request = smb2_setup_request, @@ -624,7 +618,6 @@ struct smb_version_operations smb21_operations = { .set_lease_key = smb2_set_lease_key, .new_lease_key = smb2_new_lease_key, .calc_signature = smb2_calc_signature, - .dir_needs_close = smb2_dir_needs_close, }; @@ -692,7 +685,6 @@ struct smb_version_operations smb30_operations = { .set_lease_key = smb2_set_lease_key, .new_lease_key = smb2_new_lease_key, .calc_signature = smb3_calc_signature, - .dir_needs_close = smb2_dir_needs_close, }; struct smb_version_values smb20_values = { diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c index eb0de4c3ca7..c7a6fd87bb6 100644 --- a/fs/cifs/smb2pdu.c +++ b/fs/cifs/smb2pdu.c @@ -809,8 +809,7 @@ tcon_exit: tcon_error_exit: if (rsp->hdr.Status == STATUS_BAD_NETWORK_NAME) { cifs_dbg(VFS, "BAD_NETWORK_NAME: %s\n", tree); - if (tcon) - tcon->bad_network_name = true; + tcon->bad_network_name = true; } goto tcon_exit; } @@ -1204,7 +1203,7 @@ SMB2_query_info(const unsigned int xid, struct cifs_tcon *tcon, { return query_info(xid, tcon, persistent_fid, volatile_fid, FILE_ALL_INFORMATION, - sizeof(struct smb2_file_all_info) + PATH_MAX * 2, + sizeof(struct smb2_file_all_info) + MAX_NAME * 2, sizeof(struct smb2_file_all_info), data); } @@ -1800,10 +1799,6 @@ 
SMB2_query_directory(const unsigned int xid, struct cifs_tcon *tcon, rsp = (struct smb2_query_directory_rsp *)iov[0].iov_base; if (rc) { - if (rc == -ENODATA && rsp->hdr.Status == STATUS_NO_MORE_FILES) { - srch_inf->endOfSearch = true; - rc = 0; - } cifs_stats_fail_inc(tcon, SMB2_QUERY_DIRECTORY_HE); goto qdir_exit; } @@ -1841,6 +1836,11 @@ SMB2_query_directory(const unsigned int xid, struct cifs_tcon *tcon, else cifs_dbg(VFS, "illegal search buffer type\n"); + if (rsp->hdr.Status == STATUS_NO_MORE_FILES) + srch_inf->endOfSearch = 1; + else + srch_inf->endOfSearch = 0; + return rc; qdir_exit: diff --git a/fs/coredump.c b/fs/coredump.c index 1d402ce5b72..dafafbafa73 100644 --- a/fs/coredump.c +++ b/fs/coredump.c @@ -299,7 +299,7 @@ static int zap_threads(struct task_struct *tsk, struct mm_struct *mm, if (unlikely(nr < 0)) return nr; - tsk->flags |= PF_DUMPCORE; + tsk->flags = PF_DUMPCORE; if (atomic_read(&mm->mm_users) == nr + 1) goto done; /* diff --git a/fs/dcache.c b/fs/dcache.c index 25c0a1b5f6c..9a59653d344 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -96,6 +96,8 @@ static struct kmem_cache *dentry_cache __read_mostly; * This hash-function tries to avoid losing too many bits of hash * information, yet avoid using a prime hash-size or similar. */ +#define D_HASHBITS d_hash_shift +#define D_HASHMASK d_hash_mask static unsigned int d_hash_mask __read_mostly; static unsigned int d_hash_shift __read_mostly; @@ -106,7 +108,8 @@ static inline struct hlist_bl_head *d_hash(const struct dentry *parent, unsigned int hash) { hash += (unsigned long) parent / L1_CACHE_BYTES; - return dentry_hashtable + hash_32(hash, d_hash_shift); + hash = hash + (hash >> D_HASHBITS); + return dentry_hashtable + (hash & D_HASHMASK); } /* Statistics gathering. */ diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c index 41baf8b5e0e..5eab400e259 100644 --- a/fs/ecryptfs/inode.c +++ b/fs/ecryptfs/inode.c @@ -1051,7 +1051,7 @@ ecryptfs_setxattr(struct dentry *dentry, const char *name, const void *value, } rc = vfs_setxattr(lower_dentry, name, value, size, flags); - if (!rc && dentry->d_inode) + if (!rc) fsstack_copy_attr_all(dentry->d_inode, lower_dentry->d_inode); out: return rc; diff --git a/fs/exec.c b/fs/exec.c index dd6aa61c854..bb60cda5ee3 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -654,10 +654,10 @@ int setup_arg_pages(struct linux_binprm *bprm, unsigned long rlim_stack; #ifdef CONFIG_STACK_GROWSUP - /* Limit stack size */ + /* Limit stack size to 1GB */ stack_base = rlimit_max(RLIMIT_STACK); - if (stack_base > STACK_SIZE_MAX) - stack_base = STACK_SIZE_MAX; + if (stack_base > (1 << 30)) + stack_base = 1 << 30; /* Make sure we didn't let the argument array grow too large. 
*/ if (vma->vm_end - vma->vm_start > stack_base) diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c index 99d84ce038b..0a87bb10998 100644 --- a/fs/ext2/inode.c +++ b/fs/ext2/inode.c @@ -632,8 +632,6 @@ static int ext2_get_blocks(struct inode *inode, int count = 0; ext2_fsblk_t first_block = 0; - BUG_ON(maxblocks == 0); - depth = ext2_block_to_path(inode,iblock,offsets,&blocks_to_boundary); if (depth == 0) diff --git a/fs/ext2/xip.c b/fs/ext2/xip.c index e98171a11cf..1c3312858fc 100644 --- a/fs/ext2/xip.c +++ b/fs/ext2/xip.c @@ -35,7 +35,6 @@ __ext2_get_block(struct inode *inode, pgoff_t pgoff, int create, int rc; memset(&tmp, 0, sizeof(struct buffer_head)); - tmp.b_size = 1 << inode->i_blkbits; rc = ext2_get_block(inode, pgoff, &tmp, create); *result = tmp.b_blocknr; diff --git a/fs/ext3/super.c b/fs/ext3/super.c index 882d4bdfd42..6356665a74b 100644 --- a/fs/ext3/super.c +++ b/fs/ext3/super.c @@ -1300,6 +1300,13 @@ set_qf_format: "not specified."); return 0; } + } else { + if (sbi->s_jquota_fmt) { + ext3_msg(sb, KERN_ERR, "error: journaled quota format " + "specified with no journaling " + "enabled."); + return 0; + } } #endif return 1; diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index e4c4ac07cc3..790b14c5f26 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -2088,7 +2088,6 @@ int do_journal_get_write_access(handle_t *handle, #define CONVERT_INLINE_DATA 2 extern struct inode *ext4_iget(struct super_block *, unsigned long); -extern struct inode *ext4_iget_normal(struct super_block *, unsigned long); extern int ext4_write_inode(struct inode *, struct writeback_control *); extern int ext4_setattr(struct dentry *, struct iattr *); extern int ext4_getattr(struct vfsmount *mnt, struct dentry *dentry, @@ -2261,8 +2260,8 @@ extern int ext4_register_li_request(struct super_block *sb, static inline int ext4_has_group_desc_csum(struct super_block *sb) { return EXT4_HAS_RO_COMPAT_FEATURE(sb, - EXT4_FEATURE_RO_COMPAT_GDT_CSUM) || - (EXT4_SB(sb)->s_chksum_driver != NULL); + EXT4_FEATURE_RO_COMPAT_GDT_CSUM | + EXT4_FEATURE_RO_COMPAT_METADATA_CSUM); } static inline ext4_fsblk_t ext4_blocks_count(struct ext4_super_block *es) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 84d817b842a..a2b625e279d 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -2511,27 +2511,6 @@ ext4_ext_rm_leaf(handle_t *handle, struct inode *inode, ex_ee_block = le32_to_cpu(ex->ee_block); ex_ee_len = ext4_ext_get_actual_len(ex); - /* - * If we're starting with an extent other than the last one in the - * node, we need to see if it shares a cluster with the extent to - * the right (towards the end of the file). If its leftmost cluster - * is this extent's rightmost cluster and it is not cluster aligned, - * we'll mark it as a partial that is not to be deallocated. 
- */ - - if (ex != EXT_LAST_EXTENT(eh)) { - ext4_fsblk_t current_pblk, right_pblk; - long long current_cluster, right_cluster; - - current_pblk = ext4_ext_pblock(ex) + ex_ee_len - 1; - current_cluster = (long long)EXT4_B2C(sbi, current_pblk); - right_pblk = ext4_ext_pblock(ex + 1); - right_cluster = (long long)EXT4_B2C(sbi, right_pblk); - if (current_cluster == right_cluster && - EXT4_PBLK_COFF(sbi, right_pblk)) - *partial_cluster = -right_cluster; - } - trace_ext4_ext_rm_leaf(inode, start, ex, *partial_cluster); while (ex >= EXT_FIRST_EXTENT(eh) && @@ -4053,7 +4032,7 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode, struct ext4_extent newex, *ex, *ex2; struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); ext4_fsblk_t newblock = 0; - int free_on_err = 0, err = 0, depth, ret; + int free_on_err = 0, err = 0, depth; unsigned int allocated = 0, offset = 0; unsigned int allocated_clusters = 0; struct ext4_allocation_request ar; @@ -4114,13 +4093,9 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode, if (!ext4_ext_is_uninitialized(ex)) goto out; - ret = ext4_ext_handle_uninitialized_extents( + allocated = ext4_ext_handle_uninitialized_extents( handle, inode, map, path, flags, allocated, newblock); - if (ret < 0) - err = ret; - else - allocated = ret; goto out3; } } diff --git a/fs/ext4/file.c b/fs/ext4/file.c index 4635788e14b..b19f0a457f3 100644 --- a/fs/ext4/file.c +++ b/fs/ext4/file.c @@ -82,7 +82,7 @@ ext4_unaligned_aio(struct inode *inode, const struct iovec *iov, size_t count = iov_length(iov, nr_segs); loff_t final_size = pos + count; - if (pos >= i_size_read(inode)) + if (pos >= inode->i_size) return 0; if ((pos & blockmask) || (final_size & blockmask)) diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c index 4d4718cf25a..3da3bf1b2cd 100644 --- a/fs/ext4/ialloc.c +++ b/fs/ext4/ialloc.c @@ -780,23 +780,12 @@ got: goto out; } - BUFFER_TRACE(group_desc_bh, "get_write_access"); - err = ext4_journal_get_write_access(handle, group_desc_bh); - if (err) { - ext4_std_error(sb, err); - goto out; - } - /* We may have to initialize the block bitmap if it isn't already */ if (ext4_has_group_desc_csum(sb) && gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { struct buffer_head *block_bitmap_bh; block_bitmap_bh = ext4_read_block_bitmap(sb, group); - if (!block_bitmap_bh) { - err = -EIO; - goto out; - } BUFFER_TRACE(block_bitmap_bh, "get block bitmap access"); err = ext4_journal_get_write_access(handle, block_bitmap_bh); if (err) { @@ -827,6 +816,13 @@ got: } } + BUFFER_TRACE(group_desc_bh, "get_write_access"); + err = ext4_journal_get_write_access(handle, group_desc_bh); + if (err) { + ext4_std_error(sb, err); + goto out; + } + /* Update the relevant bg descriptor fields */ if (ext4_has_group_desc_csum(sb)) { int free; diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c index 58906146968..b8d5d351e24 100644 --- a/fs/ext4/indirect.c +++ b/fs/ext4/indirect.c @@ -390,13 +390,7 @@ static int ext4_alloc_branch(handle_t *handle, struct inode *inode, return 0; failed: for (; i >= 0; i--) { - /* - * We want to ext4_forget() only freshly allocated indirect - * blocks. Buffer for new_blocks[i-1] is at branch[i].bh and - * buffer at branch[0].bh is indirect block / inode already - * existing before ext4_alloc_branch() was called. 
- */ - if (i > 0 && i != indirect_blks && branch[i].bh) + if (i != indirect_blks && branch[i].bh) ext4_forget(handle, 1, inode, branch[i].bh, branch[i].bh->b_blocknr); ext4_free_blocks(handle, inode, NULL, new_blocks[i], @@ -1331,24 +1325,16 @@ static int free_hole_blocks(handle_t *handle, struct inode *inode, blk = *i_data; if (level > 0) { ext4_lblk_t first2; - ext4_lblk_t count2; - bh = sb_bread(inode->i_sb, le32_to_cpu(blk)); if (!bh) { EXT4_ERROR_INODE_BLOCK(inode, le32_to_cpu(blk), "Read failure"); return -EIO; } - if (first > offset) { - first2 = first - offset; - count2 = count; - } else { - first2 = 0; - count2 = count - (offset - first); - } + first2 = (first > offset) ? first - offset : 0; ret = free_hole_blocks(handle, inode, bh, (__le32 *)bh->b_data, level - 1, - first2, count2, + first2, count - offset, inode->i_sb->s_blocksize >> 2); if (ret) { brelse(bh); diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index e48bd5a1814..21dff8f236f 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -38,7 +38,6 @@ #include <linux/slab.h> #include <linux/ratelimit.h> #include <linux/aio.h> -#include <linux/bitops.h> #include "ext4_jbd2.h" #include "xattr.h" @@ -2647,20 +2646,6 @@ static int ext4_nonda_switch(struct super_block *sb) return 0; } -/* We always reserve for an inode update; the superblock could be there too */ -static int ext4_da_write_credits(struct inode *inode, loff_t pos, unsigned len) -{ - if (likely(EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, - EXT4_FEATURE_RO_COMPAT_LARGE_FILE))) - return 1; - - if (pos + len <= 0x7fffffffULL) - return 1; - - /* We might need to update the superblock to set LARGE_FILE */ - return 2; -} - static int ext4_da_write_begin(struct file *file, struct address_space *mapping, loff_t pos, unsigned len, unsigned flags, struct page **pagep, void **fsdata) @@ -2711,8 +2696,7 @@ retry_grab: * of file which has an already mapped buffer. 
*/ retry_journal: - handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, - ext4_da_write_credits(inode, pos, len)); + handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, 1); if (IS_ERR(handle)) { page_cache_release(page); return PTR_ERR(handle); @@ -4060,20 +4044,18 @@ int ext4_get_inode_loc(struct inode *inode, struct ext4_iloc *iloc) void ext4_set_inode_flags(struct inode *inode) { unsigned int flags = EXT4_I(inode)->i_flags; - unsigned int new_fl = 0; + inode->i_flags &= ~(S_SYNC|S_APPEND|S_IMMUTABLE|S_NOATIME|S_DIRSYNC); if (flags & EXT4_SYNC_FL) - new_fl |= S_SYNC; + inode->i_flags |= S_SYNC; if (flags & EXT4_APPEND_FL) - new_fl |= S_APPEND; + inode->i_flags |= S_APPEND; if (flags & EXT4_IMMUTABLE_FL) - new_fl |= S_IMMUTABLE; + inode->i_flags |= S_IMMUTABLE; if (flags & EXT4_NOATIME_FL) - new_fl |= S_NOATIME; + inode->i_flags |= S_NOATIME; if (flags & EXT4_DIRSYNC_FL) - new_fl |= S_DIRSYNC; - set_mask_bits(&inode->i_flags, - S_SYNC|S_APPEND|S_IMMUTABLE|S_NOATIME|S_DIRSYNC, new_fl); + inode->i_flags |= S_DIRSYNC; } /* Propagate flags from i_flags to EXT4_I(inode)->i_flags */ @@ -4366,13 +4348,6 @@ bad_inode: return ERR_PTR(ret); } -struct inode *ext4_iget_normal(struct super_block *sb, unsigned long ino) -{ - if (ino < EXT4_FIRST_INO(sb) && ino != EXT4_ROOT_INO) - return ERR_PTR(-EIO); - return ext4_iget(sb, ino); -} - static int ext4_inode_blocks_set(handle_t *handle, struct ext4_inode *raw_inode, struct ext4_inode_info *ei) diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c index d4fd81c44f5..42624a995b0 100644 --- a/fs/ext4/ioctl.c +++ b/fs/ext4/ioctl.c @@ -549,17 +549,9 @@ group_add_out: } case EXT4_IOC_SWAP_BOOT: - { - int err; if (!(filp->f_mode & FMODE_WRITE)) return -EBADF; - err = mnt_want_write_file(filp); - if (err) - return err; - err = swap_inode_boot_loader(sb, inode); - mnt_drop_write_file(filp); - return err; - } + return swap_inode_boot_loader(sb, inode); case EXT4_IOC_RESIZE_FS: { ext4_fsblk_t n_blocks_count; diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 162b80d527a..fba960ee26d 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -1396,8 +1396,6 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b, int last = first + count - 1; struct super_block *sb = e4b->bd_sb; - if (WARN_ON(count == 0)) - return; BUG_ON(last >= (sb->s_blocksize << 3)); assert_spin_locked(ext4_group_lock_ptr(sb, e4b->bd_group)); mb_check_buddy(e4b); @@ -3118,7 +3116,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac, } BUG_ON(start + size <= ac->ac_o_ex.fe_logical && start > ac->ac_o_ex.fe_logical); - BUG_ON(size <= 0 || size > EXT4_BLOCKS_PER_GROUP(ac->ac_sb)); + BUG_ON(size <= 0 || size > EXT4_CLUSTERS_PER_GROUP(ac->ac_sb)); /* now prepare goal request */ @@ -3179,30 +3177,8 @@ static void ext4_mb_collect_stats(struct ext4_allocation_context *ac) static void ext4_discard_allocated_blocks(struct ext4_allocation_context *ac) { struct ext4_prealloc_space *pa = ac->ac_pa; - struct ext4_buddy e4b; - int err; - if (pa == NULL) { - if (ac->ac_f_ex.fe_len == 0) - return; - err = ext4_mb_load_buddy(ac->ac_sb, ac->ac_f_ex.fe_group, &e4b); - if (err) { - /* - * This should never happen since we pin the - * pages in the ext4_allocation_context so - * ext4_mb_load_buddy() should never fail. 
- */ - WARN(1, "mb_load_buddy failed (%d)", err); - return; - } - ext4_lock_group(ac->ac_sb, ac->ac_f_ex.fe_group); - mb_free_blocks(ac->ac_inode, &e4b, ac->ac_f_ex.fe_start, - ac->ac_f_ex.fe_len); - ext4_unlock_group(ac->ac_sb, ac->ac_f_ex.fe_group); - ext4_mb_unload_buddy(&e4b); - return; - } - if (pa->pa_type == MB_INODE_PA) + if (pa && pa->pa_type == MB_INODE_PA) pa->pa_free += ac->ac_b_ex.fe_len; } diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index f1312173fa9..ab2f6dc44b3 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -1430,7 +1430,7 @@ static struct dentry *ext4_lookup(struct inode *dir, struct dentry *dentry, unsi dentry->d_name.name); return ERR_PTR(-EIO); } - inode = ext4_iget_normal(dir->i_sb, ino); + inode = ext4_iget(dir->i_sb, ino); if (inode == ERR_PTR(-ESTALE)) { EXT4_ERROR_INODE(dir, "deleted inode referenced: %u", @@ -1461,7 +1461,7 @@ struct dentry *ext4_get_parent(struct dentry *child) return ERR_PTR(-EIO); } - return d_obtain_alias(ext4_iget_normal(child->d_inode->i_sb, ino)); + return d_obtain_alias(ext4_iget(child->d_inode->i_sb, ino)); } /* diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c index b12a4427aed..4acf1f78881 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c @@ -384,17 +384,6 @@ int ext4_bio_write_page(struct ext4_io_submit *io, ClearPageError(page); /* - * Comments copied from block_write_full_page_endio: - * - * The page straddles i_size. It must be zeroed out on each and every - * writepage invocation because it may be mmapped. "A file is mapped - * in multiples of the page size. For a file that is not a multiple of - * the page size, the remaining memory is zeroed when mapped, and - * writes to that region are not written out to the file." - */ - if (len < PAGE_CACHE_SIZE) - zero_user_segment(page, len, PAGE_CACHE_SIZE); - /* * In the first loop we prepare and mark buffers to submit. We have to * mark all buffers in the page before submitting so that * end_page_writeback() cannot be called from ext4_bio_end_io() when IO @@ -405,6 +394,19 @@ int ext4_bio_write_page(struct ext4_io_submit *io, do { block_start = bh_offset(bh); if (block_start >= len) { + /* + * Comments copied from block_write_full_page_endio: + * + * The page straddles i_size. It must be zeroed out on + * each and every writepage invocation because it may + * be mmapped. "A file is mapped in multiples of the + * page size. For a file that is not a multiple of + * the page size, the remaining memory is zeroed when + * mapped, and writes to that region are not written + * out to the file." 
+ */ + zero_user_segment(page, block_start, + block_start + blocksize); clear_buffer_dirty(bh); set_buffer_uptodate(bh); continue; diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c index a69bd74ed39..c503850a61a 100644 --- a/fs/ext4/resize.c +++ b/fs/ext4/resize.c @@ -1066,7 +1066,7 @@ static void update_backups(struct super_block *sb, int blk_off, char *data, break; if (meta_bg == 0) - backup_block = ((ext4_fsblk_t)group) * bpg + blk_off; + backup_block = group * bpg + blk_off; else backup_block = (ext4_group_first_block_no(sb, group) + ext4_bg_has_super(sb, group)); diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 356572ffabb..468e26500df 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -964,7 +964,7 @@ static struct inode *ext4_nfs_get_inode(struct super_block *sb, * Currently we don't know the generation for parent directory, so * a generation of 0 means "accept any" */ - inode = ext4_iget_normal(sb, ino); + inode = ext4_iget(sb, ino); if (IS_ERR(inode)) return ERR_CAST(inode); if (generation && inode->i_generation != generation) { @@ -1483,6 +1483,8 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token, arg = JBD2_DEFAULT_MAX_COMMIT_AGE; sbi->s_commit_interval = HZ * arg; } else if (token == Opt_max_batch_time) { + if (arg == 0) + arg = EXT4_DEF_MAX_BATCH_TIME; sbi->s_max_batch_time = arg; } else if (token == Opt_min_batch_time) { sbi->s_min_batch_time = arg; @@ -1632,6 +1634,13 @@ static int parse_options(char *options, struct super_block *sb, "not specified"); return 0; } + } else { + if (sbi->s_jquota_fmt) { + ext4_msg(sb, KERN_ERR, "journaled quota format " + "specified with no journaling " + "enabled"); + return 0; + } } #endif if (test_opt(sb, DIOREAD_NOLOCK)) { @@ -1950,10 +1959,6 @@ static __le16 ext4_group_desc_csum(struct ext4_sb_info *sbi, __u32 block_group, } /* old crc16 code */ - if (!(sbi->s_es->s_feature_ro_compat & - cpu_to_le32(EXT4_FEATURE_RO_COMPAT_GDT_CSUM))) - return 0; - offset = offsetof(struct ext4_group_desc, bg_checksum); crc = crc16(~0, sbi->s_es->s_uuid, sizeof(sbi->s_es->s_uuid)); @@ -2682,11 +2687,10 @@ static void print_daily_error_info(unsigned long arg) es = sbi->s_es; if (es->s_error_count) - /* fsck newer than v1.41.13 is needed to clean this condition. 
*/ - ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u", + ext4_msg(sb, KERN_NOTICE, "error count: %u", le32_to_cpu(es->s_error_count)); if (es->s_first_error_time) { - printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d", + printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d", sb->s_id, le32_to_cpu(es->s_first_error_time), (int) sizeof(es->s_first_error_func), es->s_first_error_func, @@ -2700,7 +2704,7 @@ static void print_daily_error_info(unsigned long arg) printk("\n"); } if (es->s_last_error_time) { - printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d", + printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d", sb->s_id, le32_to_cpu(es->s_last_error_time), (int) sizeof(es->s_last_error_func), es->s_last_error_func, diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c index a20816e7eb3..1423c4816a4 100644 --- a/fs/ext4/xattr.c +++ b/fs/ext4/xattr.c @@ -189,28 +189,14 @@ ext4_listxattr(struct dentry *dentry, char *buffer, size_t size) } static int -ext4_xattr_check_names(struct ext4_xattr_entry *entry, void *end, - void *value_start) +ext4_xattr_check_names(struct ext4_xattr_entry *entry, void *end) { - struct ext4_xattr_entry *e = entry; - - while (!IS_LAST_ENTRY(e)) { - struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(e); - if ((void *)next >= end) - return -EIO; - e = next; - } - while (!IS_LAST_ENTRY(entry)) { - if (entry->e_value_size != 0 && - (value_start + le16_to_cpu(entry->e_value_offs) < - (void *)e + sizeof(__u32) || - value_start + le16_to_cpu(entry->e_value_offs) + - le32_to_cpu(entry->e_value_size) > end)) + struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(entry); + if ((void *)next >= end) return -EIO; - entry = EXT4_XATTR_NEXT(entry); + entry = next; } - return 0; } @@ -227,8 +213,7 @@ ext4_xattr_check_block(struct inode *inode, struct buffer_head *bh) return -EIO; if (!ext4_xattr_block_csum_verify(inode, bh->b_blocknr, BHDR(bh))) return -EIO; - error = ext4_xattr_check_names(BFIRST(bh), bh->b_data + bh->b_size, - bh->b_data); + error = ext4_xattr_check_names(BFIRST(bh), bh->b_data + bh->b_size); if (!error) set_buffer_verified(bh); return error; @@ -344,7 +329,7 @@ ext4_xattr_ibody_get(struct inode *inode, int name_index, const char *name, header = IHDR(inode, raw_inode); entry = IFIRST(header); end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size; - error = ext4_xattr_check_names(entry, end, entry); + error = ext4_xattr_check_names(entry, end); if (error) goto cleanup; error = ext4_xattr_find_entry(&entry, name_index, name, @@ -472,7 +457,7 @@ ext4_xattr_ibody_list(struct dentry *dentry, char *buffer, size_t buffer_size) raw_inode = ext4_raw_inode(&iloc); header = IHDR(inode, raw_inode); end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size; - error = ext4_xattr_check_names(IFIRST(header), end, IFIRST(header)); + error = ext4_xattr_check_names(IFIRST(header), end); if (error) goto cleanup; error = ext4_xattr_list_entries(dentry, IFIRST(header), @@ -532,8 +517,8 @@ static void ext4_xattr_update_super_block(handle_t *handle, } /* - * Release the xattr block BH: If the reference count is > 1, decrement it; - * otherwise free the block. + * Release the xattr block BH: If the reference count is > 1, decrement + * it; otherwise free the block. 
*/ static void ext4_xattr_release_block(handle_t *handle, struct inode *inode, @@ -553,31 +538,16 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode, if (ce) mb_cache_entry_free(ce); get_bh(bh); - unlock_buffer(bh); ext4_free_blocks(handle, inode, bh, 0, 1, EXT4_FREE_BLOCKS_METADATA | EXT4_FREE_BLOCKS_FORGET); + unlock_buffer(bh); } else { le32_add_cpu(&BHDR(bh)->h_refcount, -1); if (ce) mb_cache_entry_release(ce); - /* - * Beware of this ugliness: Releasing of xattr block references - * from different inodes can race and so we have to protect - * from a race where someone else frees the block (and releases - * its journal_head) before we are done dirtying the buffer. In - * nojournal mode this race is harmless and we actually cannot - * call ext4_handle_dirty_xattr_block() with locked buffer as - * that function can call sync_dirty_buffer() so for that case - * we handle the dirtying after unlocking the buffer. - */ - if (ext4_handle_valid(handle)) - error = ext4_handle_dirty_xattr_block(handle, inode, - bh); unlock_buffer(bh); - if (!ext4_handle_valid(handle)) - error = ext4_handle_dirty_xattr_block(handle, inode, - bh); + error = ext4_handle_dirty_xattr_block(handle, inode, bh); if (IS_SYNC(inode)) ext4_handle_sync(handle); dquot_free_block(inode, EXT4_C2B(EXT4_SB(inode->i_sb), 1)); @@ -987,8 +957,7 @@ int ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i, is->s.here = is->s.first; is->s.end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size; if (ext4_test_inode_state(inode, EXT4_STATE_XATTR)) { - error = ext4_xattr_check_names(IFIRST(header), is->s.end, - IFIRST(header)); + error = ext4_xattr_check_names(IFIRST(header), is->s.end); if (error) return error; /* Find the named attribute. */ diff --git a/fs/file_table.c b/fs/file_table.c index 54a34be444f..485dc0eddd6 100644 --- a/fs/file_table.c +++ b/fs/file_table.c @@ -211,10 +211,10 @@ static void drop_file_write_access(struct file *file) struct dentry *dentry = file->f_path.dentry; struct inode *inode = dentry->d_inode; + put_write_access(inode); + if (special_file(inode->i_mode)) return; - - put_write_access(inode); if (file_check_writeable(file) != 0) return; __mnt_drop_write(mnt); diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index 556af9eff33..6c0f509060c 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -1051,10 +1051,10 @@ void bdi_writeback_workfn(struct work_struct *work) trace_writeback_pages_written(pages_written); } - if (!list_empty(&bdi->work_list)) - mod_delayed_work(bdi_wq, &wb->dwork, 0); - else if (wb_has_dirty_io(wb) && dirty_writeback_interval) - bdi_wakeup_thread_delayed(bdi); + if (!list_empty(&bdi->work_list) || + (wb_has_dirty_io(wb) && dirty_writeback_interval)) + queue_delayed_work(bdi_wq, &wb->dwork, + msecs_to_jiffies(dirty_writeback_interval * 10)); current->flags &= ~PF_SWAPWRITE; } diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c index 39a986e1da9..b5718516825 100644 --- a/fs/fuse/inode.c +++ b/fs/fuse/inode.c @@ -461,17 +461,6 @@ static const match_table_t tokens = { {OPT_ERR, NULL} }; -static int fuse_match_uint(substring_t *s, unsigned int *res) -{ - int err = -ENOMEM; - char *buf = match_strdup(s); - if (buf) { - err = kstrtouint(buf, 10, res); - kfree(buf); - } - return err; -} - static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) { char *p; @@ -482,7 +471,6 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) while ((p = strsep(&opt, ",")) != NULL) { int token; int value; - unsigned uv; substring_t 
args[MAX_OPT_ARGS]; if (!*p) continue; @@ -506,18 +494,18 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) break; case OPT_USER_ID: - if (fuse_match_uint(&args[0], &uv)) + if (match_int(&args[0], &value)) return 0; - d->user_id = make_kuid(current_user_ns(), uv); + d->user_id = make_kuid(current_user_ns(), value); if (!uid_valid(d->user_id)) return 0; d->user_id_present = 1; break; case OPT_GROUP_ID: - if (fuse_match_uint(&args[0], &uv)) + if (match_int(&args[0], &value)) return 0; - d->group_id = make_kgid(current_user_ns(), uv); + d->group_id = make_kgid(current_user_ns(), value); if (!gid_valid(d->group_id)) return 0; d->group_id_present = 1; diff --git a/fs/inode.c b/fs/inode.c index 604a2847c35..0e7953d67ab 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -1837,18 +1837,14 @@ EXPORT_SYMBOL(inode_init_owner); * inode_owner_or_capable - check current task permissions to inode * @inode: inode being checked * - * Return true if current either has CAP_FOWNER in a namespace with the - * inode owner uid mapped, or owns the file. + * Return true if current either has CAP_FOWNER to the inode, or + * owns the file. */ bool inode_owner_or_capable(const struct inode *inode) { - struct user_namespace *ns; - if (uid_eq(current_fsuid(), inode->i_uid)) return true; - - ns = current_user_ns(); - if (ns_capable(ns, CAP_FOWNER) && kuid_has_mapping(ns, inode->i_uid)) + if (inode_capable(inode, CAP_FOWNER)) return true; return false; } diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c index 10489bbd40f..d3705490ff9 100644 --- a/fs/isofs/inode.c +++ b/fs/isofs/inode.c @@ -69,7 +69,7 @@ static void isofs_put_super(struct super_block *sb) return; } -static int isofs_read_inode(struct inode *, int relocated); +static int isofs_read_inode(struct inode *); static int isofs_statfs (struct dentry *, struct kstatfs *); static struct kmem_cache *isofs_inode_cachep; @@ -1274,7 +1274,7 @@ out_toomany: goto out; } -static int isofs_read_inode(struct inode *inode, int relocated) +static int isofs_read_inode(struct inode *inode) { struct super_block *sb = inode->i_sb; struct isofs_sb_info *sbi = ISOFS_SB(sb); @@ -1419,7 +1419,7 @@ static int isofs_read_inode(struct inode *inode, int relocated) */ if (!high_sierra) { - parse_rock_ridge_inode(de, inode, relocated); + parse_rock_ridge_inode(de, inode); /* if we want uid/gid set, override the rock ridge setting */ if (sbi->s_uid_set) inode->i_uid = sbi->s_uid; @@ -1498,10 +1498,9 @@ static int isofs_iget5_set(struct inode *ino, void *data) * offset that point to the underlying meta-data for the inode. 
The * code below is otherwise similar to the iget() code in * include/linux/fs.h */ -struct inode *__isofs_iget(struct super_block *sb, - unsigned long block, - unsigned long offset, - int relocated) +struct inode *isofs_iget(struct super_block *sb, + unsigned long block, + unsigned long offset) { unsigned long hashval; struct inode *inode; @@ -1523,7 +1522,7 @@ struct inode *__isofs_iget(struct super_block *sb, return ERR_PTR(-ENOMEM); if (inode->i_state & I_NEW) { - ret = isofs_read_inode(inode, relocated); + ret = isofs_read_inode(inode); if (ret < 0) { iget_failed(inode); inode = ERR_PTR(ret); diff --git a/fs/isofs/isofs.h b/fs/isofs/isofs.h index 0ac4c1f73fb..99167238518 100644 --- a/fs/isofs/isofs.h +++ b/fs/isofs/isofs.h @@ -107,7 +107,7 @@ extern int iso_date(char *, int); struct inode; /* To make gcc happy */ -extern int parse_rock_ridge_inode(struct iso_directory_record *, struct inode *, int relocated); +extern int parse_rock_ridge_inode(struct iso_directory_record *, struct inode *); extern int get_rock_ridge_filename(struct iso_directory_record *, char *, struct inode *); extern int isofs_name_translate(struct iso_directory_record *, char *, struct inode *); @@ -118,24 +118,9 @@ extern struct dentry *isofs_lookup(struct inode *, struct dentry *, unsigned int extern struct buffer_head *isofs_bread(struct inode *, sector_t); extern int isofs_get_blocks(struct inode *, sector_t, struct buffer_head **, unsigned long); -struct inode *__isofs_iget(struct super_block *sb, - unsigned long block, - unsigned long offset, - int relocated); - -static inline struct inode *isofs_iget(struct super_block *sb, - unsigned long block, - unsigned long offset) -{ - return __isofs_iget(sb, block, offset, 0); -} - -static inline struct inode *isofs_iget_reloc(struct super_block *sb, - unsigned long block, - unsigned long offset) -{ - return __isofs_iget(sb, block, offset, 1); -} +extern struct inode *isofs_iget(struct super_block *sb, + unsigned long block, + unsigned long offset); /* Because the inode number is no longer relevant to finding the * underlying meta-data for an inode, we are free to choose a more diff --git a/fs/isofs/rock.c b/fs/isofs/rock.c index f488bbae541..c0bf42472e4 100644 --- a/fs/isofs/rock.c +++ b/fs/isofs/rock.c @@ -288,16 +288,12 @@ eio: goto out; } -#define RR_REGARD_XA 1 -#define RR_RELOC_DE 2 - static int parse_rock_ridge_inode_internal(struct iso_directory_record *de, - struct inode *inode, int flags) + struct inode *inode, int regard_xa) { int symlink_len = 0; int cnt, sig; - unsigned int reloc_block; struct inode *reloc; struct rock_ridge *rr; int rootflag; @@ -309,7 +305,7 @@ parse_rock_ridge_inode_internal(struct iso_directory_record *de, init_rock_state(&rs, inode); setup_rock_ridge(de, inode, &rs); - if (flags & RR_REGARD_XA) { + if (regard_xa) { rs.chr += 14; rs.len -= 14; if (rs.len < 0) @@ -489,22 +485,12 @@ repeat: "relocated directory\n"); goto out; case SIG('C', 'L'): - if (flags & RR_RELOC_DE) { - printk(KERN_ERR - "ISOFS: Recursive directory relocation " - "is not supported\n"); - goto eio; - } - reloc_block = isonum_733(rr->u.CL.location); - if (reloc_block == ISOFS_I(inode)->i_iget5_block && - ISOFS_I(inode)->i_iget5_offset == 0) { - printk(KERN_ERR - "ISOFS: Directory relocation points to " - "itself\n"); - goto eio; - } - ISOFS_I(inode)->i_first_extent = reloc_block; - reloc = isofs_iget_reloc(inode->i_sb, reloc_block, 0); + ISOFS_I(inode)->i_first_extent = + isonum_733(rr->u.CL.location); + reloc = + isofs_iget(inode->i_sb, + 
ISOFS_I(inode)->i_first_extent, + 0); if (IS_ERR(reloc)) { ret = PTR_ERR(reloc); goto out; @@ -651,11 +637,9 @@ static char *get_symlink_chunk(char *rpnt, struct rock_ridge *rr, char *plimit) return rpnt; } -int parse_rock_ridge_inode(struct iso_directory_record *de, struct inode *inode, - int relocated) +int parse_rock_ridge_inode(struct iso_directory_record *de, struct inode *inode) { - int flags = relocated ? RR_RELOC_DE : 0; - int result = parse_rock_ridge_inode_internal(de, inode, flags); + int result = parse_rock_ridge_inode_internal(de, inode, 0); /* * if rockridge flag was reset and we didn't look for attributes @@ -663,8 +647,7 @@ int parse_rock_ridge_inode(struct iso_directory_record *de, struct inode *inode, */ if ((ISOFS_SB(inode->i_sb)->s_rock_offset == -1) && (ISOFS_SB(inode->i_sb)->s_rock == 2)) { - result = parse_rock_ridge_inode_internal(de, inode, - flags | RR_REGARD_XA); + result = parse_rock_ridge_inode_internal(de, inode, 14); } return result; } diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c index 6e2fb5cbacd..626846bac32 100644 --- a/fs/jbd2/recovery.c +++ b/fs/jbd2/recovery.c @@ -427,7 +427,6 @@ static int do_one_pass(journal_t *journal, int tag_bytes = journal_tag_bytes(journal); __u32 crc32_sum = ~0; /* Transactional Checksums */ int descr_csum_size = 0; - int block_error = 0; /* * First thing is to establish what we expect to find in the log @@ -522,7 +521,6 @@ static int do_one_pass(journal_t *journal, !jbd2_descr_block_csum_verify(journal, bh->b_data)) { err = -EIO; - brelse(bh); goto failed; } @@ -601,8 +599,7 @@ static int do_one_pass(journal_t *journal, "checksum recovering " "block %llu in log\n", blocknr); - block_error = 1; - goto skip_write; + continue; } /* Find a buffer for the new @@ -801,8 +798,7 @@ static int do_one_pass(journal_t *journal, success = -EIO; } } - if (block_error && success == 0) - success = -EIO; + return success; failed: diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c index ec34e11d685..a6917125f21 100644 --- a/fs/jbd2/transaction.c +++ b/fs/jbd2/transaction.c @@ -1442,12 +1442,9 @@ int jbd2_journal_stop(handle_t *handle) * to perform a synchronous write. We do this to detect the * case where a single process is doing a stream of sync * writes. No point in waiting for joiners in that case. - * - * Setting max_batch_time to 0 disables this completely. 
*/ pid = current->pid; - if (handle->h_sync && journal->j_last_sync_writer != pid && - journal->j_max_batch_time) { + if (handle->h_sync && journal->j_last_sync_writer != pid) { u64 commit_time, trans_time; journal->j_last_sync_writer = pid; diff --git a/fs/jffs2/compr_rtime.c b/fs/jffs2/compr_rtime.c index 406d9cc84ba..16a5047903a 100644 --- a/fs/jffs2/compr_rtime.c +++ b/fs/jffs2/compr_rtime.c @@ -33,7 +33,7 @@ static int jffs2_rtime_compress(unsigned char *data_in, unsigned char *cpage_out, uint32_t *sourcelen, uint32_t *dstlen) { - unsigned short positions[256]; + short positions[256]; int outpos = 0; int pos=0; @@ -74,7 +74,7 @@ static int jffs2_rtime_decompress(unsigned char *data_in, unsigned char *cpage_out, uint32_t srclen, uint32_t destlen) { - unsigned short positions[256]; + short positions[256]; int outpos = 0; int pos=0; diff --git a/fs/jffs2/jffs2_fs_sb.h b/fs/jffs2/jffs2_fs_sb.h index 046fee8b6e9..413ef89c2d1 100644 --- a/fs/jffs2/jffs2_fs_sb.h +++ b/fs/jffs2/jffs2_fs_sb.h @@ -134,6 +134,8 @@ struct jffs2_sb_info { struct rw_semaphore wbuf_sem; /* Protects the write buffer */ struct delayed_work wbuf_dwork; /* write-buffer write-out work */ + int wbuf_queued; /* non-zero delayed work is queued */ + spinlock_t wbuf_dwork_lock; /* protects wbuf_dwork and and wbuf_queued */ unsigned char *oobbuf; int oobavail; /* How many bytes are available for JFFS2 in OOB */ diff --git a/fs/jffs2/nodelist.h b/fs/jffs2/nodelist.h index fa35ff79ab3..e4619b00f7c 100644 --- a/fs/jffs2/nodelist.h +++ b/fs/jffs2/nodelist.h @@ -231,7 +231,7 @@ struct jffs2_tmp_dnode_info uint32_t version; uint32_t data_crc; uint32_t partial_crc; - uint32_t csize; + uint16_t csize; uint16_t overlapped; }; diff --git a/fs/jffs2/nodemgmt.c b/fs/jffs2/nodemgmt.c index b6bd4affd9a..03310721712 100644 --- a/fs/jffs2/nodemgmt.c +++ b/fs/jffs2/nodemgmt.c @@ -179,7 +179,6 @@ int jffs2_reserve_space(struct jffs2_sb_info *c, uint32_t minsize, spin_unlock(&c->erase_completion_lock); schedule(); - remove_wait_queue(&c->erase_wait, &wait); } else spin_unlock(&c->erase_completion_lock); } else if (ret) @@ -212,25 +211,20 @@ out: int jffs2_reserve_space_gc(struct jffs2_sb_info *c, uint32_t minsize, uint32_t *len, uint32_t sumsize) { - int ret; + int ret = -EAGAIN; minsize = PAD(minsize); jffs2_dbg(1, "%s(): Requested 0x%x bytes\n", __func__, minsize); - while (true) { - spin_lock(&c->erase_completion_lock); + spin_lock(&c->erase_completion_lock); + while(ret == -EAGAIN) { ret = jffs2_do_reserve_space(c, minsize, len, sumsize); if (ret) { jffs2_dbg(1, "%s(): looping, ret is %d\n", __func__, ret); } - spin_unlock(&c->erase_completion_lock); - - if (ret == -EAGAIN) - cond_resched(); - else - break; } + spin_unlock(&c->erase_completion_lock); if (!ret) ret = jffs2_prealloc_raw_node_refs(c, c->nextblock, 1); diff --git a/fs/jffs2/wbuf.c b/fs/jffs2/wbuf.c index 09ed55190ee..a6597d60d76 100644 --- a/fs/jffs2/wbuf.c +++ b/fs/jffs2/wbuf.c @@ -1162,6 +1162,10 @@ static void delayed_wbuf_sync(struct work_struct *work) struct jffs2_sb_info *c = work_to_sb(work); struct super_block *sb = OFNI_BS_2SFFJ(c); + spin_lock(&c->wbuf_dwork_lock); + c->wbuf_queued = 0; + spin_unlock(&c->wbuf_dwork_lock); + if (!(sb->s_flags & MS_RDONLY)) { jffs2_dbg(1, "%s()\n", __func__); jffs2_flush_wbuf_gc(c, 0); @@ -1176,9 +1180,14 @@ void jffs2_dirty_trigger(struct jffs2_sb_info *c) if (sb->s_flags & MS_RDONLY) return; - delay = msecs_to_jiffies(dirty_writeback_interval * 10); - if (queue_delayed_work(system_long_wq, &c->wbuf_dwork, delay)) + 
spin_lock(&c->wbuf_dwork_lock); + if (!c->wbuf_queued) { jffs2_dbg(1, "%s()\n", __func__); + delay = msecs_to_jiffies(dirty_writeback_interval * 10); + queue_delayed_work(system_long_wq, &c->wbuf_dwork, delay); + c->wbuf_queued = 1; + } + spin_unlock(&c->wbuf_dwork_lock); } int jffs2_nand_flash_setup(struct jffs2_sb_info *c) @@ -1202,6 +1211,7 @@ int jffs2_nand_flash_setup(struct jffs2_sb_info *c) /* Initialise write buffer */ init_rwsem(&c->wbuf_sem); + spin_lock_init(&c->wbuf_dwork_lock); INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync); c->wbuf_pagesize = c->mtd->writesize; c->wbuf_ofs = 0xFFFFFFFF; @@ -1241,6 +1251,7 @@ int jffs2_dataflash_setup(struct jffs2_sb_info *c) { /* Initialize write buffer */ init_rwsem(&c->wbuf_sem); + spin_lock_init(&c->wbuf_dwork_lock); INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync); c->wbuf_pagesize = c->mtd->erasesize; @@ -1300,6 +1311,7 @@ int jffs2_nor_wbuf_flash_setup(struct jffs2_sb_info *c) { /* Initialize write buffer */ init_rwsem(&c->wbuf_sem); + spin_lock_init(&c->wbuf_dwork_lock); INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync); c->wbuf_pagesize = c->mtd->writesize; @@ -1334,6 +1346,7 @@ int jffs2_ubivol_setup(struct jffs2_sb_info *c) { return 0; init_rwsem(&c->wbuf_sem); + spin_lock_init(&c->wbuf_dwork_lock); INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync); c->wbuf_pagesize = c->mtd->writesize; diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c index 6ae664b489a..1812f026960 100644 --- a/fs/lockd/mon.c +++ b/fs/lockd/mon.c @@ -159,12 +159,6 @@ static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res, msg.rpc_proc = &clnt->cl_procinfo[proc]; status = rpc_call_sync(clnt, &msg, RPC_TASK_SOFTCONN); - if (status == -ECONNREFUSED) { - dprintk("lockd: NSM upcall RPC failed, status=%d, forcing rebind\n", - status); - rpc_force_rebind(clnt); - status = rpc_call_sync(clnt, &msg, RPC_TASK_SOFTCONN); - } if (status < 0) dprintk("lockd: NSM upcall RPC failed, status=%d\n", status); diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 9c8a5a6d33d..a2aa97d4567 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -235,7 +235,6 @@ out_err: if (warned++ == 0) printk(KERN_WARNING "lockd_up: makesock failed, error=%d\n", err); - svc_shutdown_net(serv, net); return err; } @@ -253,11 +252,13 @@ static int lockd_up_net(struct svc_serv *serv, struct net *net) error = make_socks(serv, net); if (error < 0) - goto err_bind; + goto err_socks; set_grace_period(net); dprintk("lockd_up_net: per-net data created; net=%p\n", net); return 0; +err_socks: + svc_rpcb_cleanup(serv, net); err_bind: ln->nlmsvc_users--; return error; diff --git a/fs/locks.c b/fs/locks.c index 0274c953b07..cb424a4fed7 100644 --- a/fs/locks.c +++ b/fs/locks.c @@ -1243,10 +1243,11 @@ int __break_lease(struct inode *inode, unsigned int mode) restart: break_time = flock->fl_break_time; - if (break_time != 0) + if (break_time != 0) { break_time -= jiffies; - if (break_time == 0) - break_time++; + if (break_time == 0) + break_time++; + } locks_insert_block(flock, new_fl); unlock_flocks(); error = wait_event_interruptible_timeout(new_fl->fl_wait, diff --git a/fs/namei.c b/fs/namei.c index f7c4393f853..cccaf77e76c 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -34,7 +34,6 @@ #include <linux/device_cgroup.h> #include <linux/fs_struct.h> #include <linux/posix_acl.h> -#include <linux/hash.h> #include <asm/uaccess.h> #include "internal.h" @@ -322,11 +321,10 @@ int generic_permission(struct inode *inode, int mask) if (S_ISDIR(inode->i_mode)) { /* DACs are overridable for directories */ 
- if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE)) + if (inode_capable(inode, CAP_DAC_OVERRIDE)) return 0; if (!(mask & MAY_WRITE)) - if (capable_wrt_inode_uidgid(inode, - CAP_DAC_READ_SEARCH)) + if (inode_capable(inode, CAP_DAC_READ_SEARCH)) return 0; return -EACCES; } @@ -336,7 +334,7 @@ int generic_permission(struct inode *inode, int mask) * at least one exec bit set. */ if (!(mask & MAY_EXEC) || (inode->i_mode & S_IXUGO)) - if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE)) + if (inode_capable(inode, CAP_DAC_OVERRIDE)) return 0; /* @@ -344,7 +342,7 @@ int generic_permission(struct inode *inode, int mask) */ mask &= MAY_READ | MAY_WRITE | MAY_EXEC; if (mask == MAY_READ) - if (capable_wrt_inode_uidgid(inode, CAP_DAC_READ_SEARCH)) + if (inode_capable(inode, CAP_DAC_READ_SEARCH)) return 0; return -EACCES; @@ -1648,7 +1646,8 @@ static inline int can_lookup(struct inode *inode) static inline unsigned int fold_hash(unsigned long hash) { - return hash_64(hash, 32); + hash += hash >> (8*sizeof(int)); + return hash; } #else /* 32-bit case */ @@ -2200,7 +2199,7 @@ static inline int check_sticky(struct inode *dir, struct inode *inode) return 0; if (uid_eq(dir->i_uid, fsuid)) return 0; - return !capable_wrt_inode_uidgid(inode, CAP_FOWNER); + return !inode_capable(inode, CAP_FOWNER); } /* @@ -3656,7 +3655,6 @@ retry: out_dput: done_path_create(&new_path, new_dentry); if (retry_estale(error, how)) { - path_put(&old_path); how |= LOOKUP_REVAL; goto retry; } diff --git a/fs/namespace.c b/fs/namespace.c index 15482239778..a45ba4f267f 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -828,21 +828,8 @@ static struct mount *clone_mnt(struct mount *old, struct dentry *root, mnt->mnt.mnt_flags = old->mnt.mnt_flags & ~MNT_WRITE_HOLD; /* Don't allow unprivileged users to change mount flags */ - if (flag & CL_UNPRIVILEGED) { - mnt->mnt.mnt_flags |= MNT_LOCK_ATIME; - - if (mnt->mnt.mnt_flags & MNT_READONLY) - mnt->mnt.mnt_flags |= MNT_LOCK_READONLY; - - if (mnt->mnt.mnt_flags & MNT_NODEV) - mnt->mnt.mnt_flags |= MNT_LOCK_NODEV; - - if (mnt->mnt.mnt_flags & MNT_NOSUID) - mnt->mnt.mnt_flags |= MNT_LOCK_NOSUID; - - if (mnt->mnt.mnt_flags & MNT_NOEXEC) - mnt->mnt.mnt_flags |= MNT_LOCK_NOEXEC; - } + if ((flag & CL_UNPRIVILEGED) && (mnt->mnt.mnt_flags & MNT_READONLY)) + mnt->mnt.mnt_flags |= MNT_LOCK_READONLY; atomic_inc(&sb->s_active); mnt->mnt.mnt_sb = sb; @@ -1274,8 +1261,6 @@ static int do_umount(struct mount *mnt, int flags) * Special case for "unmounting" root ... * we just try to remount it readonly. */ - if (!capable(CAP_SYS_ADMIN)) - return -EPERM; down_write(&sb->s_umount); if (!(sb->s_flags & MS_RDONLY)) retval = do_remount_sb(sb, MS_RDONLY, NULL, 0); @@ -1779,6 +1764,9 @@ static int change_mount_flags(struct vfsmount *mnt, int ms_flags) if (readonly_request == __mnt_is_readonly(mnt)) return 0; + if (mnt->mnt_flags & MNT_LOCK_READONLY) + return -EPERM; + if (readonly_request) error = mnt_make_readonly(real_mount(mnt)); else @@ -1804,33 +1792,6 @@ static int do_remount(struct path *path, int flags, int mnt_flags, if (path->dentry != path->mnt->mnt_root) return -EINVAL; - /* Don't allow changing of locked mnt flags. - * - * No locks need to be held here while testing the various - * MNT_LOCK flags because those flags can never be cleared - * once they are set. 
- */ - if ((mnt->mnt.mnt_flags & MNT_LOCK_READONLY) && - !(mnt_flags & MNT_READONLY)) { - return -EPERM; - } - if ((mnt->mnt.mnt_flags & MNT_LOCK_NODEV) && - !(mnt_flags & MNT_NODEV)) { - return -EPERM; - } - if ((mnt->mnt.mnt_flags & MNT_LOCK_NOSUID) && - !(mnt_flags & MNT_NOSUID)) { - return -EPERM; - } - if ((mnt->mnt.mnt_flags & MNT_LOCK_NOEXEC) && - !(mnt_flags & MNT_NOEXEC)) { - return -EPERM; - } - if ((mnt->mnt.mnt_flags & MNT_LOCK_ATIME) && - ((mnt->mnt.mnt_flags & MNT_ATIME_MASK) != (mnt_flags & MNT_ATIME_MASK))) { - return -EPERM; - } - err = security_sb_remount(sb, data); if (err) return err; @@ -1844,7 +1805,7 @@ static int do_remount(struct path *path, int flags, int mnt_flags, err = do_remount_sb(sb, flags, data, 0); if (!err) { br_write_lock(&vfsmount_lock); - mnt_flags |= mnt->mnt.mnt_flags & ~MNT_USER_SETTABLE_MASK; + mnt_flags |= mnt->mnt.mnt_flags & MNT_PROPAGATION_MASK; mnt->mnt.mnt_flags = mnt_flags; br_write_unlock(&vfsmount_lock); } @@ -2030,7 +1991,7 @@ static int do_new_mount(struct path *path, const char *fstype, int flags, */ if (!(type->fs_flags & FS_USERNS_DEV_MOUNT)) { flags |= MS_NODEV; - mnt_flags |= MNT_NODEV | MNT_LOCK_NODEV; + mnt_flags |= MNT_NODEV; } } @@ -2348,14 +2309,6 @@ long do_mount(const char *dev_name, const char *dir_name, if (flags & MS_RDONLY) mnt_flags |= MNT_READONLY; - /* The default atime for remount is preservation */ - if ((flags & MS_REMOUNT) && - ((flags & (MS_NOATIME | MS_NODIRATIME | MS_RELATIME | - MS_STRICTATIME)) == 0)) { - mnt_flags &= ~MNT_ATIME_MASK; - mnt_flags |= path.mnt->mnt_flags & MNT_ATIME_MASK; - } - flags &= ~(MS_NOSUID | MS_NOEXEC | MS_NODEV | MS_ACTIVE | MS_BORN | MS_NOATIME | MS_NODIRATIME | MS_RELATIME| MS_KERNMOUNT | MS_STRICTATIME); @@ -2696,9 +2649,6 @@ SYSCALL_DEFINE2(pivot_root, const char __user *, new_root, /* make sure we can reach put_old from new_root */ if (!is_path_reachable(old_mnt, old.dentry, &new)) goto out4; - /* make certain new is below the root */ - if (!is_path_reachable(new_mnt, new.dentry, &root)) - goto out4; root_mp->m_count++; /* pin it so it won't go away */ br_write_lock(&vfsmount_lock); detach_mnt(new_mnt, &parent_path); diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c index 4b49a8c6cca..57db3244f4d 100644 --- a/fs/nfs/delegation.c +++ b/fs/nfs/delegation.c @@ -656,19 +656,16 @@ int nfs_async_inode_return_delegation(struct inode *inode, rcu_read_lock(); delegation = rcu_dereference(NFS_I(inode)->delegation); - if (delegation == NULL) - goto out_enoent; - if (!clp->cl_mvops->match_stateid(&delegation->stateid, stateid)) - goto out_enoent; + if (!clp->cl_mvops->match_stateid(&delegation->stateid, stateid)) { + rcu_read_unlock(); + return -ENOENT; + } nfs_mark_return_delegation(server, delegation); rcu_read_unlock(); nfs_delegation_run_state_manager(clp); return 0; -out_enoent: - rcu_read_unlock(); - return -ENOENT; } static struct inode * diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c index 79872e22e4a..ce727047ee8 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -1382,20 +1382,18 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) inode->i_version = fattr->change_attr; } } else if (server->caps & NFS_CAP_CHANGE_ATTR) - nfsi->cache_validity |= save_cache_validity; + invalid |= save_cache_validity; if (fattr->valid & NFS_ATTR_FATTR_MTIME) { memcpy(&inode->i_mtime, &fattr->mtime, sizeof(inode->i_mtime)); } else if (server->caps & NFS_CAP_MTIME) - nfsi->cache_validity |= save_cache_validity & - (NFS_INO_INVALID_ATTR + invalid |= save_cache_validity & 
(NFS_INO_INVALID_ATTR | NFS_INO_REVAL_FORCED); if (fattr->valid & NFS_ATTR_FATTR_CTIME) { memcpy(&inode->i_ctime, &fattr->ctime, sizeof(inode->i_ctime)); } else if (server->caps & NFS_CAP_CTIME) - nfsi->cache_validity |= save_cache_validity & - (NFS_INO_INVALID_ATTR + invalid |= save_cache_validity & (NFS_INO_INVALID_ATTR | NFS_INO_REVAL_FORCED); /* Check if our cached file size is stale */ @@ -1418,8 +1416,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) (long long)new_isize); } } else - nfsi->cache_validity |= save_cache_validity & - (NFS_INO_INVALID_ATTR + invalid |= save_cache_validity & (NFS_INO_INVALID_ATTR | NFS_INO_REVAL_PAGECACHE | NFS_INO_REVAL_FORCED); @@ -1427,8 +1424,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) if (fattr->valid & NFS_ATTR_FATTR_ATIME) memcpy(&inode->i_atime, &fattr->atime, sizeof(inode->i_atime)); else if (server->caps & NFS_CAP_ATIME) - nfsi->cache_validity |= save_cache_validity & - (NFS_INO_INVALID_ATIME + invalid |= save_cache_validity & (NFS_INO_INVALID_ATIME | NFS_INO_REVAL_FORCED); if (fattr->valid & NFS_ATTR_FATTR_MODE) { @@ -1439,8 +1435,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) invalid |= NFS_INO_INVALID_ATTR|NFS_INO_INVALID_ACCESS|NFS_INO_INVALID_ACL; } } else if (server->caps & NFS_CAP_MODE) - nfsi->cache_validity |= save_cache_validity & - (NFS_INO_INVALID_ATTR + invalid |= save_cache_validity & (NFS_INO_INVALID_ATTR | NFS_INO_INVALID_ACCESS | NFS_INO_INVALID_ACL | NFS_INO_REVAL_FORCED); @@ -1451,8 +1446,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) inode->i_uid = fattr->uid; } } else if (server->caps & NFS_CAP_OWNER) - nfsi->cache_validity |= save_cache_validity & - (NFS_INO_INVALID_ATTR + invalid |= save_cache_validity & (NFS_INO_INVALID_ATTR | NFS_INO_INVALID_ACCESS | NFS_INO_INVALID_ACL | NFS_INO_REVAL_FORCED); @@ -1463,8 +1457,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) inode->i_gid = fattr->gid; } } else if (server->caps & NFS_CAP_OWNER_GROUP) - nfsi->cache_validity |= save_cache_validity & - (NFS_INO_INVALID_ATTR + invalid |= save_cache_validity & (NFS_INO_INVALID_ATTR | NFS_INO_INVALID_ACCESS | NFS_INO_INVALID_ACL | NFS_INO_REVAL_FORCED); @@ -1477,8 +1470,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) set_nlink(inode, fattr->nlink); } } else if (server->caps & NFS_CAP_NLINK) - nfsi->cache_validity |= save_cache_validity & - (NFS_INO_INVALID_ATTR + invalid |= save_cache_validity & (NFS_INO_INVALID_ATTR | NFS_INO_REVAL_FORCED); if (fattr->valid & NFS_ATTR_FATTR_SPACE_USED) { diff --git a/fs/nfs/nfs3acl.c b/fs/nfs/nfs3acl.c index 8c34f57a9ae..4a1aafba6a2 100644 --- a/fs/nfs/nfs3acl.c +++ b/fs/nfs/nfs3acl.c @@ -305,10 +305,7 @@ static int nfs3_proc_setacls(struct inode *inode, struct posix_acl *acl, .rpc_argp = &args, .rpc_resp = &fattr, }; - int status = 0; - - if (acl == NULL && (!S_ISDIR(inode->i_mode) || dfacl == NULL)) - goto out; + int status; status = -EOPNOTSUPP; if (!nfs_server_capable(inode, NFS_CAP_ACLS)) diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c index cc143ee7a56..02773aab43c 100644 --- a/fs/nfs/nfs4client.c +++ b/fs/nfs/nfs4client.c @@ -311,16 +311,6 @@ int nfs40_walk_client_list(struct nfs_client *new, spin_lock(&nn->nfs_client_lock); list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { - - if (pos->rpc_ops != new->rpc_ops) - continue; - - if (pos->cl_proto != new->cl_proto) - continue; - - if 
(pos->cl_minorversion != new->cl_minorversion) - continue; - /* If "pos" isn't marked ready, we can't trust the * remaining fields in "pos" */ if (pos->cl_cons_state > NFS_CS_READY) { @@ -340,6 +330,15 @@ int nfs40_walk_client_list(struct nfs_client *new, if (pos->cl_cons_state != NFS_CS_READY) continue; + if (pos->rpc_ops != new->rpc_ops) + continue; + + if (pos->cl_proto != new->cl_proto) + continue; + + if (pos->cl_minorversion != new->cl_minorversion) + continue; + if (pos->cl_clientid != new->cl_clientid) continue; @@ -445,16 +444,6 @@ int nfs41_walk_client_list(struct nfs_client *new, spin_lock(&nn->nfs_client_lock); list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { - - if (pos->rpc_ops != new->rpc_ops) - continue; - - if (pos->cl_proto != new->cl_proto) - continue; - - if (pos->cl_minorversion != new->cl_minorversion) - continue; - /* If "pos" isn't marked ready, we can't trust the * remaining fields in "pos", especially the client * ID and serverowner fields. Wait for CREATE_SESSION @@ -480,6 +469,15 @@ int nfs41_walk_client_list(struct nfs_client *new, if (pos->cl_cons_state != NFS_CS_READY) continue; + if (pos->rpc_ops != new->rpc_ops) + continue; + + if (pos->cl_proto != new->cl_proto) + continue; + + if (pos->cl_minorversion != new->cl_minorversion) + continue; + if (!nfs4_match_clientids(pos, new)) continue; diff --git a/fs/nfs/nfs4filelayout.c b/fs/nfs/nfs4filelayout.c index b039f7f26d9..22d10623f5e 100644 --- a/fs/nfs/nfs4filelayout.c +++ b/fs/nfs/nfs4filelayout.c @@ -1300,7 +1300,7 @@ filelayout_alloc_layout_hdr(struct inode *inode, gfp_t gfp_flags) struct nfs4_filelayout *flo; flo = kzalloc(sizeof(*flo), gfp_flags); - return flo != NULL ? &flo->generic_hdr : NULL; + return &flo->generic_hdr; } static void diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 5b845c05255..7ccc30ecd74 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -2287,7 +2287,6 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data) struct nfs4_closedata *calldata = data; struct nfs4_state *state = calldata->state; struct inode *inode = calldata->inode; - bool is_rdonly, is_wronly, is_rdwr; int call_close = 0; dprintk("%s: begin!\n", __func__); @@ -2295,27 +2294,21 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data) goto out_wait; task->tk_msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_DOWNGRADE]; + calldata->arg.fmode = FMODE_READ|FMODE_WRITE; spin_lock(&state->owner->so_lock); - is_rdwr = test_bit(NFS_O_RDWR_STATE, &state->flags); - is_rdonly = test_bit(NFS_O_RDONLY_STATE, &state->flags); - is_wronly = test_bit(NFS_O_WRONLY_STATE, &state->flags); /* Calculate the change in open mode */ - calldata->arg.fmode = 0; if (state->n_rdwr == 0) { - if (state->n_rdonly == 0) - call_close |= is_rdonly; - else if (is_rdonly) - calldata->arg.fmode |= FMODE_READ; - if (state->n_wronly == 0) - call_close |= is_wronly; - else if (is_wronly) - calldata->arg.fmode |= FMODE_WRITE; - } else if (is_rdwr) - calldata->arg.fmode |= FMODE_READ|FMODE_WRITE; - - if (calldata->arg.fmode == 0) - call_close |= is_rdwr; - + if (state->n_rdonly == 0) { + call_close |= test_bit(NFS_O_RDONLY_STATE, &state->flags); + call_close |= test_bit(NFS_O_RDWR_STATE, &state->flags); + calldata->arg.fmode &= ~FMODE_READ; + } + if (state->n_wronly == 0) { + call_close |= test_bit(NFS_O_WRONLY_STATE, &state->flags); + call_close |= test_bit(NFS_O_RDWR_STATE, &state->flags); + calldata->arg.fmode &= ~FMODE_WRITE; + } + } if (!nfs4_valid_open_stateid(state)) call_close = 0; 
spin_unlock(&state->owner->so_lock); @@ -3614,9 +3607,8 @@ static bool nfs4_stateid_is_current(nfs4_stateid *stateid, { nfs4_stateid current_stateid; - /* If the current stateid represents a lost lock, then exit */ - if (nfs4_set_rw_stateid(¤t_stateid, ctx, l_ctx, fmode) == -EIO) - return true; + if (nfs4_set_rw_stateid(¤t_stateid, ctx, l_ctx, fmode)) + return false; return nfs4_stateid_match(stateid, ¤t_stateid); } @@ -6067,7 +6059,7 @@ static int nfs41_proc_async_sequence(struct nfs_client *clp, struct rpc_cred *cr int ret = 0; if ((renew_flags & NFS4_RENEW_TIMEOUT) == 0) - return -EAGAIN; + return 0; task = _nfs41_proc_sequence(clp, cred, false); if (IS_ERR(task)) ret = PTR_ERR(task); diff --git a/fs/nfs/nfs4renewd.c b/fs/nfs/nfs4renewd.c index e1ba58c3d1a..1720d32ffa5 100644 --- a/fs/nfs/nfs4renewd.c +++ b/fs/nfs/nfs4renewd.c @@ -88,18 +88,10 @@ nfs4_renew_state(struct work_struct *work) } nfs_expire_all_delegations(clp); } else { - int ret; - /* Queue an asynchronous RENEW. */ - ret = ops->sched_state_renewal(clp, cred, renew_flags); + ops->sched_state_renewal(clp, cred, renew_flags); put_rpccred(cred); - switch (ret) { - default: - goto out_exp; - case -EAGAIN: - case -ENOMEM: - break; - } + goto out_exp; } } else { dprintk("%s: failed to call renewd. Reason: lease not expired \n", diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c index d482b86d0e0..2c37442ed93 100644 --- a/fs/nfs/nfs4state.c +++ b/fs/nfs/nfs4state.c @@ -1699,8 +1699,7 @@ restart: if (status < 0) { set_bit(ops->owner_flag_bit, &sp->so_flags); nfs4_put_state_owner(sp); - status = nfs4_recovery_handle_error(clp, status); - return (status != 0) ? status : -EAGAIN; + return nfs4_recovery_handle_error(clp, status); } nfs4_put_state_owner(sp); @@ -1709,7 +1708,7 @@ restart: spin_unlock(&clp->cl_lock); } rcu_read_unlock(); - return 0; + return status; } static int nfs4_check_lease(struct nfs_client *clp) @@ -1756,6 +1755,7 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status) break; case -NFS4ERR_STALE_CLIENTID: clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state); + nfs4_state_clear_reclaim_reboot(clp); nfs4_state_start_reclaim_reboot(clp); break; case -NFS4ERR_CLID_INUSE: @@ -2174,11 +2174,14 @@ static void nfs4_state_manager(struct nfs_client *clp) section = "reclaim reboot"; status = nfs4_do_reclaim(clp, clp->cl_mvops->reboot_recovery_ops); - if (status == -EAGAIN) + if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) || + test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state)) + continue; + nfs4_state_end_reclaim_reboot(clp); + if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state)) continue; if (status < 0) goto out_error; - nfs4_state_end_reclaim_reboot(clp); } /* Now recover expired state... */ @@ -2186,7 +2189,9 @@ static void nfs4_state_manager(struct nfs_client *clp) section = "reclaim nograce"; status = nfs4_do_reclaim(clp, clp->cl_mvops->nograce_recovery_ops); - if (status == -EAGAIN) + if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) || + test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) || + test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state)) continue; if (status < 0) goto out_error; diff --git a/fs/nfsd/nfs4acl.c b/fs/nfsd/nfs4acl.c index e15bcbd5043..8a50b3c1809 100644 --- a/fs/nfsd/nfs4acl.c +++ b/fs/nfsd/nfs4acl.c @@ -385,10 +385,8 @@ sort_pacl(struct posix_acl *pacl) * by uid/gid. 
*/ int i, j; - /* no users or groups */ - if (!pacl || pacl->a_count <= 4) - return; - + if (pacl->a_count <= 4) + return; /* no users or groups */ i = 1; while (pacl->a_entries[i].e_tag == ACL_USER) i++; @@ -515,12 +513,13 @@ posix_state_to_acl(struct posix_acl_state *state, unsigned int flags) /* * ACLs with no ACEs are treated differently in the inheritable - * and effective cases: when there are no inheritable ACEs, - * calls ->set_acl with a NULL ACL structure. + * and effective cases: when there are no inheritable ACEs, we + * set a zero-length default posix acl: */ - if (state->empty && (flags & NFS4_ACL_TYPE_DEFAULT)) - return NULL; - + if (state->empty && (flags & NFS4_ACL_TYPE_DEFAULT)) { + pacl = posix_acl_alloc(0, GFP_KERNEL); + return pacl ? pacl : ERR_PTR(-ENOMEM); + } /* * When there are no effective ACEs, the following will end * up setting a 3-element effective posix ACL with all diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c index cc8c5b32043..7f05cd140de 100644 --- a/fs/nfsd/nfs4callback.c +++ b/fs/nfsd/nfs4callback.c @@ -637,11 +637,9 @@ static struct rpc_cred *get_backchannel_cred(struct nfs4_client *clp, struct rpc static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *conn, struct nfsd4_session *ses) { - int maxtime = max_cb_time(clp->net); struct rpc_timeout timeparms = { - .to_initval = maxtime, + .to_initval = max_cb_time(clp->net), .to_retries = 0, - .to_maxval = maxtime, }; struct rpc_create_args args = { .net = clp->net, @@ -672,8 +670,7 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c clp->cl_cb_session = ses; args.bc_xprt = conn->cb_xprt; args.prognumber = clp->cl_cb_session->se_cb_prog; - args.protocol = conn->cb_xprt->xpt_class->xcl_ident | - XPRT_TRANSPORT_BC; + args.protocol = XPRT_TRANSPORT_BC_TCP; args.authflavor = ses->se_cb_sec.flavor; } /* Create RPC client */ diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c index 9240dd1678d..27d74a29451 100644 --- a/fs/nfsd/nfs4proc.c +++ b/fs/nfsd/nfs4proc.c @@ -576,6 +576,15 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, switch (create->cr_type) { case NF4LNK: + /* ugh! we have to null-terminate the linktext, or + * vfs_symlink() will choke. it is always safe to + * null-terminate by brute force, since at worst we + * will overwrite the first byte of the create namelen + * in the XDR buffer, which has already been extracted + * during XDR decode. 
+ */ + create->cr_linkname[create->cr_linklen] = 0; + status = nfsd_symlink(rqstp, &cstate->current_fh, create->cr_name, create->cr_namelen, create->cr_linkname, create->cr_linklen, @@ -1191,8 +1200,7 @@ static bool need_wrongsec_check(struct svc_rqst *rqstp) */ if (argp->opcnt == resp->opcnt) return false; - if (next->opnum == OP_ILLEGAL) - return false; + nextd = OPDESC(next); /* * Rest of 2.6.3.1.1: certain operations will return WRONGSEC @@ -1299,12 +1307,6 @@ nfsd4_proc_compound(struct svc_rqst *rqstp, /* If op is non-idempotent */ if (opdesc->op_flags & OP_MODIFIES_SOMETHING) { plen = opdesc->op_rsize_bop(rqstp, op); - /* - * If there's still another operation, make sure - * we'll have space to at least encode an error: - */ - if (resp->opcnt < args->opcnt) - plen += COMPOUND_ERR_SLACK_SPACE; op->status = nfsd4_check_resp_size(resp, plen); } @@ -1469,8 +1471,7 @@ static inline u32 nfsd4_setattr_rsize(struct svc_rqst *rqstp, struct nfsd4_op *o static inline u32 nfsd4_setclientid_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op) { - return (op_encode_hdr_size + 2 + XDR_QUADLEN(NFS4_VERIFIER_SIZE)) * - sizeof(__be32); + return (op_encode_hdr_size + 2 + 1024) * sizeof(__be32); } static inline u32 nfsd4_write_rsize(struct svc_rqst *rqstp, struct nfsd4_op *op) diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index bdff771057d..316ec843dec 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -1081,18 +1081,6 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name) return NULL; } clp->cl_name.len = name.len; - INIT_LIST_HEAD(&clp->cl_sessions); - idr_init(&clp->cl_stateids); - atomic_set(&clp->cl_refcount, 0); - clp->cl_cb_state = NFSD4_CB_UNKNOWN; - INIT_LIST_HEAD(&clp->cl_idhash); - INIT_LIST_HEAD(&clp->cl_openowners); - INIT_LIST_HEAD(&clp->cl_delegations); - INIT_LIST_HEAD(&clp->cl_lru); - INIT_LIST_HEAD(&clp->cl_callbacks); - INIT_LIST_HEAD(&clp->cl_revoked); - spin_lock_init(&clp->cl_lock); - rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table"); return clp; } @@ -1110,7 +1098,6 @@ free_client(struct nfs4_client *clp) WARN_ON_ONCE(atomic_read(&ses->se_ref)); free_session(ses); } - rpc_destroy_wait_queue(&clp->cl_cb_waitq); free_svc_cred(&clp->cl_cred); kfree(clp->cl_name.data); idr_destroy(&clp->cl_stateids); @@ -1328,6 +1315,7 @@ static struct nfs4_client *create_client(struct xdr_netobj name, if (clp == NULL) return NULL; + INIT_LIST_HEAD(&clp->cl_sessions); ret = copy_cred(&clp->cl_cred, &rqstp->rq_cred); if (ret) { spin_lock(&nn->client_lock); @@ -1335,9 +1323,20 @@ static struct nfs4_client *create_client(struct xdr_netobj name, spin_unlock(&nn->client_lock); return NULL; } + idr_init(&clp->cl_stateids); + atomic_set(&clp->cl_refcount, 0); + clp->cl_cb_state = NFSD4_CB_UNKNOWN; + INIT_LIST_HEAD(&clp->cl_idhash); + INIT_LIST_HEAD(&clp->cl_openowners); + INIT_LIST_HEAD(&clp->cl_delegations); + INIT_LIST_HEAD(&clp->cl_lru); + INIT_LIST_HEAD(&clp->cl_callbacks); + INIT_LIST_HEAD(&clp->cl_revoked); + spin_lock_init(&clp->cl_lock); nfsd4_init_callback(&clp->cl_cb_null); clp->cl_time = get_seconds(); clear_bit(0, &clp->cl_cb_slot_busy); + rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table"); copy_verf(clp, verf); rpc_copy_addr((struct sockaddr *) &clp->cl_addr, sa); gen_confirm(clp); @@ -3599,16 +3598,9 @@ out: static __be32 nfsd4_free_lock_stateid(struct nfs4_ol_stateid *stp) { - struct nfs4_lockowner *lo = lockowner(stp->st_stateowner); - - if (check_for_locks(stp->st_file, lo)) + if (check_for_locks(stp->st_file, 
lockowner(stp->st_stateowner))) return nfserr_locks_held; - /* - * Currently there's a 1-1 lock stateid<->lockowner - * correspondance, and we have to delete the lockowner when we - * delete the lock stateid: - */ - release_lockowner(lo); + release_lock_stateid(stp); return nfs_ok; } @@ -4052,10 +4044,6 @@ static bool same_lockowner_ino(struct nfs4_lockowner *lo, struct inode *inode, c if (!same_owner_str(&lo->lo_owner, owner, clid)) return false; - if (list_empty(&lo->lo_owner.so_stateids)) { - WARN_ON_ONCE(1); - return false; - } lst = list_first_entry(&lo->lo_owner.so_stateids, struct nfs4_ol_stateid, st_perstateowner); return lst->st_file->fi_inode == inode; @@ -4970,6 +4958,7 @@ nfs4_state_destroy_net(struct net *net) int i; struct nfs4_client *clp = NULL; struct nfsd_net *nn = net_generic(net, nfsd_net_id); + struct rb_node *node, *tmp; for (i = 0; i < CLIENT_HASH_SIZE; i++) { while (!list_empty(&nn->conf_id_hashtbl[i])) { @@ -4978,11 +4967,13 @@ nfs4_state_destroy_net(struct net *net) } } - for (i = 0; i < CLIENT_HASH_SIZE; i++) { - while (!list_empty(&nn->unconf_id_hashtbl[i])) { - clp = list_entry(nn->unconf_id_hashtbl[i].next, struct nfs4_client, cl_idhash); - destroy_client(clp); - } + node = rb_first(&nn->unconf_name_tree); + while (node != NULL) { + tmp = node; + node = rb_next(tmp); + clp = rb_entry(tmp, struct nfs4_client, cl_namenode); + rb_erase(tmp, &nn->unconf_name_tree); + destroy_client(clp); } kfree(nn->sessionid_hashtbl); diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c index 9b45f0666cf..582321a978b 100644 --- a/fs/nfsd/nfs4xdr.c +++ b/fs/nfsd/nfs4xdr.c @@ -553,18 +553,7 @@ nfsd4_decode_create(struct nfsd4_compoundargs *argp, struct nfsd4_create *create READ_BUF(4); READ32(create->cr_linklen); READ_BUF(create->cr_linklen); - /* - * The VFS will want a null-terminated string, and - * null-terminating in place isn't safe since this might - * end on a page boundary: - */ - create->cr_linkname = - kmalloc(create->cr_linklen + 1, GFP_KERNEL); - if (!create->cr_linkname) - return nfserr_jukebox; - memcpy(create->cr_linkname, p, create->cr_linklen); - create->cr_linkname[create->cr_linklen] = '\0'; - defer_free(argp, kfree, create->cr_linkname); + SAVEMEM(create->cr_linkname, create->cr_linklen); break; case NF4BLK: case NF4CHR: @@ -2046,8 +2035,8 @@ nfsd4_encode_fattr(struct svc_fh *fhp, struct svc_export *exp, err = vfs_getattr(&path, &stat); if (err) goto out_nfserr; - if ((bmval0 & (FATTR4_WORD0_FILES_AVAIL | FATTR4_WORD0_FILES_FREE | - FATTR4_WORD0_FILES_TOTAL | FATTR4_WORD0_MAXNAME)) || + if ((bmval0 & (FATTR4_WORD0_FILES_FREE | FATTR4_WORD0_FILES_TOTAL | + FATTR4_WORD0_MAXNAME)) || (bmval1 & (FATTR4_WORD1_SPACE_AVAIL | FATTR4_WORD1_SPACE_FREE | FATTR4_WORD1_SPACE_TOTAL))) { err = vfs_statfs(&path, &statfs); @@ -2412,8 +2401,6 @@ out_acl: WRITE64(stat.ino); } if (bmval2 & FATTR4_WORD2_SUPPATTR_EXCLCREAT) { - if ((buflen -= 16) < 0) - goto out_resource; WRITE32(3); WRITE32(NFSD_SUPPATTR_EXCLCREAT_WORD0); WRITE32(NFSD_SUPPATTR_EXCLCREAT_WORD1); @@ -3395,9 +3382,6 @@ nfsd4_encode_test_stateid(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_test_stateid_id *stateid, *next; __be32 *p; - if (nfserr) - return nfserr; - RESERVE_SPACE(4 + (4 * test_stateid->ts_num_ids)); *p++ = htonl(test_stateid->ts_num_ids); diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c index e5e4675b7e7..ec8d97ddc63 100644 --- a/fs/nfsd/nfscache.c +++ b/fs/nfsd/nfscache.c @@ -129,6 +129,13 @@ nfsd_reply_cache_alloc(void) } static void +nfsd_reply_cache_unhash(struct svc_cacherep *rp) 
+{ + hlist_del_init(&rp->c_hash); + list_del_init(&rp->c_lru); +} + +static void nfsd_reply_cache_free_locked(struct svc_cacherep *rp) { if (rp->c_type == RC_REPLBUFF && rp->c_replvec.iov_base) { @@ -221,6 +228,13 @@ hash_refile(struct svc_cacherep *rp) hlist_add_head(&rp->c_hash, cache_hash + hash_32(rp->c_xid, maskbits)); } +static inline bool +nfsd_cache_entry_expired(struct svc_cacherep *rp) +{ + return rp->c_state != RC_INPROG && + time_after(jiffies, rp->c_timestamp + RC_EXPIRE); +} + /* * Walk the LRU list and prune off entries that are older than RC_EXPIRE. * Also prune the oldest ones when the total exceeds the max number of entries. @@ -231,14 +245,8 @@ prune_cache_entries(void) struct svc_cacherep *rp, *tmp; list_for_each_entry_safe(rp, tmp, &lru_head, c_lru) { - /* - * Don't free entries attached to calls that are still - * in-progress, but do keep scanning the list. - */ - if (rp->c_state == RC_INPROG) - continue; - if (num_drc_entries <= max_drc_entries && - time_before(jiffies, rp->c_timestamp + RC_EXPIRE)) + if (!nfsd_cache_entry_expired(rp) && + num_drc_entries <= max_drc_entries) break; nfsd_reply_cache_free_locked(rp); } @@ -394,8 +402,22 @@ nfsd_cache_lookup(struct svc_rqst *rqstp) /* * Since the common case is a cache miss followed by an insert, - * preallocate an entry. + * preallocate an entry. First, try to reuse the first entry on the LRU + * if it works, then go ahead and prune the LRU list. */ + spin_lock(&cache_lock); + if (!list_empty(&lru_head)) { + rp = list_first_entry(&lru_head, struct svc_cacherep, c_lru); + if (nfsd_cache_entry_expired(rp) || + num_drc_entries >= max_drc_entries) { + nfsd_reply_cache_unhash(rp); + prune_cache_entries(); + goto search_cache; + } + } + + /* No expired ones available, allocate a new one. */ + spin_unlock(&cache_lock); rp = nfsd_reply_cache_alloc(); spin_lock(&cache_lock); if (likely(rp)) { @@ -403,9 +425,7 @@ nfsd_cache_lookup(struct svc_rqst *rqstp) drc_mem_usage += sizeof(*rp); } - /* go ahead and prune the cache */ - prune_cache_entries(); - +search_cache: found = nfsd_cache_search(rqstp, csum); if (found) { if (likely(rp)) @@ -419,6 +439,15 @@ nfsd_cache_lookup(struct svc_rqst *rqstp) goto out; } + /* + * We're keeping the one we just allocated. Are we now over the + * limit? Prune one off the tip of the LRU in trade for the one we + * just allocated if so. 
+ */ + if (num_drc_entries >= max_drc_entries) + nfsd_reply_cache_free_locked(list_first_entry(&lru_head, + struct svc_cacherep, c_lru)); + nfsdstats.rcmisses++; rqstp->rq_cacherep = rp; rp->c_state = RC_INPROG; diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c index f34d9de802a..7f555179bf8 100644 --- a/fs/nfsd/nfsctl.c +++ b/fs/nfsd/nfsctl.c @@ -699,11 +699,6 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net) if (err != 0 || fd < 0) return -EINVAL; - if (svc_alien_sock(net, fd)) { - printk(KERN_ERR "%s: socket net is different to NFSd's one\n", __func__); - return -EINVAL; - } - err = nfsd_create_serv(net); if (err != 0) return err; diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 8016892f3f0..262df5ccbf5 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -220,8 +220,7 @@ static int nfsd_startup_generic(int nrservs) */ ret = nfsd_racache_init(2*nrservs); if (ret) - goto dec_users; - + return ret; ret = nfs4_state_start(); if (ret) goto out_racache; @@ -229,8 +228,6 @@ static int nfsd_startup_generic(int nrservs) out_racache: nfsd_racache_shutdown(); -dec_users: - nfsd_users--; return ret; } diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c index d9b298cbfe5..62fd6616801 100644 --- a/fs/nfsd/vfs.c +++ b/fs/nfsd/vfs.c @@ -406,7 +406,6 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap, umode_t ftype = 0; __be32 err; int host_err; - bool get_write_count; int size_change = 0; if (iap->ia_valid & (ATTR_ATIME | ATTR_MTIME | ATTR_SIZE)) @@ -414,18 +413,10 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap, if (iap->ia_valid & ATTR_SIZE) ftype = S_IFREG; - /* Callers that do fh_verify should do the fh_want_write: */ - get_write_count = !fhp->fh_dentry; - /* Get inode */ err = fh_verify(rqstp, fhp, ftype, accmode); if (err) goto out; - if (get_write_count) { - host_err = fh_want_write(fhp); - if (host_err) - return nfserrno(host_err); - } dentry = fhp->fh_dentry; inode = dentry->d_inode; diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c index 2e1372efbb0..bccfec8343c 100644 --- a/fs/nilfs2/inode.c +++ b/fs/nilfs2/inode.c @@ -24,7 +24,6 @@ #include <linux/buffer_head.h> #include <linux/gfp.h> #include <linux/mpage.h> -#include <linux/pagemap.h> #include <linux/writeback.h> #include <linux/aio.h> #include "nilfs.h" @@ -220,10 +219,10 @@ static int nilfs_writepage(struct page *page, struct writeback_control *wbc) static int nilfs_set_page_dirty(struct page *page) { - struct inode *inode = page->mapping->host; int ret = __set_page_dirty_nobuffers(page); if (page_has_buffers(page)) { + struct inode *inode = page->mapping->host; unsigned nr_dirty = 0; struct buffer_head *bh, *head; @@ -246,10 +245,6 @@ static int nilfs_set_page_dirty(struct page *page) if (nr_dirty) nilfs_set_file_dirty(inode, nr_dirty); - } else if (ret) { - unsigned nr_dirty = 1 << (PAGE_CACHE_SHIFT - inode->i_blkbits); - - nilfs_set_file_dirty(inode, nr_dirty); } return ret; } diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c index 9be6b416340..f1680cdbd88 100644 --- a/fs/notify/fanotify/fanotify_user.c +++ b/fs/notify/fanotify/fanotify_user.c @@ -69,7 +69,7 @@ static int create_fd(struct fsnotify_group *group, pr_debug("%s: group=%p event=%p\n", __func__, group, event); - client_fd = get_unused_fd_flags(group->fanotify_data.f_flags); + client_fd = get_unused_fd(); if (client_fd < 0) return client_fd; diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c index 9d7e2b9659c..238a5930cb3 100644 --- a/fs/notify/fdinfo.c +++ 
b/fs/notify/fdinfo.c @@ -42,7 +42,7 @@ static int show_mark_fhandle(struct seq_file *m, struct inode *inode) { struct { struct file_handle handle; - u8 pad[MAX_HANDLE_SZ]; + u8 pad[64]; } f; int size, ret, i; @@ -50,7 +50,7 @@ static int show_mark_fhandle(struct seq_file *m, struct inode *inode) size = f.handle.handle_bytes >> 2; ret = exportfs_encode_inode_fh(inode, (struct fid *)f.handle.f_handle, &size, 0); - if ((ret == FILEID_INVALID) || (ret < 0)) { + if ((ret == 255) || (ret == -ENOSPC)) { WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret); return 0; } diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c index 4f66e007dae..5d18ad10c27 100644 --- a/fs/ocfs2/buffer_head_io.c +++ b/fs/ocfs2/buffer_head_io.c @@ -90,6 +90,7 @@ int ocfs2_write_block(struct ocfs2_super *osb, struct buffer_head *bh, * information for this bh as it's not marked locally * uptodate. */ ret = -EIO; + put_bh(bh); mlog_errno(ret); } @@ -419,6 +420,7 @@ int ocfs2_write_super_or_backup(struct ocfs2_super *osb, if (!buffer_uptodate(bh)) { ret = -EIO; + put_bh(bh); mlog_errno(ret); } diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c index 2b941113e42..33ecbe0e673 100644 --- a/fs/ocfs2/dlm/dlmmaster.c +++ b/fs/ocfs2/dlm/dlmmaster.c @@ -653,9 +653,12 @@ void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm, clear_bit(bit, res->refmap); } -static void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, + +void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, struct dlm_lock_resource *res) { + assert_spin_locked(&res->spinlock); + res->inflight_locks++; mlog(0, "%s: res %.*s, inflight++: now %u, %ps()\n", dlm->name, @@ -663,13 +666,6 @@ static void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, __builtin_return_address(0)); } -void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, - struct dlm_lock_resource *res) -{ - assert_spin_locked(&res->spinlock); - __dlm_lockres_grab_inflight_ref(dlm, res); -} - void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, struct dlm_lock_resource *res) { @@ -859,8 +855,10 @@ lookup: /* finally add the lockres to its hash bucket */ __dlm_insert_lockres(dlm, res); - /* since this lockres is new it doesn't not require the spinlock */ - __dlm_lockres_grab_inflight_ref(dlm, res); + /* Grab inflight ref to pin the resource */ + spin_lock(&res->spinlock); + dlm_lockres_grab_inflight_ref(dlm, res); + spin_unlock(&res->spinlock); /* get an extra ref on the mle in case this is a BLOCK * if so, the creator of the BLOCK may try to put the last diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c index 9bd981cd314..e68588e6b1e 100644 --- a/fs/ocfs2/dlm/dlmrecovery.c +++ b/fs/ocfs2/dlm/dlmrecovery.c @@ -540,10 +540,7 @@ master_here: /* success! see if any other nodes need recovery */ mlog(0, "DONE mastering recovery of %s:%u here(this=%u)!\n", dlm->name, dlm->reco.dead_node, dlm->node_num); - spin_lock(&dlm->spinlock); - __dlm_reset_recovery(dlm); - dlm->reco.state &= ~DLM_RECO_STATE_FINALIZE; - spin_unlock(&dlm->spinlock); + dlm_reset_recovery(dlm); } dlm_end_recovery(dlm); @@ -701,14 +698,6 @@ static int dlm_remaster_locks(struct dlm_ctxt *dlm, u8 dead_node) if (all_nodes_done) { int ret; - /* Set this flag on recovery master to avoid - * a new recovery for another dead node start - * before the recovery is not done. 
That may - * cause recovery hung.*/ - spin_lock(&dlm->spinlock); - dlm->reco.state |= DLM_RECO_STATE_FINALIZE; - spin_unlock(&dlm->spinlock); - /* all nodes are now in DLM_RECO_NODE_DATA_DONE state * just send a finalize message to everyone and * clean up */ @@ -1762,13 +1751,13 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm, struct dlm_migratable_lockres *mres) { struct dlm_migratable_lock *ml; - struct list_head *queue, *iter; + struct list_head *queue; struct list_head *tmpq = NULL; struct dlm_lock *newlock = NULL; struct dlm_lockstatus *lksb = NULL; int ret = 0; int i, j, bad; - struct dlm_lock *lock; + struct dlm_lock *lock = NULL; u8 from = O2NM_MAX_NODES; unsigned int added = 0; __be64 c; @@ -1803,16 +1792,14 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm, /* MIGRATION ONLY! */ BUG_ON(!(mres->flags & DLM_MRES_MIGRATION)); - lock = NULL; spin_lock(&res->spinlock); for (j = DLM_GRANTED_LIST; j <= DLM_BLOCKED_LIST; j++) { tmpq = dlm_list_idx_to_ptr(res, j); - list_for_each(iter, tmpq) { - lock = list_entry(iter, - struct dlm_lock, list); - if (lock->ml.cookie == ml->cookie) + list_for_each_entry(lock, tmpq, list) { + if (lock->ml.cookie != ml->cookie) + lock = NULL; + else break; - lock = NULL; } if (lock) break; @@ -2880,8 +2867,8 @@ int dlm_finalize_reco_handler(struct o2net_msg *msg, u32 len, void *data, BUG(); } dlm->reco.state &= ~DLM_RECO_STATE_FINALIZE; - __dlm_reset_recovery(dlm); spin_unlock(&dlm->spinlock); + dlm_reset_recovery(dlm); dlm_kick_recovery_thread(dlm); break; default: diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c index 46387e49aa4..ff54014a24e 100644 --- a/fs/ocfs2/file.c +++ b/fs/ocfs2/file.c @@ -2374,8 +2374,8 @@ out_dio: if (((file->f_flags & O_DSYNC) && !direct_io) || IS_SYNC(inode) || ((file->f_flags & O_DIRECT) && !direct_io)) { - ret = filemap_fdatawrite_range(file->f_mapping, *ppos, - *ppos + count - 1); + ret = filemap_fdatawrite_range(file->f_mapping, pos, + pos + count - 1); if (ret < 0) written = ret; @@ -2388,8 +2388,8 @@ out_dio: } if (!ret) - ret = filemap_fdatawait_range(file->f_mapping, *ppos, - *ppos + count - 1); + ret = filemap_fdatawait_range(file->f_mapping, pos, + pos + count - 1); } /* diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c index e49b4f1cb26..332a281f217 100644 --- a/fs/ocfs2/quota_global.c +++ b/fs/ocfs2/quota_global.c @@ -717,12 +717,6 @@ static int ocfs2_release_dquot(struct dquot *dquot) */ if (status < 0) mlog_errno(status); - /* - * Clear dq_off so that we search for the structure in quota file next - * time we acquire it. The structure might be deleted and reallocated - * elsewhere by another node while our dquot structure is on freelist. - */ - dquot->dq_off = 0; clear_bit(DQ_ACTIVE_B, &dquot->dq_flags); out_trans: ocfs2_commit_trans(osb, handle); @@ -762,17 +756,16 @@ static int ocfs2_acquire_dquot(struct dquot *dquot) status = ocfs2_lock_global_qf(info, 1); if (status < 0) goto out; - status = ocfs2_qinfo_lock(info, 0); - if (status < 0) - goto out_dq; - /* - * We always want to read dquot structure from disk because we don't - * know what happened with it while it was on freelist. 
- */ - status = qtree_read_dquot(&info->dqi_gi, dquot); - ocfs2_qinfo_unlock(info, 0); - if (status < 0) - goto out_dq; + if (!test_bit(DQ_READ_B, &dquot->dq_flags)) { + status = ocfs2_qinfo_lock(info, 0); + if (status < 0) + goto out_dq; + status = qtree_read_dquot(&info->dqi_gi, dquot); + ocfs2_qinfo_unlock(info, 0); + if (status < 0) + goto out_dq; + } + set_bit(DQ_READ_B, &dquot->dq_flags); OCFS2_DQUOT(dquot)->dq_use_count++; OCFS2_DQUOT(dquot)->dq_origspace = dquot->dq_dqb.dqb_curspace; diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c index d0f323da0b5..27fe7ee4874 100644 --- a/fs/ocfs2/quota_local.c +++ b/fs/ocfs2/quota_local.c @@ -1303,6 +1303,10 @@ int ocfs2_local_release_dquot(handle_t *handle, struct dquot *dquot) ocfs2_journal_dirty(handle, od->dq_chunk->qc_headerbh); out: + /* Clear the read bit so that next time someone uses this + * dquot he reads fresh info from disk and allocates local + * dquot structure */ + clear_bit(DQ_READ_B, &dquot->dq_flags); return status; } diff --git a/fs/open.c b/fs/open.c index 86092bde31f..8c741002f94 100644 --- a/fs/open.c +++ b/fs/open.c @@ -628,12 +628,23 @@ out: static inline int __get_file_write_access(struct inode *inode, struct vfsmount *mnt) { - int error = get_write_access(inode); + int error; + error = get_write_access(inode); if (error) return error; - error = __mnt_want_write(mnt); - if (error) - put_write_access(inode); + /* + * Do not take mount writer counts on + * special files since no writes to + * the mount itself will occur. + */ + if (!special_file(inode->i_mode)) { + /* + * Balanced in __fput() + */ + error = __mnt_want_write(mnt); + if (error) + put_write_access(inode); + } return error; } @@ -666,11 +677,12 @@ static int do_dentry_open(struct file *f, path_get(&f->f_path); inode = f->f_inode = f->f_path.dentry->d_inode; - if (f->f_mode & FMODE_WRITE && !special_file(inode->i_mode)) { + if (f->f_mode & FMODE_WRITE) { error = __get_file_write_access(inode, f->f_path.mnt); if (error) goto cleanup_file; - file_take_write(f); + if (!special_file(inode->i_mode)) + file_take_write(f); } f->f_mapping = inode->i_mapping; @@ -711,6 +723,7 @@ cleanup_all: fops_put(f->f_op); file_sb_list_del(f); if (f->f_mode & FMODE_WRITE) { + put_write_access(inode); if (!special_file(inode->i_mode)) { /* * We don't consider this a real @@ -718,7 +731,6 @@ cleanup_all: * because it all happenend right * here, so just reset the state. */ - put_write_access(inode); file_reset_write(f); __mnt_drop_write(f->f_path.mnt); } diff --git a/fs/posix_acl.c b/fs/posix_acl.c index 3542f1f814e..8bd2135b7f8 100644 --- a/fs/posix_acl.c +++ b/fs/posix_acl.c @@ -158,12 +158,6 @@ posix_acl_equiv_mode(const struct posix_acl *acl, umode_t *mode_p) umode_t mode = 0; int not_equiv = 0; - /* - * A null ACL can always be presented as mode bits. 
- */ - if (!acl) - return 0; - FOREACH_ACL_ENTRY(pa, acl, pe) { switch (pa->e_tag) { case ACL_USER_OBJ: diff --git a/fs/proc/array.c b/fs/proc/array.c index 09f0d9c374a..cbd0f1b324b 100644 --- a/fs/proc/array.c +++ b/fs/proc/array.c @@ -304,11 +304,15 @@ static void render_cap_t(struct seq_file *m, const char *header, seq_puts(m, header); CAP_FOR_EACH_U32(__capi) { seq_printf(m, "%08x", - a->cap[CAP_LAST_U32 - __capi]); + a->cap[(_KERNEL_CAPABILITY_U32S-1) - __capi]); } seq_putc(m, '\n'); } +/* Remove non-existent capabilities */ +#define NORM_CAPS(v) (v.cap[CAP_TO_INDEX(CAP_LAST_CAP)] &= \ + CAP_TO_MASK(CAP_LAST_CAP + 1) - 1) + static inline void task_cap(struct seq_file *m, struct task_struct *p) { const struct cred *cred; @@ -322,6 +326,11 @@ static inline void task_cap(struct seq_file *m, struct task_struct *p) cap_bset = cred->cap_bset; rcu_read_unlock(); + NORM_CAPS(cap_inheritable); + NORM_CAPS(cap_permitted); + NORM_CAPS(cap_effective); + NORM_CAPS(cap_bset); + render_cap_t(m, "CapInh:\t", &cap_inheritable); render_cap_t(m, "CapPrm:\t", &cap_permitted); render_cap_t(m, "CapEff:\t", &cap_effective); diff --git a/fs/proc/base.c b/fs/proc/base.c index 4823113f258..77377014887 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -1867,7 +1867,6 @@ static int proc_map_files_get_link(struct dentry *dentry, struct path *path) if (rc) goto out_mmput; - rc = -ENOENT; down_read(&mm->mmap_sem); vma = find_exact_vma(mm, vm_start, vm_end); if (vma && vma->vm_file) { diff --git a/fs/proc/page.c b/fs/proc/page.c index 2a8cc94bb64..b8730d9ebae 100644 --- a/fs/proc/page.c +++ b/fs/proc/page.c @@ -121,7 +121,7 @@ u64 stable_page_flags(struct page *page) * just checks PG_head/PG_tail, so we need to check PageLRU to make * sure a given page is a thp, not a non-huge compound page. */ - else if (PageTransCompound(page) && PageLRU(compound_head(page))) + else if (PageTransCompound(page) && PageLRU(compound_trans_head(page))) u |= 1 << KPF_THP; /* diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c index 3ba30825f38..e4bcb2cf055 100644 --- a/fs/pstore/inode.c +++ b/fs/pstore/inode.c @@ -316,10 +316,10 @@ int pstore_mkfile(enum pstore_type_id type, char *psname, u64 id, int count, sprintf(name, "dmesg-%s-%lld", psname, id); break; case PSTORE_TYPE_CONSOLE: - sprintf(name, "console-%s-%lld", psname, id); + sprintf(name, "console-%s", psname); break; case PSTORE_TYPE_FTRACE: - sprintf(name, "ftrace-%s-%lld", psname, id); + sprintf(name, "ftrace-%s", psname); break; case PSTORE_TYPE_MCE: sprintf(name, "mce-%s-%lld", psname, id); diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c index 7a10e047bc3..38802d68396 100644 --- a/fs/quota/dquot.c +++ b/fs/quota/dquot.c @@ -637,7 +637,7 @@ int dquot_writeback_dquots(struct super_block *sb, int type) dqstats_inc(DQST_LOOKUPS); err = sb->dq_op->write_dquot(dquot); if (!ret && err) - ret = err; + err = ret; dqput(dquot); spin_lock(&dq_list_lock); } diff --git a/fs/reiserfs/dir.c b/fs/reiserfs/dir.c index 2b96b59f75d..6c2d136561c 100644 --- a/fs/reiserfs/dir.c +++ b/fs/reiserfs/dir.c @@ -128,7 +128,6 @@ int reiserfs_readdir_dentry(struct dentry *dentry, void *dirent, char *d_name; off_t d_off; ino_t d_ino; - loff_t cur_pos = deh_offset(deh); if (!de_visible(deh)) /* it is hidden entry */ @@ -201,9 +200,8 @@ int reiserfs_readdir_dentry(struct dentry *dentry, void *dirent, if (local_buf != small_buf) { kfree(local_buf); } - - /* deh_offset(deh) may be invalid now. 
*/ - next_pos = cur_pos + 1; + // next entry should be looked for with such offset + next_pos = deh_offset(deh) + 1; if (item_moved(&tmp_ih, &path_to_entry)) { set_cpu_key_k_offset(&pos_key, diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c index 36166443bc4..f844533792e 100644 --- a/fs/reiserfs/inode.c +++ b/fs/reiserfs/inode.c @@ -3211,14 +3211,8 @@ int reiserfs_setattr(struct dentry *dentry, struct iattr *attr) attr->ia_size != i_size_read(inode)) { error = inode_newsize_ok(inode, attr->ia_size); if (!error) { - /* - * Could race against reiserfs_file_release - * if called from NFS, so take tailpack mutex. - */ - mutex_lock(&REISERFS_I(inode)->tailpack); truncate_setsize(inode, attr->ia_size); - reiserfs_truncate_file(inode, 1); - mutex_unlock(&REISERFS_I(inode)->tailpack); + reiserfs_vfs_truncate_file(inode); } } diff --git a/fs/super.c b/fs/super.c index e028b508db2..68307c02922 100644 --- a/fs/super.c +++ b/fs/super.c @@ -76,8 +76,6 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc) total_objects = sb->s_nr_dentry_unused + sb->s_nr_inodes_unused + fs_objects + 1; - if (!total_objects) - total_objects = 1; if (sc->nr_to_scan) { int dentries; diff --git a/fs/ubifs/commit.c b/fs/ubifs/commit.c index 26b69b2d4a4..ff8229340cd 100644 --- a/fs/ubifs/commit.c +++ b/fs/ubifs/commit.c @@ -166,10 +166,15 @@ static int do_commit(struct ubifs_info *c) err = ubifs_orphan_end_commit(c); if (err) goto out; + old_ltail_lnum = c->ltail_lnum; + err = ubifs_log_end_commit(c, new_ltail_lnum); + if (err) + goto out; err = dbg_check_old_index(c, &zroot); if (err) goto out; + mutex_lock(&c->mst_mutex); c->mst_node->cmt_no = cpu_to_le64(c->cmt_no); c->mst_node->log_lnum = cpu_to_le32(new_ltail_lnum); c->mst_node->root_lnum = cpu_to_le32(zroot.lnum); @@ -198,9 +203,8 @@ static int do_commit(struct ubifs_info *c) c->mst_node->flags |= cpu_to_le32(UBIFS_MST_NO_ORPHS); else c->mst_node->flags &= ~cpu_to_le32(UBIFS_MST_NO_ORPHS); - - old_ltail_lnum = c->ltail_lnum; - err = ubifs_log_end_commit(c, new_ltail_lnum); + err = ubifs_write_master(c); + mutex_unlock(&c->mst_mutex); if (err) goto out; diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c index 881324c0843..14374530784 100644 --- a/fs/ubifs/file.c +++ b/fs/ubifs/file.c @@ -1524,7 +1524,8 @@ static int ubifs_vm_page_mkwrite(struct vm_area_struct *vma, } wait_for_stable_page(page); - return VM_FAULT_LOCKED; + unlock_page(page); + return 0; out_unlock: unlock_page(page); diff --git a/fs/ubifs/log.c b/fs/ubifs/log.c index 06649d21b05..36bd4efd081 100644 --- a/fs/ubifs/log.c +++ b/fs/ubifs/log.c @@ -106,14 +106,10 @@ static inline long long empty_log_bytes(const struct ubifs_info *c) h = (long long)c->lhead_lnum * c->leb_size + c->lhead_offs; t = (long long)c->ltail_lnum * c->leb_size; - if (h > t) + if (h >= t) return c->log_bytes - h + t; - else if (h != t) - return t - h; - else if (c->lhead_lnum != c->ltail_lnum) - return 0; else - return c->log_bytes; + return t - h; } /** @@ -451,9 +447,9 @@ out: * @ltail_lnum: new log tail LEB number * * This function is called on when the commit operation was finished. It - * moves log tail to new position and updates the master node so that it stores - * the new log tail LEB number. Returns zero in case of success and a negative - * error code in case of failure. + * moves log tail to new position and unmaps LEBs which contain obsolete data. + * Returns zero in case of success and a negative error code in case of + * failure. 
*/ int ubifs_log_end_commit(struct ubifs_info *c, int ltail_lnum) { @@ -481,12 +477,7 @@ int ubifs_log_end_commit(struct ubifs_info *c, int ltail_lnum) spin_unlock(&c->buds_lock); err = dbg_check_bud_bytes(c); - if (err) - goto out; - err = ubifs_write_master(c); - -out: mutex_unlock(&c->log_mutex); return err; } diff --git a/fs/ubifs/master.c b/fs/ubifs/master.c index 1a4bb9e8b3b..ab83ace9910 100644 --- a/fs/ubifs/master.c +++ b/fs/ubifs/master.c @@ -352,9 +352,10 @@ int ubifs_read_master(struct ubifs_info *c) * ubifs_write_master - write master node. * @c: UBIFS file-system description object * - * This function writes the master node. Returns zero in case of success and a - * negative error code in case of failure. The master node is written twice to - * enable recovery. + * This function writes the master node. The caller has to take the + * @c->mst_mutex lock before calling this function. Returns zero in case of + * success and a negative error code in case of failure. The master node is + * written twice to enable recovery. */ int ubifs_write_master(struct ubifs_info *c) { diff --git a/fs/ubifs/shrinker.c b/fs/ubifs/shrinker.c index e0a7a764a90..9e1d05666fe 100644 --- a/fs/ubifs/shrinker.c +++ b/fs/ubifs/shrinker.c @@ -128,6 +128,7 @@ static int shrink_tnc(struct ubifs_info *c, int nr, int age, int *contention) freed = ubifs_destroy_tnc_subtree(znode); atomic_long_sub(freed, &ubifs_clean_zn_cnt); atomic_long_sub(freed, &c->clean_zn_cnt); + ubifs_assert(atomic_long_read(&c->clean_zn_cnt) >= 0); total_freed += freed; znode = zprev; } diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c index 05115d71940..879b9976c12 100644 --- a/fs/ubifs/super.c +++ b/fs/ubifs/super.c @@ -1970,6 +1970,7 @@ static struct ubifs_info *alloc_ubifs_info(struct ubi_volume_desc *ubi) mutex_init(&c->lp_mutex); mutex_init(&c->tnc_mutex); mutex_init(&c->log_mutex); + mutex_init(&c->mst_mutex); mutex_init(&c->umount_mutex); mutex_init(&c->bu_mutex); mutex_init(&c->write_reserve_mutex); diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h index bd51277f6fe..b2babce4d70 100644 --- a/fs/ubifs/ubifs.h +++ b/fs/ubifs/ubifs.h @@ -1042,6 +1042,7 @@ struct ubifs_debug_info; * * @mst_node: master node * @mst_offs: offset of valid master node + * @mst_mutex: protects the master node area, @mst_node, and @mst_offs * * @max_bu_buf_len: maximum bulk-read buffer length * @bu_mutex: protects the pre-allocated bulk-read buffer and @c->bu @@ -1281,6 +1282,7 @@ struct ubifs_info { struct ubifs_mst_node *mst_node; int mst_offs; + struct mutex mst_mutex; int max_bu_buf_len; struct mutex bu_mutex; diff --git a/fs/udf/inode.c b/fs/udf/inode.c index aa023283cc8..b6d15d34981 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -1270,22 +1270,13 @@ update_time: return 0; } -/* - * Maximum length of linked list formed by ICB hierarchy. The chosen number is - * arbitrary - just that we hopefully don't limit any real use of rewritten - * inode on write-once media but avoid looping for too long on corrupted media. - */ -#define UDF_MAX_ICB_NESTING 1024 - static void __udf_read_inode(struct inode *inode) { struct buffer_head *bh = NULL; struct fileEntry *fe; uint16_t ident; struct udf_inode_info *iinfo = UDF_I(inode); - unsigned int indirections = 0; -reread: /* * Set defaults, but the inode is still incomplete! 
* Note: get_new_inode() sets the following on a new inode: @@ -1322,26 +1313,28 @@ reread: ibh = udf_read_ptagged(inode->i_sb, &iinfo->i_location, 1, &ident); if (ident == TAG_IDENT_IE && ibh) { + struct buffer_head *nbh = NULL; struct kernel_lb_addr loc; struct indirectEntry *ie; ie = (struct indirectEntry *)ibh->b_data; loc = lelb_to_cpu(ie->indirectICB.extLocation); - if (ie->indirectICB.extLength) { - brelse(bh); - brelse(ibh); - memcpy(&iinfo->i_location, &loc, - sizeof(struct kernel_lb_addr)); - if (++indirections > UDF_MAX_ICB_NESTING) { - udf_err(inode->i_sb, - "too many ICBs in ICB hierarchy" - " (max %d supported)\n", - UDF_MAX_ICB_NESTING); - make_bad_inode(inode); + if (ie->indirectICB.extLength && + (nbh = udf_read_ptagged(inode->i_sb, &loc, 0, + &ident))) { + if (ident == TAG_IDENT_FE || + ident == TAG_IDENT_EFE) { + memcpy(&iinfo->i_location, + &loc, + sizeof(struct kernel_lb_addr)); + brelse(bh); + brelse(ibh); + brelse(nbh); + __udf_read_inode(inode); return; } - goto reread; + brelse(nbh); } } brelse(ibh); diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index cfbb4c1b2f1..41a695048be 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -1661,72 +1661,11 @@ xfs_vm_readpages( return mpage_readpages(mapping, pages, nr_pages, xfs_get_blocks); } -/* - * This is basically a copy of __set_page_dirty_buffers() with one - * small tweak: buffers beyond EOF do not get marked dirty. If we mark them - * dirty, we'll never be able to clean them because we don't write buffers - * beyond EOF, and that means we can't invalidate pages that span EOF - * that have been marked dirty. Further, the dirty state can leak into - * the file interior if the file is extended, resulting in all sorts of - * bad things happening as the state does not match the underlying data. - * - * XXX: this really indicates that bufferheads in XFS need to die. Warts like - * this only exist because of bufferheads and how the generic code manages them. - */ -STATIC int -xfs_vm_set_page_dirty( - struct page *page) -{ - struct address_space *mapping = page->mapping; - struct inode *inode = mapping->host; - loff_t end_offset; - loff_t offset; - int newly_dirty; - - if (unlikely(!mapping)) - return !TestSetPageDirty(page); - - end_offset = i_size_read(inode); - offset = page_offset(page); - - spin_lock(&mapping->private_lock); - if (page_has_buffers(page)) { - struct buffer_head *head = page_buffers(page); - struct buffer_head *bh = head; - - do { - if (offset < end_offset) - set_buffer_dirty(bh); - bh = bh->b_this_page; - offset += 1 << inode->i_blkbits; - } while (bh != head); - } - newly_dirty = !TestSetPageDirty(page); - spin_unlock(&mapping->private_lock); - - if (newly_dirty) { - /* sigh - __set_page_dirty() is static, so copy it here, too */ - unsigned long flags; - - spin_lock_irqsave(&mapping->tree_lock, flags); - if (page->mapping) { /* Race with truncate? 
*/ - WARN_ON_ONCE(!PageUptodate(page)); - account_page_dirtied(page, mapping); - radix_tree_tag_set(&mapping->page_tree, - page_index(page), PAGECACHE_TAG_DIRTY); - } - spin_unlock_irqrestore(&mapping->tree_lock, flags); - __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); - } - return newly_dirty; -} - const struct address_space_operations xfs_address_space_operations = { .readpage = xfs_vm_readpage, .readpages = xfs_vm_readpages, .writepage = xfs_vm_writepage, .writepages = xfs_vm_writepages, - .set_page_dirty = xfs_vm_set_page_dirty, .releasepage = xfs_vm_releasepage, .invalidatepage = xfs_vm_invalidatepage, .write_begin = xfs_vm_write_begin, diff --git a/fs/xfs/xfs_da_btree.c b/fs/xfs/xfs_da_btree.c index 79ddbaf9320..eca6f9d8a26 100644 --- a/fs/xfs/xfs_da_btree.c +++ b/fs/xfs/xfs_da_btree.c @@ -1334,7 +1334,7 @@ xfs_da3_fixhashpath( node = blk->bp->b_addr; xfs_da3_node_hdr_from_disk(&nodehdr, node); btree = xfs_da3_node_tree_p(node); - if (be32_to_cpu(btree[blk->index].hashval) == lasthash) + if (be32_to_cpu(btree->hashval) == lasthash) break; blk->hashval = lasthash; btree[blk->index].hashval = cpu_to_be32(lasthash); diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index bac3e1635b7..044e97a33c8 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -1104,8 +1104,7 @@ xfs_qm_dqflush( * Get the buffer containing the on-disk dquot */ error = xfs_trans_read_buf(mp, NULL, mp->m_ddev_targp, dqp->q_blkno, - mp->m_quotainfo->qi_dqchunklen, 0, &bp, - &xfs_dquot_buf_ops); + mp->m_quotainfo->qi_dqchunklen, 0, &bp, NULL); if (error) goto out_unlock; diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 9f457fedbcf..a5f2042aec8 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -298,16 +298,7 @@ xfs_file_aio_read( xfs_rw_iunlock(ip, XFS_IOLOCK_EXCL); return ret; } - - /* - * Invalidate whole pages. This can return an error if - * we fail to invalidate a page, but this should never - * happen on XFS. Warn if it does fail. - */ - ret = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping, - pos >> PAGE_CACHE_SHIFT, -1); - WARN_ON_ONCE(ret); - ret = 0; + truncate_pagecache_range(VFS_I(ip), pos, -1); } xfs_rw_ilock_demote(ip, XFS_IOLOCK_EXCL); } @@ -686,15 +677,7 @@ xfs_file_dio_aio_write( pos, -1); if (ret) goto out; - /* - * Invalidate whole pages. This can return an error if - * we fail to invalidate a page, but this should never - * happen on XFS. Warn if it does fail. - */ - ret = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping, - pos >> PAGE_CACHE_SHIFT, -1); - WARN_ON_ONCE(ret); - ret = 0; + truncate_pagecache_range(VFS_I(ip), pos, -1); } /* diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 29d1ca567ed..b75c9bb6e71 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -935,12 +935,6 @@ xfs_qm_dqiter_bufs( if (error) break; - /* - * A corrupt buffer might not have a verifier attached, so - * make sure we have the correct one attached before writeback - * occurs. 
- */ - bp->b_ops = &xfs_dquot_buf_ops; xfs_qm_reset_dqcounts(mp, bp, firstid, type); xfs_buf_delwri_queue(bp, buffer_list); xfs_buf_relse(bp); @@ -1024,7 +1018,7 @@ xfs_qm_dqiterate( xfs_buf_readahead(mp->m_ddev_targp, XFS_FSB_TO_DADDR(mp, rablkno), mp->m_quotainfo->qi_dqchunklen, - &xfs_dquot_buf_ops); + NULL); rablkno++; } } diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h index 17bccd3a4b0..b58268a5ddd 100644 --- a/include/asm-generic/pgtable.h +++ b/include/asm-generic/pgtable.h @@ -620,47 +620,32 @@ static inline int pmd_numa(pmd_t pmd) #ifndef pte_mknonnuma static inline pte_t pte_mknonnuma(pte_t pte) { - pteval_t val = pte_val(pte); - - val &= ~_PAGE_NUMA; - val |= (_PAGE_PRESENT|_PAGE_ACCESSED); - return __pte(val); + pte = pte_clear_flags(pte, _PAGE_NUMA); + return pte_set_flags(pte, _PAGE_PRESENT|_PAGE_ACCESSED); } #endif #ifndef pmd_mknonnuma static inline pmd_t pmd_mknonnuma(pmd_t pmd) { - pmdval_t val = pmd_val(pmd); - - val &= ~_PAGE_NUMA; - val |= (_PAGE_PRESENT|_PAGE_ACCESSED); - - return __pmd(val); + pmd = pmd_clear_flags(pmd, _PAGE_NUMA); + return pmd_set_flags(pmd, _PAGE_PRESENT|_PAGE_ACCESSED); } #endif #ifndef pte_mknuma static inline pte_t pte_mknuma(pte_t pte) { - pteval_t val = pte_val(pte); - - val &= ~_PAGE_PRESENT; - val |= _PAGE_NUMA; - - return __pte(val); + pte = pte_set_flags(pte, _PAGE_NUMA); + return pte_clear_flags(pte, _PAGE_PRESENT); } #endif #ifndef pmd_mknuma static inline pmd_t pmd_mknuma(pmd_t pmd) { - pmdval_t val = pmd_val(pmd); - - val &= ~_PAGE_PRESENT; - val |= _PAGE_NUMA; - - return __pmd(val); + pmd = pmd_set_flags(pmd, _PAGE_NUMA); + return pmd_clear_flags(pmd, _PAGE_PRESENT); } #endif #else diff --git a/include/drm/drm_pciids.h b/include/drm/drm_pciids.h index d7b717090f2..ecaef57f9f6 100644 --- a/include/drm/drm_pciids.h +++ b/include/drm/drm_pciids.h @@ -52,6 +52,7 @@ {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ + {0x1002, 0x4C6E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|RADEON_IS_MOBILITY}, \ {0x1002, 0x4E44, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \ {0x1002, 0x4E45, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \ {0x1002, 0x4E46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \ @@ -142,11 +143,8 @@ {0x1002, 0x6601, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6602, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6603, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ - {0x1002, 0x6604, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ - {0x1002, 0x6605, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6606, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6607, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ - {0x1002, 0x6608, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6610, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6611, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6613, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \ @@ -258,7 +256,6 @@ {0x1002, 0x6829, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \ {0x1002, 0x682A, PCI_ANY_ID, 
PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ {0x1002, 0x682B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ - {0x1002, 0x682C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \ {0x1002, 0x682D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ {0x1002, 0x682F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6830, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ diff --git a/include/linux/bitops.h b/include/linux/bitops.h index c1dde8e00d2..a3b6b82108b 100644 --- a/include/linux/bitops.h +++ b/include/linux/bitops.h @@ -185,21 +185,6 @@ static inline unsigned long __ffs64(u64 word) #ifdef __KERNEL__ -#ifndef set_mask_bits -#define set_mask_bits(ptr, _mask, _bits) \ -({ \ - const typeof(*ptr) mask = (_mask), bits = (_bits); \ - typeof(*ptr) old, new; \ - \ - do { \ - old = ACCESS_ONCE(*ptr); \ - new = (old & ~mask) | bits; \ - } while (cmpxchg(ptr, old, new) != old); \ - \ - new; \ -}) -#endif - #ifndef find_last_bit /** * find_last_bit - find the last set bit in a memory region diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 494d228a91d..2fdb4a451b4 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1187,9 +1187,10 @@ static inline int queue_alignment_offset(struct request_queue *q) static inline int queue_limit_alignment_offset(struct queue_limits *lim, sector_t sector) { unsigned int granularity = max(lim->physical_block_size, lim->io_min); - unsigned int alignment = sector_div(sector, granularity >> 9) << 9; + unsigned int alignment = (sector << 9) & (granularity - 1); - return (granularity + lim->alignment_offset - alignment) % granularity; + return (granularity + lim->alignment_offset - alignment) + & (granularity - 1); } static inline int bdev_alignment_offset(struct block_device *bdev) diff --git a/include/linux/capability.h b/include/linux/capability.h index 9b4378af414..d9a4f7f40f3 100644 --- a/include/linux/capability.h +++ b/include/linux/capability.h @@ -78,11 +78,8 @@ extern const kernel_cap_t __cap_init_eff_set; # error Fix up hand-coded capability macro initializers #else /* HAND-CODED capability initializers */ -#define CAP_LAST_U32 ((_KERNEL_CAPABILITY_U32S) - 1) -#define CAP_LAST_U32_VALID_MASK (CAP_TO_MASK(CAP_LAST_CAP + 1) -1) - # define CAP_EMPTY_SET ((kernel_cap_t){{ 0, 0 }}) -# define CAP_FULL_SET ((kernel_cap_t){{ ~0, CAP_LAST_U32_VALID_MASK }}) +# define CAP_FULL_SET ((kernel_cap_t){{ ~0, ~0 }}) # define CAP_FS_SET ((kernel_cap_t){{ CAP_FS_MASK_B0 \ | CAP_TO_MASK(CAP_LINUX_IMMUTABLE), \ CAP_FS_MASK_B1 } }) @@ -214,7 +211,7 @@ extern bool has_ns_capability_noaudit(struct task_struct *t, extern bool capable(int cap); extern bool ns_capable(struct user_namespace *ns, int cap); extern bool nsown_capable(int cap); -extern bool capable_wrt_inode_uidgid(const struct inode *inode, int cap); +extern bool inode_capable(const struct inode *inode, int cap); extern bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap); /* audit system wants to get cap info from files as well */ diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h index 6ade97de7a8..7c1420bb1dc 100644 --- a/include/linux/ceph/messenger.h +++ b/include/linux/ceph/messenger.h @@ -157,7 +157,7 @@ struct ceph_msg { bool front_is_vmalloc; bool more_to_follow; bool needs_out_seq; - int front_alloc_len; + int front_max; unsigned long ack_stamp; /* tx: 
when we were acked */ struct ceph_msgpool *pool; diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h index 4fb6a893895..8f47625a066 100644 --- a/include/linux/ceph/osd_client.h +++ b/include/linux/ceph/osd_client.h @@ -138,7 +138,6 @@ struct ceph_osd_request { __le64 *r_request_pool; void *r_request_pgid; __le32 *r_request_attempts; - bool r_paused; struct ceph_eversion *r_request_reassert_version; int r_result; diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index 02ae99e8e6d..24545cd90a2 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -37,9 +37,6 @@ __asm__ ("" : "=r"(__ptr) : "0"(ptr)); \ (typeof(ptr)) (__ptr + (off)); }) -/* Make the optimizer believe the variable can be manipulated arbitrarily. */ -#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var)) - #ifdef __CHECKER__ #define __must_be_array(arr) 0 #else diff --git a/include/linux/compiler-gcc5.h b/include/linux/compiler-gcc5.h deleted file mode 100644 index cdd1cc202d5..00000000000 --- a/include/linux/compiler-gcc5.h +++ /dev/null @@ -1,66 +0,0 @@ -#ifndef __LINUX_COMPILER_H -#error "Please don't include <linux/compiler-gcc5.h> directly, include <linux/compiler.h> instead." -#endif - -#define __used __attribute__((__used__)) -#define __must_check __attribute__((warn_unused_result)) -#define __compiler_offsetof(a, b) __builtin_offsetof(a, b) - -/* Mark functions as cold. gcc will assume any path leading to a call - to them will be unlikely. This means a lot of manual unlikely()s - are unnecessary now for any paths leading to the usual suspects - like BUG(), printk(), panic() etc. [but let's keep them for now for - older compilers] - - Early snapshots of gcc 4.3 don't support this and we can't detect this - in the preprocessor, but we can live with this because they're unreleased. - Maketime probing would be overkill here. - - gcc also has a __attribute__((__hot__)) to move hot functions into - a special section, but I don't see any sense in this right now in - the kernel context */ -#define __cold __attribute__((__cold__)) - -#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__) - -#ifndef __CHECKER__ -# define __compiletime_warning(message) __attribute__((warning(message))) -# define __compiletime_error(message) __attribute__((error(message))) -#endif /* __CHECKER__ */ - -/* - * Mark a position in code as unreachable. This can be used to - * suppress control flow warnings after asm blocks that transfer - * control elsewhere. - * - * Early snapshots of gcc 4.5 don't support this and we can't detect - * this in the preprocessor, but we can live with this because they're - * unreleased. Really, we need to have autoconf for the kernel. - */ -#define unreachable() __builtin_unreachable() - -/* Mark a function definition as prohibited from being cloned. */ -#define __noclone __attribute__((__noclone__)) - -/* - * Tell the optimizer that something else uses this function or variable. - */ -#define __visible __attribute__((externally_visible)) - -/* - * GCC 'asm goto' miscompiles certain code sequences: - * - * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 - * - * Work it around via a compiler barrier quirk suggested by Jakub Jelinek. - * Fixed in GCC 4.8.2 and later versions. - * - * (asm goto is automatically volatile - the naming reflects this.) - */ -#define asm_volatile_goto(x...) 
do { asm goto(x); asm (""); } while (0) - -#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP -#define __HAVE_BUILTIN_BSWAP32__ -#define __HAVE_BUILTIN_BSWAP64__ -#define __HAVE_BUILTIN_BSWAP16__ -#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */ diff --git a/include/linux/compiler-intel.h b/include/linux/compiler-intel.h index 5529c523942..dc1bd3dcf11 100644 --- a/include/linux/compiler-intel.h +++ b/include/linux/compiler-intel.h @@ -15,7 +15,6 @@ */ #undef barrier #undef RELOC_HIDE -#undef OPTIMIZER_HIDE_VAR #define barrier() __memory_barrier() @@ -24,12 +23,6 @@ __ptr = (unsigned long) (ptr); \ (typeof(ptr)) (__ptr + (off)); }) -/* This should act as an optimization barrier on var. - * Given that this compiler does not have inline assembly, a compiler barrier - * is the best we can do. - */ -#define OPTIMIZER_HIDE_VAR(var) barrier() - /* Intel ECC compiler doesn't support __builtin_types_compatible_p() */ #define __must_be_array(a) 0 diff --git a/include/linux/compiler.h b/include/linux/compiler.h index a2329c5e620..92669cd182a 100644 --- a/include/linux/compiler.h +++ b/include/linux/compiler.h @@ -170,10 +170,6 @@ void ftrace_likely_update(struct ftrace_branch_data *f, int val, int expect); (typeof(ptr)) (__ptr + (off)); }) #endif -#ifndef OPTIMIZER_HIDE_VAR -#define OPTIMIZER_HIDE_VAR(var) barrier() -#endif - /* Not-quite-unique ID. */ #ifndef __UNIQUE_ID # define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __LINE__) diff --git a/include/linux/firewire.h b/include/linux/firewire.h index 5d838bf10cb..217e4b42b7c 100644 --- a/include/linux/firewire.h +++ b/include/linux/firewire.h @@ -200,7 +200,6 @@ struct fw_device { unsigned irmc:1; unsigned bc_implemented:2; - work_func_t workfn; struct delayed_work work; struct fw_attribute_group attribute_group; }; diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 7a13848d635..99d0fbcbaf7 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -524,7 +524,6 @@ static inline int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_a extern int ftrace_arch_read_dyn_info(char *buf, int size); extern int skip_trace(unsigned long ip); -extern void ftrace_module_init(struct module *mod); extern void ftrace_disable_daemon(void); extern void ftrace_enable_daemon(void); @@ -534,7 +533,6 @@ static inline int ftrace_force_update(void) { return 0; } static inline void ftrace_disable_daemon(void) { } static inline void ftrace_enable_daemon(void) { } static inline void ftrace_release_mod(struct module *mod) {} -static inline void ftrace_module_init(struct module *mod) {} static inline int register_ftrace_command(struct ftrace_func_command *cmd) { return -EINVAL; diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h index b5e36017acd..120d57a1c3a 100644 --- a/include/linux/ftrace_event.h +++ b/include/linux/ftrace_event.h @@ -325,6 +325,10 @@ enum { FILTER_TRACE_FN, }; +#define EVENT_STORAGE_SIZE 128 +extern struct mutex event_storage_mutex; +extern char event_storage[EVENT_STORAGE_SIZE]; + extern int trace_event_raw_init(struct ftrace_event_call *call); extern int trace_define_field(struct ftrace_event_call *call, const char *type, const char *name, int offset, int size, diff --git a/include/linux/futex.h b/include/linux/futex.h index 6435f46d6e1..b0d95cac826 100644 --- a/include/linux/futex.h +++ b/include/linux/futex.h @@ -55,11 +55,7 @@ union futex_key { #ifdef CONFIG_FUTEX extern void exit_robust_list(struct task_struct *curr); extern void exit_pi_state_list(struct task_struct *curr); -#ifdef 
CONFIG_HAVE_FUTEX_CMPXCHG -#define futex_cmpxchg_enabled 1 -#else extern int futex_cmpxchg_enabled; -#endif #else static inline void exit_robust_list(struct task_struct *curr) { diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index a193bb3e413..528454c2caa 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -159,6 +159,23 @@ static inline int hpage_nr_pages(struct page *page) return HPAGE_PMD_NR; return 1; } +static inline struct page *compound_trans_head(struct page *page) +{ + if (PageTail(page)) { + struct page *head; + head = page->first_page; + smp_rmb(); + /* + * head may be a dangling pointer. + * __split_huge_page_refcount clears PageTail before + * overwriting first_page, so if PageTail is still + * there it means the head pointer isn't dangling. + */ + if (PageTail(page)) + return head; + } + return page; +} extern int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long addr, pmd_t pmd, pmd_t *pmdp); @@ -188,6 +205,7 @@ static inline int split_huge_page(struct page *page) do { } while (0) #define split_huge_page_pmd_mm(__mm, __address, __pmd) \ do { } while (0) +#define compound_trans_head(page) compound_head(page) static inline int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags, int advice) { diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h index 422eac8538f..c2559847d7e 100644 --- a/include/linux/hyperv.h +++ b/include/linux/hyperv.h @@ -483,18 +483,15 @@ hv_get_ringbuffer_availbytes(struct hv_ring_buffer_info *rbi, * 0 . 13 (Windows Server 2008) * 1 . 1 (Windows 7) * 2 . 4 (Windows 8) - * 3 . 0 (Windows 8 R2) */ #define VERSION_WS2008 ((0 << 16) | (13)) #define VERSION_WIN7 ((1 << 16) | (1)) #define VERSION_WIN8 ((2 << 16) | (4)) -#define VERSION_WIN8_1 ((3 << 16) | (0)) - #define VERSION_INVAL -1 -#define VERSION_CURRENT VERSION_WIN8_1 +#define VERSION_CURRENT VERSION_WIN8 /* Make maximum size of pipe payload of 16K */ #define MAX_PIPE_DATA_PAYLOAD (sizeof(u8) * 16384) @@ -897,7 +894,7 @@ struct vmbus_channel_relid_released { struct vmbus_channel_initiate_contact { struct vmbus_channel_message_header header; u32 vmbus_version_requested; - u32 target_vcpu; /* The VCPU the host should respond to */ + u32 padding2; u64 interrupt_page; u64 monitor_page1; u64 monitor_page2; diff --git a/include/linux/if_team.h b/include/linux/if_team.h index 25b8b15197b..16fae6436d0 100644 --- a/include/linux/if_team.h +++ b/include/linux/if_team.h @@ -193,7 +193,6 @@ struct team { bool user_carrier_enabled; bool queue_override_enabled; struct list_head *qom_lists; /* array of queue override mapping lists */ - bool port_mtu_change_allowed; long mode_priv[TEAM_MODE_PRIV_LONGS]; }; diff --git a/include/linux/iio/trigger.h b/include/linux/iio/trigger.h index 545deb14965..3869c525b05 100644 --- a/include/linux/iio/trigger.h +++ b/include/linux/iio/trigger.h @@ -83,12 +83,10 @@ static inline void iio_trigger_put(struct iio_trigger *trig) put_device(&trig->dev); } -static inline struct iio_trigger *iio_trigger_get(struct iio_trigger *trig) +static inline void iio_trigger_get(struct iio_trigger *trig) { get_device(&trig->dev); __module_get(trig->ops->owner); - - return trig; } /** diff --git a/include/linux/init_task.h b/include/linux/init_task.h index 998f4dfedec..5cd0f094992 100644 --- a/include/linux/init_task.h +++ b/include/linux/init_task.h @@ -40,7 +40,6 @@ extern struct fs_struct init_fs; #define INIT_SIGNALS(sig) { \ .nr_threads = 1, \ - .thread_head = LIST_HEAD_INIT(init_task.thread_node), \ 
.wait_chldexit = __WAIT_QUEUE_HEAD_INITIALIZER(sig.wait_chldexit),\ .shared_pending = { \ .list = LIST_HEAD_INIT(sig.shared_pending.list), \ @@ -214,7 +213,6 @@ extern struct task_group root_task_group; [PIDTYPE_SID] = INIT_PID_LINK(PIDTYPE_SID), \ }, \ .thread_group = LIST_HEAD_INIT(tsk.thread_group), \ - .thread_node = LIST_HEAD_INIT(init_signals.thread_head), \ INIT_IDS \ INIT_PERF_EVENTS(tsk) \ INIT_TRACE_IRQFLAGS \ diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h index 6de0f2c14ec..5fa5afeeb75 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h @@ -239,40 +239,7 @@ static inline int check_wakeup_irqs(void) { return 0; } extern cpumask_var_t irq_default_affinity; -/* Internal implementation. Use the helpers below */ -extern int __irq_set_affinity(unsigned int irq, const struct cpumask *cpumask, - bool force); - -/** - * irq_set_affinity - Set the irq affinity of a given irq - * @irq: Interrupt to set affinity - * @mask: cpumask - * - * Fails if cpumask does not contain an online CPU - */ -static inline int -irq_set_affinity(unsigned int irq, const struct cpumask *cpumask) -{ - return __irq_set_affinity(irq, cpumask, false); -} - -/** - * irq_force_affinity - Force the irq affinity of a given irq - * @irq: Interrupt to set affinity - * @mask: cpumask - * - * Same as irq_set_affinity, but without checking the mask against - * online cpus. - * - * Solely for low level cpu hotplug code, where we need to make per - * cpu interrupts affine before the cpu becomes online. - */ -static inline int -irq_force_affinity(unsigned int irq, const struct cpumask *cpumask) -{ - return __irq_set_affinity(irq, cpumask, true); -} - +extern int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask); extern int irq_can_set_affinity(unsigned int irq); extern int irq_select_affinity(unsigned int irq); @@ -308,11 +275,6 @@ static inline int irq_set_affinity(unsigned int irq, const struct cpumask *m) return -EINVAL; } -static inline int irq_force_affinity(unsigned int irq, const struct cpumask *cpumask) -{ - return 0; -} - static inline int irq_can_set_affinity(unsigned int irq) { return 0; diff --git a/include/linux/irq.h b/include/linux/irq.h index d591bfe1475..bc4e0661195 100644 --- a/include/linux/irq.h +++ b/include/linux/irq.h @@ -375,8 +375,7 @@ extern void remove_percpu_irq(unsigned int irq, struct irqaction *act); extern void irq_cpu_online(void); extern void irq_cpu_offline(void); -extern int irq_set_affinity_locked(struct irq_data *data, - const struct cpumask *cpumask, bool force); +extern int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *cpumask); #ifdef CONFIG_GENERIC_HARDIRQS diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h index 078bc2fc74f..623325e2ff9 100644 --- a/include/linux/irqdesc.h +++ b/include/linux/irqdesc.h @@ -27,8 +27,6 @@ struct irq_desc; * @irq_count: stats field to detect stalled irqs * @last_unhandled: aging timer for unhandled count * @irqs_unhandled: stats field for spurious unhandled interrupts - * @threads_handled: stats field for deferred spurious detection of threaded handlers - * @threads_handled_last: comparator field for deferred spurious detection of theraded handlers * @lock: locking for SMP * @affinity_hint: hint to user space for preferred irq affinity * @affinity_notify: context for notification of affinity changes @@ -54,8 +52,6 @@ struct irq_desc { unsigned int irq_count; /* For detecting broken IRQs */ unsigned long last_unhandled; /* Aging timer for unhandled count */ 
unsigned int irqs_unhandled; - atomic_t threads_handled; - int threads_handled_last; raw_spinlock_t lock; struct cpumask *percpu_enabled; #ifdef CONFIG_SMP diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h index c039fe1315e..8fb8edf1241 100644 --- a/include/linux/jiffies.h +++ b/include/linux/jiffies.h @@ -101,13 +101,13 @@ static inline u64 get_jiffies_64(void) #define time_after(a,b) \ (typecheck(unsigned long, a) && \ typecheck(unsigned long, b) && \ - ((long)((b) - (a)) < 0)) + ((long)(b) - (long)(a) < 0)) #define time_before(a,b) time_after(b,a) #define time_after_eq(a,b) \ (typecheck(unsigned long, a) && \ typecheck(unsigned long, b) && \ - ((long)((a) - (b)) >= 0)) + ((long)(a) - (long)(b) >= 0)) #define time_before_eq(a,b) time_after_eq(b,a) /* @@ -130,13 +130,13 @@ static inline u64 get_jiffies_64(void) #define time_after64(a,b) \ (typecheck(__u64, a) && \ typecheck(__u64, b) && \ - ((__s64)((b) - (a)) < 0)) + ((__s64)(b) - (__s64)(a) < 0)) #define time_before64(a,b) time_after64(b,a) #define time_after_eq64(a,b) \ (typecheck(__u64, a) && \ typecheck(__u64, b) && \ - ((__s64)((a) - (b)) >= 0)) + ((__s64)(a) - (__s64)(b) >= 0)) #define time_before_eq64(a,b) time_after_eq64(b,a) /* @@ -254,11 +254,23 @@ extern unsigned long preset_lpj; #define SEC_JIFFIE_SC (32 - SHIFT_HZ) #endif #define NSEC_JIFFIE_SC (SEC_JIFFIE_SC + 29) +#define USEC_JIFFIE_SC (SEC_JIFFIE_SC + 19) #define SEC_CONVERSION ((unsigned long)((((u64)NSEC_PER_SEC << SEC_JIFFIE_SC) +\ TICK_NSEC -1) / (u64)TICK_NSEC)) #define NSEC_CONVERSION ((unsigned long)((((u64)1 << NSEC_JIFFIE_SC) +\ TICK_NSEC -1) / (u64)TICK_NSEC)) +#define USEC_CONVERSION \ + ((unsigned long)((((u64)NSEC_PER_USEC << USEC_JIFFIE_SC) +\ + TICK_NSEC -1) / (u64)TICK_NSEC)) +/* + * USEC_ROUND is used in the timeval to jiffie conversion. See there + * for more details. It is the scaled resolution rounding value. Note + * that it is a 64-bit value. Since, when it is applied, we are already + * in jiffies (albit scaled), it is nothing but the bits we will shift + * off. + */ +#define USEC_ROUND (u64)(((u64)1 << USEC_JIFFIE_SC) - 1) /* * The maximum jiffie value is (MAX_INT >> 1). Here we translate that * into seconds. The 64-bit case will overflow if we are not careful, diff --git a/include/linux/libata.h b/include/linux/libata.h index cc82cfb6625..f33619d8ac5 100644 --- a/include/linux/libata.h +++ b/include/linux/libata.h @@ -547,7 +547,6 @@ struct ata_host { struct device *dev; void __iomem * const *iomap; unsigned int n_ports; - unsigned int n_tags; /* nr of NCQ tags */ void *private_data; struct ata_port_operations *ops; unsigned long flags; @@ -773,7 +772,6 @@ struct ata_port { unsigned long qc_allocated; unsigned int qc_active; int nr_active_links; /* #links with active qcs */ - unsigned int last_tag; /* track next tag hw expects */ struct ata_link link; /* host default link */ struct ata_link *slave_link; /* see ata_slave_link_init() */ diff --git a/include/linux/list.h b/include/linux/list.h index 83a9576f479..b83e5657365 100644 --- a/include/linux/list.h +++ b/include/linux/list.h @@ -373,22 +373,6 @@ static inline void list_splice_tail_init(struct list_head *list, (!list_empty(ptr) ? list_first_entry(ptr, type, member) : NULL) /** - * list_next_entry - get the next element in list - * @pos: the type * to cursor - * @member: the name of the list_struct within the struct. 
- */ -#define list_next_entry(pos, member) \ - list_entry((pos)->member.next, typeof(*(pos)), member) - -/** - * list_prev_entry - get the prev element in list - * @pos: the type * to cursor - * @member: the name of the list_struct within the struct. - */ -#define list_prev_entry(pos, member) \ - list_entry((pos)->member.prev, typeof(*(pos)), member) - -/** * list_for_each - iterate over a list * @pos: the &struct list_head to use as a loop cursor. * @head: the head for your list. diff --git a/include/linux/mm.h b/include/linux/mm.h index ae19523871c..8b8d3c2d4b5 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -363,18 +363,8 @@ static inline void compound_unlock_irqrestore(struct page *page, static inline struct page *compound_head(struct page *page) { - if (unlikely(PageTail(page))) { - struct page *head = page->first_page; - - /* - * page->first_page may be a dangling pointer to an old - * compound page, so recheck that it is still a tail - * page before returning. - */ - smp_rmb(); - if (likely(PageTail(page))) - return head; - } + if (unlikely(PageTail(page))) + return page->first_page; return page; } @@ -1011,7 +1001,6 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping, extern void truncate_pagecache(struct inode *inode, loff_t old, loff_t new); extern void truncate_setsize(struct inode *inode, loff_t newsize); -void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to); void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end); int truncate_inode_page(struct address_space *mapping, struct page *page); int generic_error_remove_page(struct address_space *mapping, struct page *page); diff --git a/include/linux/mount.h b/include/linux/mount.h index 8eeb8f6ab11..73005f9957e 100644 --- a/include/linux/mount.h +++ b/include/linux/mount.h @@ -42,18 +42,11 @@ struct mnt_namespace; * flag, consider how it interacts with shared mounts. 
*/ #define MNT_SHARED_MASK (MNT_UNBINDABLE) -#define MNT_USER_SETTABLE_MASK (MNT_NOSUID | MNT_NODEV | MNT_NOEXEC \ - | MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME \ - | MNT_READONLY) +#define MNT_PROPAGATION_MASK (MNT_SHARED | MNT_UNBINDABLE) -#define MNT_ATIME_MASK (MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME ) #define MNT_INTERNAL 0x4000 -#define MNT_LOCK_ATIME 0x040000 -#define MNT_LOCK_NOEXEC 0x080000 -#define MNT_LOCK_NOSUID 0x100000 -#define MNT_LOCK_NODEV 0x200000 #define MNT_LOCK_READONLY 0x400000 struct vfsmount { diff --git a/include/linux/netlink.h b/include/linux/netlink.h index 9516dad4510..6358da5eeee 100644 --- a/include/linux/netlink.h +++ b/include/linux/netlink.h @@ -16,10 +16,9 @@ static inline struct nlmsghdr *nlmsg_hdr(const struct sk_buff *skb) } enum netlink_skb_flags { - NETLINK_SKB_MMAPED = 0x1, /* Packet data is mmaped */ - NETLINK_SKB_TX = 0x2, /* Packet was sent by userspace */ - NETLINK_SKB_DELIVERED = 0x4, /* Packet was delivered */ - NETLINK_SKB_DST = 0x8, /* Dst set in sendto or sendmsg */ + NETLINK_SKB_MMAPED = 0x1, /* Packet data is mmaped */ + NETLINK_SKB_TX = 0x2, /* Packet was sent by userspace */ + NETLINK_SKB_DELIVERED = 0x4, /* Packet was delivered */ }; struct netlink_skb_parms { @@ -145,11 +144,4 @@ static inline int netlink_dump_start(struct sock *ssk, struct sk_buff *skb, return __netlink_dump_start(ssk, skb, nlh, control); } -bool __netlink_ns_capable(const struct netlink_skb_parms *nsp, - struct user_namespace *ns, int cap); -bool netlink_ns_capable(const struct sk_buff *skb, - struct user_namespace *ns, int cap); -bool netlink_capable(const struct sk_buff *skb, int cap); -bool netlink_net_capable(const struct sk_buff *skb, int cap); - #endif /* __LINUX_NETLINK_H */ diff --git a/include/linux/of.h b/include/linux/of.h index 1eca53b80af..ec7b6b60c3b 100644 --- a/include/linux/of.h +++ b/include/linux/of.h @@ -252,12 +252,14 @@ extern int of_property_read_u64(const struct device_node *np, extern int of_property_read_string(struct device_node *np, const char *propname, const char **out_string); +extern int of_property_read_string_index(struct device_node *np, + const char *propname, + int index, const char **output); extern int of_property_match_string(struct device_node *np, const char *propname, const char *string); -extern int of_property_read_string_helper(struct device_node *np, - const char *propname, - const char **out_strs, size_t sz, int index); +extern int of_property_count_strings(struct device_node *np, + const char *propname); extern int of_device_is_compatible(const struct device_node *device, const char *); extern int of_device_is_available(const struct device_node *device); @@ -437,9 +439,15 @@ static inline int of_property_read_string(struct device_node *np, return -ENOSYS; } -static inline int of_property_read_string_helper(struct device_node *np, - const char *propname, - const char **out_strs, size_t sz, int index) +static inline int of_property_read_string_index(struct device_node *np, + const char *propname, int index, + const char **out_string) +{ + return -ENOSYS; +} + +static inline int of_property_count_strings(struct device_node *np, + const char *propname) { return -ENOSYS; } @@ -515,70 +523,6 @@ static inline int of_node_to_nid(struct device_node *np) #endif /** - * of_property_read_string_array() - Read an array of strings from a multiple - * strings property. - * @np: device node from which the property value is to be read. - * @propname: name of the property to be searched. 
- * @out_strs: output array of string pointers. - * @sz: number of array elements to read. - * - * Search for a property in a device tree node and retrieve a list of - * terminated string values (pointer to data, not a copy) in that property. - * - * If @out_strs is NULL, the number of strings in the property is returned. - */ -static inline int of_property_read_string_array(struct device_node *np, - const char *propname, const char **out_strs, - size_t sz) -{ - return of_property_read_string_helper(np, propname, out_strs, sz, 0); -} - -/** - * of_property_count_strings() - Find and return the number of strings from a - * multiple strings property. - * @np: device node from which the property value is to be read. - * @propname: name of the property to be searched. - * - * Search for a property in a device tree node and retrieve the number of null - * terminated string contain in it. Returns the number of strings on - * success, -EINVAL if the property does not exist, -ENODATA if property - * does not have a value, and -EILSEQ if the string is not null-terminated - * within the length of the property data. - */ -static inline int of_property_count_strings(struct device_node *np, - const char *propname) -{ - return of_property_read_string_helper(np, propname, NULL, 0, 0); -} - -/** - * of_property_read_string_index() - Find and read a string from a multiple - * strings property. - * @np: device node from which the property value is to be read. - * @propname: name of the property to be searched. - * @index: index of the string in the list of strings - * @out_string: pointer to null terminated return string, modified only if - * return value is 0. - * - * Search for a property in a device tree node and retrieve a null - * terminated string value (pointer to data, not a copy) in the list of strings - * contained in that property. - * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if - * property does not have a value, and -EILSEQ if the string is not - * null-terminated within the length of the property data. - * - * The out_string pointer is modified only if a valid string can be decoded. - */ -static inline int of_property_read_string_index(struct device_node *np, - const char *propname, - int index, const char **output) -{ - int rc = of_property_read_string_helper(np, propname, output, 1, index); - return rc < 0 ? rc : 0; -} - -/** * of_property_read_bool - Findfrom a property * @np: device node from which the property value is to be read. * @propname: name of the property to be searched. 
diff --git a/include/linux/oom.h b/include/linux/oom.h index 297cda52885..da60007075b 100644 --- a/include/linux/oom.h +++ b/include/linux/oom.h @@ -50,9 +50,6 @@ static inline bool oom_task_origin(const struct task_struct *p) extern unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg, const nodemask_t *nodemask, unsigned long totalpages); - -extern int oom_kills_count(void); -extern void note_oom_kill(void); extern void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order, unsigned int points, unsigned long totalpages, struct mem_cgroup *memcg, nodemask_t *nodemask, diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 229a757e1c1..c5b6dbf9c2f 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -695,17 +695,10 @@ static inline void perf_callchain_store(struct perf_callchain_entry *entry, u64 extern int sysctl_perf_event_paranoid; extern int sysctl_perf_event_mlock; extern int sysctl_perf_event_sample_rate; -extern int sysctl_perf_cpu_time_max_percent; - -extern void perf_sample_event_took(u64 sample_len_ns); extern int perf_proc_update_handler(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos); -extern int perf_cpu_time_max_percent_handler(struct ctl_table *table, int write, - void __user *buffer, size_t *lenp, - loff_t *ppos); - static inline bool perf_paranoid_tracepoint_raw(void) { diff --git a/include/linux/printk.h b/include/linux/printk.h index 708b8a84f6c..22c7052e937 100644 --- a/include/linux/printk.h +++ b/include/linux/printk.h @@ -124,9 +124,9 @@ asmlinkage __printf(1, 2) __cold int printk(const char *fmt, ...); /* - * Special printk facility for scheduler/timekeeping use only, _DO_NOT_USE_ ! + * Special printk facility for scheduler use only, _DO_NOT_USE_ ! */ -__printf(1, 2) __cold int printk_deferred(const char *fmt, ...); +__printf(1, 2) __cold int printk_sched(const char *fmt, ...); /* * Please don't use printk_ratelimit(), because it shares ratelimiting state @@ -161,7 +161,7 @@ int printk(const char *s, ...) return 0; } static inline __printf(1, 2) __cold -int printk_deferred(const char *s, ...) +int printk_sched(const char *s, ...) { return 0; } diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h index bb980ae6d9d..89573a33ab3 100644 --- a/include/linux/ptrace.h +++ b/include/linux/ptrace.h @@ -5,7 +5,6 @@ #include <linux/sched.h> /* For struct task_struct. */ #include <linux/err.h> /* for IS_ERR_VALUE */ #include <linux/bug.h> /* For BUG_ON. */ -#include <linux/pid_namespace.h> /* For task_active_pid_ns. */ #include <uapi/linux/ptrace.h> /* @@ -130,37 +129,6 @@ static inline void ptrace_event(int event, unsigned long message) } /** - * ptrace_event_pid - possibly stop for a ptrace event notification - * @event: %PTRACE_EVENT_* value to report - * @pid: process identifier for %PTRACE_GETEVENTMSG to return - * - * Check whether @event is enabled and, if so, report @event and @pid - * to the ptrace parent. @pid is reported as the pid_t seen from the - * the ptrace parent's pid namespace. - * - * Called without locks. - */ -static inline void ptrace_event_pid(int event, struct pid *pid) -{ - /* - * FIXME: There's a potential race if a ptracer in a different pid - * namespace than parent attaches between computing message below and - * when we acquire tasklist_lock in ptrace_stop(). If this happens, - * the ptracer will get a bogus pid from PTRACE_GETEVENTMSG. 
- */ - unsigned long message = 0; - struct pid_namespace *ns; - - rcu_read_lock(); - ns = task_active_pid_ns(rcu_dereference(current->parent)); - if (ns) - message = pid_nr_ns(pid, ns); - rcu_read_unlock(); - - ptrace_event(event, message); -} - -/** * ptrace_init_task - initialize ptrace state for a new child * @child: new child task * @ptrace: true if child should be ptrace'd by parent's tracer @@ -337,9 +305,6 @@ static inline void user_single_step_siginfo(struct task_struct *tsk, * calling arch_ptrace_stop() when it would be superfluous. For example, * if the thread has not been back to user mode since the last stop, the * thread state might indicate that nothing needs to be done. - * - * This is guaranteed to be invoked once before a task stops for ptrace and - * may include arch-specific operations necessary prior to a ptrace stop. */ #define arch_ptrace_stop_needed(code, info) (0) #endif diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h index 49a4d6f5910..d69cf637a15 100644 --- a/include/linux/ring_buffer.h +++ b/include/linux/ring_buffer.h @@ -97,7 +97,7 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k __ring_buffer_alloc((size), (flags), &__key); \ }) -int ring_buffer_wait(struct ring_buffer *buffer, int cpu); +void ring_buffer_wait(struct ring_buffer *buffer, int cpu); int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu, struct file *filp, poll_table *poll_table); diff --git a/include/linux/sched.h b/include/linux/sched.h index 54ff6adb261..c4398dffb74 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -497,7 +497,6 @@ struct signal_struct { atomic_t sigcnt; atomic_t live; int nr_threads; - struct list_head thread_head; wait_queue_head_t wait_chldexit; /* for wait4() */ @@ -1179,7 +1178,6 @@ struct task_struct { /* PID/PID hash table linkage. */ struct pid_link pids[PIDTYPE_MAX]; struct list_head thread_group; - struct list_head thread_node; struct completion *vfork_done; /* for vfork() */ int __user *set_child_tid; /* CLONE_CHILD_SETTID */ @@ -1708,13 +1706,11 @@ extern int task_free_unregister(struct notifier_block *n); #define tsk_used_math(p) ((p)->flags & PF_USED_MATH) #define used_math() tsk_used_math(current) -/* __GFP_IO isn't allowed if PF_MEMALLOC_NOIO is set in current->flags - * __GFP_FS is also cleared as it implies __GFP_IO. - */ +/* __GFP_IO isn't allowed if PF_MEMALLOC_NOIO is set in current->flags */ static inline gfp_t memalloc_noio_flags(gfp_t flags) { if (unlikely(current->flags & PF_MEMALLOC_NOIO)) - flags &= ~(__GFP_IO | __GFP_FS); + flags &= ~__GFP_IO; return flags; } @@ -2209,16 +2205,6 @@ extern bool current_is_single_threaded(void); #define while_each_thread(g, t) \ while ((t = next_thread(t)) != g) -#define __for_each_thread(signal, t) \ - list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node) - -#define for_each_thread(p, t) \ - __for_each_thread((p)->signal, t) - -/* Careful: this is a double loop, 'break' won't work as expected. 
*/ -#define for_each_process_thread(p, t) \ - for_each_process(p) for_each_thread(p, t) - static inline int get_nr_threads(struct task_struct *tsk) { return tsk->signal->nr_threads; diff --git a/include/linux/sock_diag.h b/include/linux/sock_diag.h index 46cca4c0684..54f91d35e5f 100644 --- a/include/linux/sock_diag.h +++ b/include/linux/sock_diag.h @@ -23,7 +23,7 @@ int sock_diag_check_cookie(void *sk, __u32 *cookie); void sock_diag_save_cookie(void *sk, __u32 *cookie); int sock_diag_put_meminfo(struct sock *sk, struct sk_buff *skb, int attr); -int sock_diag_put_filterinfo(bool may_report_filterinfo, struct sock *sk, +int sock_diag_put_filterinfo(struct user_namespace *user_ns, struct sock *sk, struct sk_buff *skb, int attrtype); #endif diff --git a/include/linux/string.h b/include/linux/string.h index 0ed878d0465..ac889c5ea11 100644 --- a/include/linux/string.h +++ b/include/linux/string.h @@ -129,7 +129,7 @@ int bprintf(u32 *bin_buf, size_t size, const char *fmt, ...) __printf(3, 4); #endif extern ssize_t memory_read_from_buffer(void *to, size_t count, loff_t *ppos, - const void *from, size_t available); + const void *from, size_t available); /** * strstarts - does @str start with @prefix? @@ -141,8 +141,7 @@ static inline bool strstarts(const char *str, const char *prefix) return strncmp(str, prefix, strlen(prefix)) == 0; } -size_t memweight(const void *ptr, size_t bytes); -void memzero_explicit(void *s, size_t count); +extern size_t memweight(const void *ptr, size_t bytes); /** * kbasename - return the last part of a pathname. diff --git a/include/linux/sunrpc/svc_xprt.h b/include/linux/sunrpc/svc_xprt.h index f5bfb1a80ab..b05963f09eb 100644 --- a/include/linux/sunrpc/svc_xprt.h +++ b/include/linux/sunrpc/svc_xprt.h @@ -32,7 +32,6 @@ struct svc_xprt_class { struct svc_xprt_ops *xcl_ops; struct list_head xcl_list; u32 xcl_max_payload; - int xcl_ident; }; /* diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h index 947009ed599..62fd1b756e9 100644 --- a/include/linux/sunrpc/svcsock.h +++ b/include/linux/sunrpc/svcsock.h @@ -56,7 +56,6 @@ int svc_recv(struct svc_rqst *, long); int svc_send(struct svc_rqst *); void svc_drop(struct svc_rqst *); void svc_sock_update_bufs(struct svc_serv *serv); -bool svc_alien_sock(struct net *net, int fd); int svc_addsock(struct svc_serv *serv, const int fd, char *name_return, const size_t len); void svc_init_xprt_sock(void); diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h index ba605015c4d..f8e084d0fc7 100644 --- a/include/linux/tracepoint.h +++ b/include/linux/tracepoint.h @@ -60,12 +60,6 @@ struct tp_module { unsigned int num_tracepoints; struct tracepoint * const *tracepoints_ptrs; }; -bool trace_module_has_bad_taint(struct module *mod); -#else -static inline bool trace_module_has_bad_taint(struct module *mod) -{ - return false; -} #endif /* CONFIG_MODULES */ struct tracepoint_iter { diff --git a/include/linux/usb/quirks.h b/include/linux/usb/quirks.h index 49587dc22f5..52f944dfe2f 100644 --- a/include/linux/usb/quirks.h +++ b/include/linux/usb/quirks.h @@ -30,7 +30,4 @@ descriptor */ #define USB_QUIRK_DELAY_INIT 0x00000040 -/* device generates spurious wakeup, ignore remote wakeup capability */ -#define USB_QUIRK_IGNORE_REMOTE_WAKEUP 0x00000200 - #endif /* __LINUX_USB_QUIRKS_H */ diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h index 1bd1d21578d..1bbd28ed95b 100644 --- a/include/linux/usb/usbnet.h +++ b/include/linux/usb/usbnet.h @@ -30,7 +30,7 @@ struct usbnet { struct driver_info 
*driver_info; const char *driver_name; void *driver_priv; - wait_queue_head_t wait; + wait_queue_head_t *wait; struct mutex phy_mutex; unsigned char suspend_count; unsigned char pkt_cnt, pkt_err; diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h index ff28cf578d0..623488fdc1f 100644 --- a/include/linux/workqueue.h +++ b/include/linux/workqueue.h @@ -414,7 +414,7 @@ __alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active, #define create_freezable_workqueue(name) \ alloc_workqueue((name), WQ_FREEZABLE | WQ_UNBOUND | WQ_MEM_RECLAIM, 1) #define create_singlethread_workqueue(name) \ - alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name) + alloc_workqueue((name), WQ_UNBOUND | WQ_MEM_RECLAIM, 1) extern void destroy_workqueue(struct workqueue_struct *wq); diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h index 2cc4e0df9c5..d88a098d1af 100644 --- a/include/media/videobuf2-core.h +++ b/include/media/videobuf2-core.h @@ -318,9 +318,6 @@ struct v4l2_fh; * @done_wq: waitqueue for processes waiting for buffers ready to be dequeued * @alloc_ctx: memory type/allocator-specific contexts for each plane * @streaming: current streaming state - * @waiting_for_buffers: used in poll() to check if vb2 is still waiting for - * buffers. Only set for capture queues if qbuf has not yet been - * called since poll() needs to return POLLERR in that situation. * @fileio: file io emulator internal data, used only if emulator is active */ struct vb2_queue { @@ -353,7 +350,6 @@ struct vb2_queue { unsigned int plane_sizes[VIDEO_MAX_PLANES]; unsigned int streaming:1; - unsigned int waiting_for_buffers:1; struct vb2_fileio_data *fileio; }; diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h index 0a8f6f961ba..de2c78529af 100644 --- a/include/net/inet_connection_sock.h +++ b/include/net/inet_connection_sock.h @@ -62,7 +62,6 @@ struct inet_connection_sock_af_ops { void (*addr2sockaddr)(struct sock *sk, struct sockaddr *); int (*bind_conflict)(const struct sock *sk, const struct inet_bind_bucket *tb, bool relax); - void (*mtu_reduced)(struct sock *sk); }; /** inet_connection_sock - INET connection oriented sock diff --git a/include/net/inetpeer.h b/include/net/inetpeer.h index bb06fd26a7b..53f464d7cdd 100644 --- a/include/net/inetpeer.h +++ b/include/net/inetpeer.h @@ -41,13 +41,14 @@ struct inet_peer { struct rcu_head gc_rcu; }; /* - * Once inet_peer is queued for deletion (refcnt == -1), following field - * is not available: rid + * Once inet_peer is queued for deletion (refcnt == -1), following fields + * are not available: rid, ip_id_count * We can share memory with rcu_head to help keep inet_peer small. 
*/ union { struct { atomic_t rid; /* Frag reception counter */ + atomic_t ip_id_count; /* IP ID for the next packet */ }; struct rcu_head rcu; struct inet_peer *gc_next; @@ -165,7 +166,7 @@ extern void inetpeer_invalidate_tree(struct inet_peer_base *); extern void inetpeer_invalidate_family(int family); /* - * temporary check to make sure we dont access rid, tcp_ts, + * temporary check to make sure we dont access rid, ip_id_count, tcp_ts, * tcp_ts_stamp if no refcount is taken on inet_peer */ static inline void inet_peer_refcheck(const struct inet_peer *p) @@ -173,4 +174,20 @@ static inline void inet_peer_refcheck(const struct inet_peer *p) WARN_ON_ONCE(atomic_read(&p->refcnt) <= 0); } + +/* can be called with or without local BH being disabled */ +static inline int inet_getid(struct inet_peer *p, int more) +{ + int old, new; + more++; + inet_peer_refcheck(p); + do { + old = atomic_read(&p->ip_id_count); + new = old + more; + if (!new) + new = 1; + } while (atomic_cmpxchg(&p->ip_id_count, old, new) != old); + return new; +} + #endif /* _NET_INETPEER_H */ diff --git a/include/net/ip.h b/include/net/ip.h index 1800b216b7a..97aa4ba4d17 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -255,10 +255,9 @@ int ip_dont_fragment(struct sock *sk, struct dst_entry *dst) !(dst_metric_locked(dst, RTAX_MTU))); } -u32 ip_idents_reserve(u32 hash, int segs); -void __ip_select_ident(struct iphdr *iph, int segs); +extern void __ip_select_ident(struct iphdr *iph, struct dst_entry *dst, int more); -static inline void ip_select_ident_segs(struct sk_buff *skb, struct sock *sk, int segs) +static inline void ip_select_ident(struct sk_buff *skb, struct dst_entry *dst, struct sock *sk) { struct iphdr *iph = ip_hdr(skb); @@ -268,20 +267,24 @@ static inline void ip_select_ident_segs(struct sk_buff *skb, struct sock *sk, in * does not change, they drop every other packet in * a TCP stream using header compression. */ - if (sk && inet_sk(sk)->inet_daddr) { - iph->id = htons(inet_sk(sk)->inet_id); - inet_sk(sk)->inet_id += segs; - } else { - iph->id = 0; - } - } else { - __ip_select_ident(iph, segs); - } + iph->id = (sk && inet_sk(sk)->inet_daddr) ? + htons(inet_sk(sk)->inet_id++) : 0; + } else + __ip_select_ident(iph, dst, 0); } -static inline void ip_select_ident(struct sk_buff *skb, struct sock *sk) +static inline void ip_select_ident_more(struct sk_buff *skb, struct dst_entry *dst, struct sock *sk, int more) { - ip_select_ident_segs(skb, sk, 1); + struct iphdr *iph = ip_hdr(skb); + + if ((iph->frag_off & htons(IP_DF)) && !skb->local_df) { + if (sk && inet_sk(sk)->inet_daddr) { + iph->id = htons(inet_sk(sk)->inet_id); + inet_sk(sk)->inet_id += 1 + more; + } else + iph->id = 0; + } else + __ip_select_ident(iph, dst, more); } /* diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h index 8d977b34364..b906f4a131a 100644 --- a/include/net/ip6_route.h +++ b/include/net/ip6_route.h @@ -32,11 +32,6 @@ struct route_info { #define RT6_LOOKUP_F_SRCPREF_PUBLIC 0x00000010 #define RT6_LOOKUP_F_SRCPREF_COA 0x00000020 -/* We do not (yet ?) 
support IPv6 jumbograms (RFC 2675) - * Unlike IPv4, hdr->seg_len doesn't include the IPv6 header - */ -#define IP6_MAX_MTU (0xFFFF + sizeof(struct ipv6hdr)) - /* * rt6_srcprefs2flags() and rt6_flags2srcprefs() translate * between IPV6_ADDR_PREFERENCES socket option values diff --git a/include/net/ipv6.h b/include/net/ipv6.h index 27e9ba47b30..67b43806b62 100644 --- a/include/net/ipv6.h +++ b/include/net/ipv6.h @@ -539,19 +539,14 @@ static inline u32 ipv6_addr_hash(const struct in6_addr *a) } /* more secured version of ipv6_addr_hash() */ -static inline u32 __ipv6_addr_jhash(const struct in6_addr *a, const u32 initval) +static inline u32 ipv6_addr_jhash(const struct in6_addr *a) { u32 v = (__force u32)a->s6_addr32[0] ^ (__force u32)a->s6_addr32[1]; return jhash_3words(v, (__force u32)a->s6_addr32[2], (__force u32)a->s6_addr32[3], - initval); -} - -static inline u32 ipv6_addr_jhash(const struct in6_addr *a) -{ - return __ipv6_addr_jhash(a, ipv6_hash_secret); + ipv6_hash_secret); } static inline bool ipv6_addr_loopback(const struct in6_addr *a) @@ -663,6 +658,8 @@ static inline int ipv6_addr_diff(const struct in6_addr *a1, const struct in6_add return __ipv6_addr_diff(a1, a2, sizeof(struct in6_addr)); } +extern void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt); + /* * Header manipulation */ diff --git a/include/net/netfilter/nf_conntrack_extend.h b/include/net/netfilter/nf_conntrack_extend.h index 86dd7dd3d61..331310851cf 100644 --- a/include/net/netfilter/nf_conntrack_extend.h +++ b/include/net/netfilter/nf_conntrack_extend.h @@ -41,8 +41,8 @@ enum nf_ct_ext_id { /* Extensions: optional stuff which isn't permanently in struct. */ struct nf_ct_ext { struct rcu_head rcu; - u16 offset[NF_CT_EXT_NUM]; - u16 len; + u8 offset[NF_CT_EXT_NUM]; + u8 len; char data[0]; }; diff --git a/include/net/sctp/command.h b/include/net/sctp/command.h index 5f39c1cc076..35247271e55 100644 --- a/include/net/sctp/command.h +++ b/include/net/sctp/command.h @@ -118,7 +118,7 @@ typedef enum { * analysis of the state functions, but in reality just taken from * thin air in the hopes othat we don't trigger a kernel panic. */ -#define SCTP_MAX_NUM_COMMANDS 20 +#define SCTP_MAX_NUM_COMMANDS 14 typedef union { __s32 i32; diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h index da6b9a01ff7..1bd4c4144fe 100644 --- a/include/net/sctp/structs.h +++ b/include/net/sctp/structs.h @@ -1252,7 +1252,6 @@ struct sctp_endpoint { /* SCTP-AUTH: endpoint shared keys */ struct list_head endpoint_shared_keys; __u16 active_key_id; - __u8 auth_enable; }; /* Recover the outter endpoint structure. 
*/ @@ -1281,8 +1280,7 @@ struct sctp_endpoint *sctp_endpoint_is_match(struct sctp_endpoint *, int sctp_has_association(struct net *net, const union sctp_addr *laddr, const union sctp_addr *paddr); -int sctp_verify_init(struct net *net, const struct sctp_endpoint *ep, - const struct sctp_association *asoc, +int sctp_verify_init(struct net *net, const struct sctp_association *asoc, sctp_cid_t, sctp_init_chunk_t *peer_init, struct sctp_chunk *chunk, struct sctp_chunk **err_chunk); int sctp_process_init(struct sctp_association *, struct sctp_chunk *chunk, diff --git a/include/net/secure_seq.h b/include/net/secure_seq.h index b1c3d1c63c4..c2e542b27a5 100644 --- a/include/net/secure_seq.h +++ b/include/net/secure_seq.h @@ -3,6 +3,8 @@ #include <linux/types.h> +extern __u32 secure_ip_id(__be32 daddr); +extern __u32 secure_ipv6_id(const __be32 daddr[4]); extern u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport); extern u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr, __be16 dport); diff --git a/include/net/sock.h b/include/net/sock.h index e8310f5ff9d..0cfdf67f90a 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -934,6 +934,7 @@ struct proto { struct sk_buff *skb); void (*release_cb)(struct sock *sk); + void (*mtu_reduced)(struct sock *sk); /* Keeping track of sk's, looking them up, and port selection methods. */ void (*hash)(struct sock *sk); @@ -1438,11 +1439,6 @@ static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb) */ #define sock_owned_by_user(sk) ((sk)->sk_lock.owned) -static inline void sock_release_ownership(struct sock *sk) -{ - sk->sk_lock.owned = 0; -} - /* * Macro so as to not evaluate some arguments when * lockdep is not enabled. @@ -1730,8 +1726,8 @@ sk_dst_get(struct sock *sk) rcu_read_lock(); dst = rcu_dereference(sk->sk_dst_cache); - if (dst && !atomic_inc_not_zero(&dst->__refcnt)) - dst = NULL; + if (dst) + dst_hold(dst); rcu_read_unlock(); return dst; } @@ -1770,11 +1766,9 @@ __sk_dst_set(struct sock *sk, struct dst_entry *dst) static inline void sk_dst_set(struct sock *sk, struct dst_entry *dst) { - struct dst_entry *old_dst; - - sk_tx_queue_clear(sk); - old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst); - dst_release(old_dst); + spin_lock(&sk->sk_dst_lock); + __sk_dst_set(sk, dst); + spin_unlock(&sk->sk_dst_lock); } static inline void @@ -1786,7 +1780,9 @@ __sk_dst_reset(struct sock *sk) static inline void sk_dst_reset(struct sock *sk) { - sk_dst_set(sk, NULL); + spin_lock(&sk->sk_dst_lock); + __sk_dst_reset(sk); + spin_unlock(&sk->sk_dst_lock); } extern struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie); @@ -2251,11 +2247,6 @@ extern void sock_enable_timestamp(struct sock *sk, int flag); extern int sock_get_timestamp(struct sock *, struct timeval __user *); extern int sock_get_timestampns(struct sock *, struct timespec __user *); -bool sk_ns_capable(const struct sock *sk, - struct user_namespace *user_ns, int cap); -bool sk_capable(const struct sock *sk, int cap); -bool sk_net_capable(const struct sock *sk, int cap); - /* * Enable debug/info messages */ diff --git a/include/net/tcp.h b/include/net/tcp.h index e0fc2135758..c10bd7a3349 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -461,7 +461,6 @@ extern const u8 *tcp_parse_md5sig_option(const struct tcphdr *th); */ extern void tcp_v4_send_check(struct sock *sk, struct sk_buff *skb); -void tcp_v4_mtu_reduced(struct sock *sk); extern int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb); extern struct 
sock * tcp_create_openreq_child(struct sock *sk, struct request_sock *req, @@ -1310,8 +1309,7 @@ struct tcp_fastopen_request { /* Fast Open cookie. Size 0 means a cookie request */ struct tcp_fastopen_cookie cookie; struct msghdr *data; /* data in MSG_FASTOPEN */ - size_t size; - int copied; /* queued in tcp_connect() */ + u16 copied; /* queued in tcp_connect() */ }; void tcp_free_fastopen_req(struct tcp_sock *tp); diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h index cc92ef3df62..cc645876d14 100644 --- a/include/scsi/scsi_device.h +++ b/include/scsi/scsi_device.h @@ -248,7 +248,7 @@ struct scsi_target { struct list_head siblings; struct list_head devices; struct device dev; - struct kref reap_ref; /* last put renders target invisible */ + unsigned int reap_ref; /* protected by the host lock */ unsigned int channel; unsigned int id; /* target id ... replace * scsi_device.id eventually */ @@ -272,6 +272,7 @@ struct scsi_target { #define SCSI_DEFAULT_TARGET_BLOCKED 3 char scsi_level; + struct execute_work ew; enum scsi_target_state state; void *hostdata; /* available to low-level driver */ unsigned long starget_data[0]; /* for the transport */ diff --git a/include/sound/core.h b/include/sound/core.h index 97cd9c3592f..5bfe5136441 100644 --- a/include/sound/core.h +++ b/include/sound/core.h @@ -120,8 +120,6 @@ struct snd_card { int user_ctl_count; /* count of all user controls */ struct list_head controls; /* all controls for this card */ struct list_head ctl_files; /* active control files */ - struct mutex user_ctl_lock; /* protects user controls against - concurrent access */ struct snd_info_entry *proc_root; /* root for soundcard specific files */ struct snd_info_entry *proc_id; /* the card id */ diff --git a/include/target/iscsi/iscsi_transport.h b/include/target/iscsi/iscsi_transport.h index 4a5f00e2e6c..c5aade52386 100644 --- a/include/target/iscsi/iscsi_transport.h +++ b/include/target/iscsi/iscsi_transport.h @@ -11,7 +11,6 @@ struct iscsit_transport { int (*iscsit_setup_np)(struct iscsi_np *, struct __kernel_sockaddr_storage *); int (*iscsit_accept_np)(struct iscsi_np *, struct iscsi_conn *); void (*iscsit_free_np)(struct iscsi_np *); - void (*iscsit_wait_conn)(struct iscsi_conn *); void (*iscsit_free_conn)(struct iscsi_conn *); struct iscsi_cmd *(*iscsit_alloc_cmd)(struct iscsi_conn *, gfp_t); int (*iscsit_get_login_rx)(struct iscsi_conn *, struct iscsi_login *); diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h index a63529ab9fd..ffa2696d64d 100644 --- a/include/target/target_core_backend.h +++ b/include/target/target_core_backend.h @@ -50,7 +50,6 @@ int transport_subsystem_register(struct se_subsystem_api *); void transport_subsystem_release(struct se_subsystem_api *); void target_complete_cmd(struct se_cmd *, u8); -void target_complete_cmd_with_length(struct se_cmd *, u8, int); sense_reason_t spc_parse_cdb(struct se_cmd *cmd, unsigned int *size); sense_reason_t spc_emulate_report_luns(struct se_cmd *cmd); diff --git a/include/trace/events/block.h b/include/trace/events/block.h index 2e96e2bb152..60ae7c3db91 100644 --- a/include/trace/events/block.h +++ b/include/trace/events/block.h @@ -132,7 +132,6 @@ DEFINE_EVENT(block_rq_with_error, block_rq_requeue, * block_rq_complete - block IO operation completed by device driver * @q: queue containing the block operation request * @rq: block operations request - * @nr_bytes: number of completed bytes * * The block_rq_complete tracepoint event indicates that some portion * of 
operation request has been completed by the device driver. If @@ -140,37 +139,11 @@ DEFINE_EVENT(block_rq_with_error, block_rq_requeue, * do for the request. If @rq->bio is non-NULL then there is * additional work required to complete the request. */ -TRACE_EVENT(block_rq_complete, +DEFINE_EVENT(block_rq_with_error, block_rq_complete, - TP_PROTO(struct request_queue *q, struct request *rq, - unsigned int nr_bytes), - - TP_ARGS(q, rq, nr_bytes), - - TP_STRUCT__entry( - __field( dev_t, dev ) - __field( sector_t, sector ) - __field( unsigned int, nr_sector ) - __field( int, errors ) - __array( char, rwbs, RWBS_LEN ) - __dynamic_array( char, cmd, blk_cmd_buf_len(rq) ) - ), - - TP_fast_assign( - __entry->dev = rq->rq_disk ? disk_devt(rq->rq_disk) : 0; - __entry->sector = blk_rq_pos(rq); - __entry->nr_sector = nr_bytes >> 9; - __entry->errors = rq->errors; - - blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, nr_bytes); - blk_dump_cmd(__get_str(cmd), rq); - ), + TP_PROTO(struct request_queue *q, struct request *rq), - TP_printk("%d,%d %s (%s) %llu + %u [%d]", - MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->rwbs, __get_str(cmd), - (unsigned long long)__entry->sector, - __entry->nr_sector, __entry->errors) + TP_ARGS(q, rq) ); DECLARE_EVENT_CLASS(block_rq, diff --git a/include/trace/events/module.h b/include/trace/events/module.h index ca298c7157a..16193273741 100644 --- a/include/trace/events/module.h +++ b/include/trace/events/module.h @@ -78,7 +78,7 @@ DECLARE_EVENT_CLASS(module_refcnt, TP_fast_assign( __entry->ip = ip; - __entry->refcnt = __this_cpu_read(mod->refptr->incs) - __this_cpu_read(mod->refptr->decs); + __entry->refcnt = __this_cpu_read(mod->refptr->incs) + __this_cpu_read(mod->refptr->decs); __assign_str(name, mod->name); ), diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h index dbb47418df8..66dba42128d 100644 --- a/include/trace/ftrace.h +++ b/include/trace/ftrace.h @@ -299,12 +299,15 @@ static struct trace_event_functions ftrace_event_type_funcs_##call = { \ #undef __array #define __array(type, item, len) \ do { \ - char *type_str = #type"["__stringify(len)"]"; \ + mutex_lock(&event_storage_mutex); \ BUILD_BUG_ON(len > MAX_FILTER_STR_VAL); \ - ret = trace_define_field(event_call, type_str, #item, \ + snprintf(event_storage, sizeof(event_storage), \ + "%s[%d]", #type, len); \ + ret = trace_define_field(event_call, event_storage, #item, \ offsetof(typeof(field), item), \ sizeof(field.item), \ is_signed_type(type), FILTER_OTHER); \ + mutex_unlock(&event_storage_mutex); \ if (ret) \ return ret; \ } while (0); diff --git a/include/trace/syscall.h b/include/trace/syscall.h index 0a5b4952aa3..84bc4197e73 100644 --- a/include/trace/syscall.h +++ b/include/trace/syscall.h @@ -4,7 +4,6 @@ #include <linux/tracepoint.h> #include <linux/unistd.h> #include <linux/ftrace_event.h> -#include <linux/thread_info.h> #include <asm/ptrace.h> @@ -32,18 +31,4 @@ struct syscall_metadata { struct ftrace_event_call *exit_event; }; -#if defined(CONFIG_TRACEPOINTS) && defined(CONFIG_HAVE_SYSCALL_TRACEPOINTS) -static inline void syscall_tracepoint_update(struct task_struct *p) -{ - if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) - set_tsk_thread_flag(p, TIF_SYSCALL_TRACEPOINT); - else - clear_tsk_thread_flag(p, TIF_SYSCALL_TRACEPOINT); -} -#else -static inline void syscall_tracepoint_update(struct task_struct *p) -{ -} -#endif - #endif /* _TRACE_SYSCALL_H */ diff --git a/include/uapi/drm/tegra_drm.h b/include/uapi/drm/tegra_drm.h index 86b1f9942d0..6e132a2f742 100644 --- a/include/uapi/drm/tegra_drm.h 
+++ b/include/uapi/drm/tegra_drm.h @@ -103,6 +103,7 @@ struct drm_tegra_submit { __u32 num_waitchks; __u32 waitchk_mask; __u32 timeout; + __u32 pad; __u64 syncpts; __u64 cmdbufs; __u64 relocs; diff --git a/include/uapi/linux/usb/Kbuild b/include/uapi/linux/usb/Kbuild index 4cc4d6e7e52..6cb4ea82683 100644 --- a/include/uapi/linux/usb/Kbuild +++ b/include/uapi/linux/usb/Kbuild @@ -1,7 +1,6 @@ # UAPI Header export list header-y += audio.h header-y += cdc.h -header-y += cdc-wdm.h header-y += ch11.h header-y += ch9.h header-y += functionfs.h diff --git a/include/uapi/linux/usb/cdc-wdm.h b/include/uapi/linux/usb/cdc-wdm.h index 0dc132e7503..f03134feebd 100644 --- a/include/uapi/linux/usb/cdc-wdm.h +++ b/include/uapi/linux/usb/cdc-wdm.h @@ -9,8 +9,6 @@ #ifndef _UAPI__LINUX_USB_CDC_WDM_H #define _UAPI__LINUX_USB_CDC_WDM_H -#include <linux/types.h> - /* * This IOCTL is used to retrieve the wMaxCommand for the device, * defining the message limit for both reading and writing. diff --git a/include/uapi/sound/compress_offload.h b/include/uapi/sound/compress_offload.h index 21eed488783..5759810e1c1 100644 --- a/include/uapi/sound/compress_offload.h +++ b/include/uapi/sound/compress_offload.h @@ -80,7 +80,7 @@ struct snd_compr_tstamp { struct snd_compr_avail { __u64 avail; struct snd_compr_tstamp tstamp; -} __attribute__((packed)); +}; enum snd_compr_direction { SND_COMPRESS_PLAYBACK = 0, diff --git a/init/Kconfig b/init/Kconfig index d8e0071197a..56de5459c0e 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1389,14 +1389,6 @@ config FUTEX support for "fast userspace mutexes". The resulting kernel may not run glibc-based applications correctly. -config HAVE_FUTEX_CMPXCHG - bool - depends on FUTEX - help - Architectures should select this if futex_atomic_cmpxchg_inatomic() - is implemented and always working. This removes a couple of runtime - checks. 
- config EPOLL bool "Enable eventpoll support" if EXPERT default y diff --git a/init/main.c b/init/main.c index 9887e75a6e0..c22e93f8cff 100644 --- a/init/main.c +++ b/init/main.c @@ -616,10 +616,6 @@ asmlinkage void __init start_kernel(void) if (efi_enabled(EFI_RUNTIME_SERVICES)) efi_enter_virtual_mode(); #endif -#ifdef CONFIG_X86_ESPFIX64 - /* Should be run before the first non-init thread is created */ - init_espfix_bsp(); -#endif thread_info_cache_init(); cred_init(); fork_init(totalram_pages); diff --git a/ipc/msg.c b/ipc/msg.c index 52770bfde2a..558aa91186b 100644 --- a/ipc/msg.c +++ b/ipc/msg.c @@ -885,8 +885,6 @@ long do_msgrcv(int msqid, void __user *buf, size_t bufsz, long msgtyp, int msgfl return -EINVAL; if (msgflg & MSG_COPY) { - if ((msgflg & MSG_EXCEPT) || !(msgflg & IPC_NOWAIT)) - return -EINVAL; copy = prepare_copy(buf, min_t(size_t, bufsz, ns->msg_ctlmax)); if (IS_ERR(copy)) return PTR_ERR(copy); diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks index e4d30533c56..44511d100ea 100644 --- a/kernel/Kconfig.locks +++ b/kernel/Kconfig.locks @@ -220,9 +220,6 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE endif -config ARCH_SUPPORTS_ATOMIC_RMW - bool - config MUTEX_SPIN_ON_OWNER def_bool y - depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW + depends on SMP && !DEBUG_MUTEXES diff --git a/kernel/audit.c b/kernel/audit.c index 4dd7529b084..6def25f1b35 100644 --- a/kernel/audit.c +++ b/kernel/audit.c @@ -593,13 +593,13 @@ static int audit_netlink_ok(struct sk_buff *skb, u16 msg_type) case AUDIT_TTY_SET: case AUDIT_TRIM: case AUDIT_MAKE_EQUIV: - if (!netlink_capable(skb, CAP_AUDIT_CONTROL)) + if (!capable(CAP_AUDIT_CONTROL)) err = -EPERM; break; case AUDIT_USER: case AUDIT_FIRST_USER_MSG ... AUDIT_LAST_USER_MSG: case AUDIT_FIRST_USER_MSG2 ... 
AUDIT_LAST_USER_MSG2: - if (!netlink_capable(skb, CAP_AUDIT_WRITE)) + if (!capable(CAP_AUDIT_WRITE)) err = -EPERM; break; default: /* bad msg */ @@ -1412,7 +1412,7 @@ void audit_log_cap(struct audit_buffer *ab, char *prefix, kernel_cap_t *cap) audit_log_format(ab, " %s=", prefix); CAP_FOR_EACH_U32(i) { audit_log_format(ab, "%08x", - cap->cap[CAP_LAST_U32 - i]); + cap->cap[(_KERNEL_CAPABILITY_U32S-1) - i]); } } diff --git a/kernel/auditsc.c b/kernel/auditsc.c index 03a3af8538b..9845cb32b60 100644 --- a/kernel/auditsc.c +++ b/kernel/auditsc.c @@ -733,22 +733,6 @@ static enum audit_state audit_filter_task(struct task_struct *tsk, char **key) return AUDIT_BUILD_CONTEXT; } -static int audit_in_mask(const struct audit_krule *rule, unsigned long val) -{ - int word, bit; - - if (val > 0xffffffff) - return false; - - word = AUDIT_WORD(val); - if (word >= AUDIT_BITMASK_SIZE) - return false; - - bit = AUDIT_BIT(val); - - return rule->mask[word] & bit; -} - /* At syscall entry and exit time, this filter is called if the * audit_state is not low enough that auditing cannot take place, but is * also not high enough that we already know we have to write an audit @@ -766,8 +750,11 @@ static enum audit_state audit_filter_syscall(struct task_struct *tsk, rcu_read_lock(); if (!list_empty(list)) { + int word = AUDIT_WORD(ctx->major); + int bit = AUDIT_BIT(ctx->major); + list_for_each_entry_rcu(e, list, list) { - if (audit_in_mask(&e->rule, ctx->major) && + if ((e->rule.mask[word] & bit) == bit && audit_filter_rules(tsk, &e->rule, ctx, NULL, &state, false)) { rcu_read_unlock(); @@ -787,16 +774,20 @@ static enum audit_state audit_filter_syscall(struct task_struct *tsk, static int audit_filter_inode_name(struct task_struct *tsk, struct audit_names *n, struct audit_context *ctx) { + int word, bit; int h = audit_hash_ino((u32)n->ino); struct list_head *list = &audit_inode_hash[h]; struct audit_entry *e; enum audit_state state; + word = AUDIT_WORD(ctx->major); + bit = AUDIT_BIT(ctx->major); + if (list_empty(list)) return 0; list_for_each_entry_rcu(e, list, list) { - if (audit_in_mask(&e->rule, ctx->major) && + if ((e->rule.mask[word] & bit) == bit && audit_filter_rules(tsk, &e->rule, ctx, n, &state, false)) { ctx->current_state = state; return 1; diff --git a/kernel/capability.c b/kernel/capability.c index 1339806a873..f6c2ce5701e 100644 --- a/kernel/capability.c +++ b/kernel/capability.c @@ -268,10 +268,6 @@ SYSCALL_DEFINE2(capset, cap_user_header_t, header, const cap_user_data_t, data) i++; } - effective.cap[CAP_LAST_U32] &= CAP_LAST_U32_VALID_MASK; - permitted.cap[CAP_LAST_U32] &= CAP_LAST_U32_VALID_MASK; - inheritable.cap[CAP_LAST_U32] &= CAP_LAST_U32_VALID_MASK; - new = prepare_creds(); if (!new) return -ENOMEM; @@ -449,18 +445,22 @@ bool nsown_capable(int cap) } /** - * capable_wrt_inode_uidgid - Check nsown_capable and uid and gid mapped + * inode_capable - Check superior capability over inode * @inode: The inode in question * @cap: The capability in question * - * Return true if the current task has the given capability targeted at - * its own user namespace and that the given inode's uid and gid are - * mapped into the current user namespace. + * Return true if the current task has the given superior capability + * targeted at it's own user namespace and that the given inode is owned + * by the current user namespace or a child namespace. + * + * Currently we check to see if an inode is owned by the current + * user namespace by seeing if the inode's owner maps into the + * current user namespace. 
+ * */ -bool capable_wrt_inode_uidgid(const struct inode *inode, int cap) +bool inode_capable(const struct inode *inode, int cap) { struct user_namespace *ns = current_user_ns(); - return ns_capable(ns, cap) && kuid_has_mapping(ns, inode->i_uid) && - kgid_has_mapping(ns, inode->i_gid); + return ns_capable(ns, cap) && kuid_has_mapping(ns, inode->i_uid); } diff --git a/kernel/cpu.c b/kernel/cpu.c index d0c6432831f..9dd31fafaa8 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -705,12 +705,10 @@ void set_cpu_present(unsigned int cpu, bool present) void set_cpu_online(unsigned int cpu, bool online) { - if (online) { + if (online) cpumask_set_cpu(cpu, to_cpumask(cpu_online_bits)); - cpumask_set_cpu(cpu, to_cpumask(cpu_active_bits)); - } else { + else cpumask_clear_cpu(cpu, to_cpumask(cpu_online_bits)); - } } void set_cpu_active(unsigned int cpu, bool active) diff --git a/kernel/cpuset.c b/kernel/cpuset.c index 067750bbdad..d313870dcd0 100644 --- a/kernel/cpuset.c +++ b/kernel/cpuset.c @@ -1153,13 +1153,7 @@ done: int current_cpuset_is_being_rebound(void) { - int ret; - - rcu_read_lock(); - ret = task_cs(current) == cpuset_being_rebound; - rcu_read_unlock(); - - return ret; + return task_cs(current) == cpuset_being_rebound; } static int update_relax_domain_level(struct cpuset *cs, s64 val) @@ -2428,9 +2422,9 @@ int __cpuset_node_allowed_softwall(int node, gfp_t gfp_mask) task_lock(current); cs = nearest_hardwall_ancestor(task_cs(current)); - allowed = node_isset(node, cs->mems_allowed); task_unlock(current); + allowed = node_isset(node, cs->mems_allowed); mutex_unlock(&callback_mutex); return allowed; } diff --git a/kernel/events/core.c b/kernel/events/core.c index 0b473344715..f8eb2b154bd 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -165,109 +165,25 @@ int sysctl_perf_event_mlock __read_mostly = 512 + (PAGE_SIZE / 1024); /* 'free' /* * max perf event sample rate */ -#define DEFAULT_MAX_SAMPLE_RATE 100000 -#define DEFAULT_SAMPLE_PERIOD_NS (NSEC_PER_SEC / DEFAULT_MAX_SAMPLE_RATE) -#define DEFAULT_CPU_TIME_MAX_PERCENT 25 - -int sysctl_perf_event_sample_rate __read_mostly = DEFAULT_MAX_SAMPLE_RATE; - -static int max_samples_per_tick __read_mostly = DIV_ROUND_UP(DEFAULT_MAX_SAMPLE_RATE, HZ); -static int perf_sample_period_ns __read_mostly = DEFAULT_SAMPLE_PERIOD_NS; - -static atomic_t perf_sample_allowed_ns __read_mostly = - ATOMIC_INIT( DEFAULT_SAMPLE_PERIOD_NS * DEFAULT_CPU_TIME_MAX_PERCENT / 100); - -void update_perf_cpu_limits(void) -{ - u64 tmp = perf_sample_period_ns; - - tmp *= sysctl_perf_cpu_time_max_percent; - do_div(tmp, 100); - atomic_set(&perf_sample_allowed_ns, tmp); -} +#define DEFAULT_MAX_SAMPLE_RATE 100000 +int sysctl_perf_event_sample_rate __read_mostly = DEFAULT_MAX_SAMPLE_RATE; +static int max_samples_per_tick __read_mostly = + DIV_ROUND_UP(DEFAULT_MAX_SAMPLE_RATE, HZ); int perf_proc_update_handler(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { - int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); - - if (ret || !write) - return ret; - - max_samples_per_tick = DIV_ROUND_UP(sysctl_perf_event_sample_rate, HZ); - perf_sample_period_ns = NSEC_PER_SEC / sysctl_perf_event_sample_rate; - update_perf_cpu_limits(); - - return 0; -} - -int sysctl_perf_cpu_time_max_percent __read_mostly = DEFAULT_CPU_TIME_MAX_PERCENT; - -int perf_cpu_time_max_percent_handler(struct ctl_table *table, int write, - void __user *buffer, size_t *lenp, - loff_t *ppos) -{ int ret = proc_dointvec(table, write, buffer, lenp, ppos); if (ret || 
!write) return ret; - update_perf_cpu_limits(); + max_samples_per_tick = DIV_ROUND_UP(sysctl_perf_event_sample_rate, HZ); return 0; } -/* - * perf samples are done in some very critical code paths (NMIs). - * If they take too much CPU time, the system can lock up and not - * get any real work done. This will drop the sample rate when - * we detect that events are taking too long. - */ -#define NR_ACCUMULATED_SAMPLES 128 -DEFINE_PER_CPU(u64, running_sample_length); - -void perf_sample_event_took(u64 sample_len_ns) -{ - u64 avg_local_sample_len; - u64 local_samples_len; - - if (atomic_read(&perf_sample_allowed_ns) == 0) - return; - - /* decay the counter by 1 average sample */ - local_samples_len = __get_cpu_var(running_sample_length); - local_samples_len -= local_samples_len/NR_ACCUMULATED_SAMPLES; - local_samples_len += sample_len_ns; - __get_cpu_var(running_sample_length) = local_samples_len; - - /* - * note: this will be biased artifically low until we have - * seen NR_ACCUMULATED_SAMPLES. Doing it this way keeps us - * from having to maintain a count. - */ - avg_local_sample_len = local_samples_len/NR_ACCUMULATED_SAMPLES; - - if (avg_local_sample_len <= atomic_read(&perf_sample_allowed_ns)) - return; - - if (max_samples_per_tick <= 1) - return; - - max_samples_per_tick = DIV_ROUND_UP(max_samples_per_tick, 2); - sysctl_perf_event_sample_rate = max_samples_per_tick * HZ; - perf_sample_period_ns = NSEC_PER_SEC / sysctl_perf_event_sample_rate; - - printk_ratelimited(KERN_WARNING - "perf samples too long (%lld > %d), lowering " - "kernel.perf_event_max_sample_rate to %d\n", - avg_local_sample_len, - atomic_read(&perf_sample_allowed_ns), - sysctl_perf_event_sample_rate); - - update_perf_cpu_limits(); -} - static atomic64_t perf_event_id; static void cpu_ctx_sched_out(struct perf_cpu_context *cpuctx, @@ -1321,11 +1237,6 @@ group_sched_out(struct perf_event *group_event, cpuctx->exclusive = 0; } -struct remove_event { - struct perf_event *event; - bool detach_group; -}; - /* * Cross CPU call to remove a performance event * @@ -1334,15 +1245,12 @@ struct remove_event { */ static int __perf_remove_from_context(void *info) { - struct remove_event *re = info; - struct perf_event *event = re->event; + struct perf_event *event = info; struct perf_event_context *ctx = event->ctx; struct perf_cpu_context *cpuctx = __get_cpu_context(ctx); raw_spin_lock(&ctx->lock); event_sched_out(event, cpuctx, ctx); - if (re->detach_group) - perf_group_detach(event); list_del_event(event, ctx); if (!ctx->nr_events && cpuctx->task_ctx == ctx) { ctx->is_active = 0; @@ -1367,14 +1275,10 @@ static int __perf_remove_from_context(void *info) * When called from perf_event_exit_task, it's OK because the * context has been detached from its task. */ -static void perf_remove_from_context(struct perf_event *event, bool detach_group) +static void perf_remove_from_context(struct perf_event *event) { struct perf_event_context *ctx = event->ctx; struct task_struct *task = ctx->task; - struct remove_event re = { - .event = event, - .detach_group = detach_group, - }; lockdep_assert_held(&ctx->mutex); @@ -1383,12 +1287,12 @@ static void perf_remove_from_context(struct perf_event *event, bool detach_group * Per cpu events are removed via an smp call and * the removal is always successful. 
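The perf_sample_event_took() code removed above throttles sampling by keeping a decayed sum of recent sample lengths and halving the allowed rate when the average exceeds the budget. A minimal userspace sketch of that accumulate, decay, and halve scheme, with made-up constants rather than the kernel's tunables:

#include <stdint.h>
#include <stdio.h>

#define NR_ACCUMULATED_SAMPLES 128

static uint64_t running_len;            /* decayed sum of sample lengths, ns */
static int max_samples_per_tick = 400;  /* assumed starting rate */

static void sample_took(uint64_t len_ns, uint64_t allowed_ns)
{
    running_len -= running_len / NR_ACCUMULATED_SAMPLES;  /* decay one average sample */
    running_len += len_ns;

    uint64_t avg = running_len / NR_ACCUMULATED_SAMPLES;
    if (avg <= allowed_ns || max_samples_per_tick <= 1)
        return;

    max_samples_per_tick = (max_samples_per_tick + 1) / 2;  /* DIV_ROUND_UP(x, 2) */
    fprintf(stderr, "samples too long (%llu > %llu), rate lowered to %d/tick\n",
            (unsigned long long)avg, (unsigned long long)allowed_ns,
            max_samples_per_tick);
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        sample_took(50000, 25000);      /* 50us samples against a 25us budget */
    return 0;
}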
*/ - cpu_function_call(event->cpu, __perf_remove_from_context, &re); + cpu_function_call(event->cpu, __perf_remove_from_context, event); return; } retry: - if (!task_function_call(task, __perf_remove_from_context, &re)) + if (!task_function_call(task, __perf_remove_from_context, event)) return; raw_spin_lock_irq(&ctx->lock); @@ -1398,11 +1302,6 @@ retry: */ if (ctx->is_active) { raw_spin_unlock_irq(&ctx->lock); - /* - * Reload the task pointer, it might have been changed by - * a concurrent perf_event_context_sched_out(). - */ - task = ctx->task; goto retry; } @@ -1410,8 +1309,6 @@ retry: * Since the task isn't running, its safe to remove the event, us * holding the ctx->lock ensures the task won't get scheduled in. */ - if (detach_group) - perf_group_detach(event); list_del_event(event, ctx); raw_spin_unlock_irq(&ctx->lock); } @@ -1834,11 +1731,6 @@ retry: */ if (ctx->is_active) { raw_spin_unlock_irq(&ctx->lock); - /* - * Reload the task pointer, it might have been changed by - * a concurrent perf_event_context_sched_out(). - */ - task = ctx->task; goto retry; } @@ -2124,6 +2016,9 @@ static void __perf_event_sync_stat(struct perf_event *event, perf_event_update_userpage(next_event); } +#define list_next_entry(pos, member) \ + list_entry(pos->member.next, typeof(*pos), member) + static void perf_event_sync_stat(struct perf_event_context *ctx, struct perf_event_context *next_ctx) { @@ -3123,7 +3018,10 @@ int perf_event_release_kernel(struct perf_event *event) * to trigger the AB-BA case. */ mutex_lock_nested(&ctx->mutex, SINGLE_DEPTH_NESTING); - perf_remove_from_context(event, true); + raw_spin_lock_irq(&ctx->lock); + perf_group_detach(event); + raw_spin_unlock_irq(&ctx->lock); + perf_remove_from_context(event); mutex_unlock(&ctx->mutex); free_event(event); @@ -5149,9 +5047,6 @@ struct swevent_htable { /* Recursion avoidance in each contexts */ int recursion[PERF_NR_CONTEXTS]; - - /* Keeps track of cpu being initialized/exited */ - bool online; }; static DEFINE_PER_CPU(struct swevent_htable, swevent_htable); @@ -5398,14 +5293,8 @@ static int perf_swevent_add(struct perf_event *event, int flags) hwc->state = !(flags & PERF_EF_START); head = find_swevent_head(swhash, event); - if (!head) { - /* - * We can race with cpu hotplug code. Do not - * WARN if the cpu just got unplugged. 
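The kernel/events/core.c hunk re-adds a local list_next_entry() macro. The sketch below is a standalone re-derivation of what it does: step to the next node of an intrusive list and recover the containing structure via offsetof arithmetic.

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member)  container_of(ptr, type, member)
#define list_next_entry(pos, member) \
    list_entry((pos)->member.next, typeof(*(pos)), member)

struct item { int value; struct list_head node; };

int main(void)
{
    struct item a = { .value = 1 }, b = { .value = 2 };

    /* hand-built two-element circular list: a -> b -> a */
    a.node.next = &b.node; a.node.prev = &b.node;
    b.node.next = &a.node; b.node.prev = &a.node;

    struct item *next = list_next_entry(&a, node);
    printf("after %d comes %d\n", a.value, next->value);  /* "after 1 comes 2" */
    return 0;
}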
- */ - WARN_ON_ONCE(swhash->online); + if (WARN_ON_ONCE(!head)) return -EINVAL; - } hlist_add_head_rcu(&event->hlist_entry, head); @@ -6695,9 +6584,6 @@ SYSCALL_DEFINE5(perf_event_open, if (attr.freq) { if (attr.sample_freq > sysctl_perf_event_sample_rate) return -EINVAL; - } else { - if (attr.sample_period & (1ULL << 63)) - return -EINVAL; } /* @@ -6844,7 +6730,7 @@ SYSCALL_DEFINE5(perf_event_open, struct perf_event_context *gctx = group_leader->ctx; mutex_lock(&gctx->mutex); - perf_remove_from_context(group_leader, false); + perf_remove_from_context(group_leader); /* * Removing from the context ends up with disabled @@ -6854,7 +6740,7 @@ SYSCALL_DEFINE5(perf_event_open, perf_event__state_init(group_leader); list_for_each_entry(sibling, &group_leader->sibling_list, group_entry) { - perf_remove_from_context(sibling, false); + perf_remove_from_context(sibling); perf_event__state_init(sibling); put_ctx(gctx); } @@ -6984,7 +6870,7 @@ void perf_pmu_migrate_context(struct pmu *pmu, int src_cpu, int dst_cpu) mutex_lock(&src_ctx->mutex); list_for_each_entry_safe(event, tmp, &src_ctx->event_list, event_entry) { - perf_remove_from_context(event, false); + perf_remove_from_context(event); put_ctx(src_ctx); list_add(&event->event_entry, &events); } @@ -7044,7 +6930,13 @@ __perf_event_exit_task(struct perf_event *child_event, struct perf_event_context *child_ctx, struct task_struct *child) { - perf_remove_from_context(child_event, !!child_event->parent); + if (child_event->parent) { + raw_spin_lock_irq(&child_ctx->lock); + perf_group_detach(child_event); + raw_spin_unlock_irq(&child_ctx->lock); + } + + perf_remove_from_context(child_event); /* * It can happen that the parent exits first, and has events @@ -7482,10 +7374,8 @@ int perf_event_init_task(struct task_struct *child) for_each_task_context_nr(ctxn) { ret = perf_event_init_context(child, ctxn); - if (ret) { - perf_event_free_task(child); + if (ret) return ret; - } } return 0; @@ -7508,7 +7398,6 @@ static void __cpuinit perf_event_init_cpu(int cpu) struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu); mutex_lock(&swhash->hlist_mutex); - swhash->online = true; if (swhash->hlist_refcount > 0) { struct swevent_hlist *hlist; @@ -7531,14 +7420,14 @@ static void perf_pmu_rotate_stop(struct pmu *pmu) static void __perf_event_exit_context(void *__info) { - struct remove_event re = { .detach_group = false }; struct perf_event_context *ctx = __info; + struct perf_event *event; perf_pmu_rotate_stop(ctx->pmu); rcu_read_lock(); - list_for_each_entry_rcu(re.event, &ctx->event_list, event_entry) - __perf_remove_from_context(&re); + list_for_each_entry_rcu(event, &ctx->event_list, event_entry) + __perf_remove_from_context(event); rcu_read_unlock(); } @@ -7566,7 +7455,6 @@ static void perf_event_exit_cpu(int cpu) perf_event_exit_cpu_context(cpu); mutex_lock(&swhash->hlist_mutex); - swhash->online = false; swevent_hlist_release(swhash); mutex_unlock(&swhash->hlist_mutex); } diff --git a/kernel/exit.c b/kernel/exit.c index 33fde71b83d..6a057750ebb 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -74,7 +74,6 @@ static void __unhash_process(struct task_struct *p, bool group_dead) __this_cpu_dec(process_counts); } list_del_rcu(&p->thread_group); - list_del_rcu(&p->thread_node); } /* @@ -571,6 +570,9 @@ static void reparent_leader(struct task_struct *father, struct task_struct *p, struct list_head *dead) { list_move_tail(&p->sibling, &p->real_parent->children); + + if (p->exit_state == EXIT_DEAD) + return; /* * If this is a threaded reparent there is no need 
to * notify anyone anything has happened. @@ -578,19 +580,9 @@ static void reparent_leader(struct task_struct *father, struct task_struct *p, if (same_thread_group(p->real_parent, father)) return; - /* - * We don't want people slaying init. - * - * Note: we do this even if it is EXIT_DEAD, wait_task_zombie() - * can change ->exit_state to EXIT_ZOMBIE. If this is the final - * state, do_notify_parent() was already called and ->exit_signal - * doesn't matter. - */ + /* We don't want people slaying init. */ p->exit_signal = SIGCHLD; - if (p->exit_state == EXIT_DEAD) - return; - /* If it has exited notify the new parent about this child's death. */ if (!p->ptrace && p->exit_state == EXIT_ZOMBIE && thread_group_empty(p)) { @@ -802,8 +794,6 @@ void do_exit(long code) exit_shm(tsk); exit_files(tsk); exit_fs(tsk); - if (group_dead) - disassociate_ctty(1); exit_task_namespaces(tsk); exit_task_work(tsk); check_stack_usage(); @@ -819,9 +809,13 @@ void do_exit(long code) cgroup_exit(tsk, 1); + if (group_dead) + disassociate_ctty(1); + module_put(task_thread_info(tsk)->exec_domain->module); proc_exit_connector(tsk); + /* * FIXME: do that only when needed, using sched_exit tracepoint */ diff --git a/kernel/fork.c b/kernel/fork.c index ccc4044e6cc..0aa1bb5c8d6 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1062,11 +1062,6 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) sig->nr_threads = 1; atomic_set(&sig->live, 1); atomic_set(&sig->sigcnt, 1); - - /* list_add(thread_node, thread_head) without INIT_LIST_HEAD() */ - sig->thread_head = (struct list_head)LIST_HEAD_INIT(tsk->thread_node); - tsk->thread_node = (struct list_head)LIST_HEAD_INIT(sig->thread_head); - init_waitqueue_head(&sig->wait_chldexit); sig->curr_target = tsk; init_sigpending(&sig->shared_pending); @@ -1341,7 +1336,7 @@ static struct task_struct *copy_process(unsigned long clone_flags, goto bad_fork_cleanup_policy; retval = audit_alloc(p); if (retval) - goto bad_fork_cleanup_perf; + goto bad_fork_cleanup_policy; /* copy all the process information */ retval = copy_semundo(clone_flags, p); if (retval) @@ -1470,6 +1465,14 @@ static struct task_struct *copy_process(unsigned long clone_flags, goto bad_fork_free_pid; } + if (clone_flags & CLONE_THREAD) { + current->signal->nr_threads++; + atomic_inc(¤t->signal->live); + atomic_inc(¤t->signal->sigcnt); + p->group_leader = current->group_leader; + list_add_tail_rcu(&p->thread_group, &p->group_leader->thread_group); + } + if (likely(p->pid)) { ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace); @@ -1486,15 +1489,6 @@ static struct task_struct *copy_process(unsigned long clone_flags, list_add_tail(&p->sibling, &p->real_parent->children); list_add_tail_rcu(&p->tasks, &init_task.tasks); __this_cpu_inc(process_counts); - } else { - current->signal->nr_threads++; - atomic_inc(¤t->signal->live); - atomic_inc(¤t->signal->sigcnt); - p->group_leader = current->group_leader; - list_add_tail_rcu(&p->thread_group, - &p->group_leader->thread_group); - list_add_tail_rcu(&p->thread_node, - &p->signal->thread_head); } attach_pid(p, PIDTYPE_PID, pid); nr_threads++; @@ -1502,9 +1496,7 @@ static struct task_struct *copy_process(unsigned long clone_flags, total_forks++; spin_unlock(¤t->sighand->siglock); - syscall_tracepoint_update(p); write_unlock_irq(&tasklist_lock); - proc_fork_connector(p); cgroup_post_fork(p); if (clone_flags & CLONE_THREAD) @@ -1539,9 +1531,8 @@ bad_fork_cleanup_semundo: exit_sem(p); bad_fork_cleanup_audit: audit_free(p); -bad_fork_cleanup_perf: - 
perf_event_free_task(p); bad_fork_cleanup_policy: + perf_event_free_task(p); #ifdef CONFIG_NUMA mpol_put(p->mempolicy); bad_fork_cleanup_cgroup: @@ -1633,12 +1624,10 @@ long do_fork(unsigned long clone_flags, */ if (!IS_ERR(p)) { struct completion vfork; - struct pid *pid; trace_sched_process_fork(current, p); - pid = get_task_pid(p, PIDTYPE_PID); - nr = pid_vnr(pid); + nr = task_pid_vnr(p); if (clone_flags & CLONE_PARENT_SETTID) put_user(nr, parent_tidptr); @@ -1653,14 +1642,12 @@ long do_fork(unsigned long clone_flags, /* forking complete and child started to run, tell ptracer */ if (unlikely(trace)) - ptrace_event_pid(trace, pid); + ptrace_event(trace, nr); if (clone_flags & CLONE_VFORK) { if (!wait_for_vfork_done(p, &vfork)) - ptrace_event_pid(PTRACE_EVENT_VFORK_DONE, pid); + ptrace_event(PTRACE_EVENT_VFORK_DONE, nr); } - - put_pid(pid); } else { nr = PTR_ERR(p); } diff --git a/kernel/freezer.c b/kernel/freezer.c index 8f9279b9c6d..aa6a8aadb91 100644 --- a/kernel/freezer.c +++ b/kernel/freezer.c @@ -42,9 +42,6 @@ bool freezing_slow_path(struct task_struct *p) if (p->flags & (PF_NOFREEZE | PF_SUSPEND_TASK)) return false; - if (test_thread_flag(TIF_MEMDIE)) - return false; - if (pm_nosig_freezing || cgroup_freezing(p)) return true; diff --git a/kernel/futex.c b/kernel/futex.c index ad971d0f0be..a9182febe37 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -68,9 +68,7 @@ #include "rtmutex_common.h" -#ifndef CONFIG_HAVE_FUTEX_CMPXCHG int __read_mostly futex_cmpxchg_enabled; -#endif #define FUTEX_HASHBITS (CONFIG_BASE_SMALL ? 4 : 8) @@ -593,55 +591,6 @@ void exit_pi_state_list(struct task_struct *curr) raw_spin_unlock_irq(&curr->pi_lock); } -/* - * We need to check the following states: - * - * Waiter | pi_state | pi->owner | uTID | uODIED | ? - * - * [1] NULL | --- | --- | 0 | 0/1 | Valid - * [2] NULL | --- | --- | >0 | 0/1 | Valid - * - * [3] Found | NULL | -- | Any | 0/1 | Invalid - * - * [4] Found | Found | NULL | 0 | 1 | Valid - * [5] Found | Found | NULL | >0 | 1 | Invalid - * - * [6] Found | Found | task | 0 | 1 | Valid - * - * [7] Found | Found | NULL | Any | 0 | Invalid - * - * [8] Found | Found | task | ==taskTID | 0/1 | Valid - * [9] Found | Found | task | 0 | 0 | Invalid - * [10] Found | Found | task | !=taskTID | 0/1 | Invalid - * - * [1] Indicates that the kernel can acquire the futex atomically. We - * came came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit. - * - * [2] Valid, if TID does not belong to a kernel thread. If no matching - * thread is found then it indicates that the owner TID has died. - * - * [3] Invalid. The waiter is queued on a non PI futex - * - * [4] Valid state after exit_robust_list(), which sets the user space - * value to FUTEX_WAITERS | FUTEX_OWNER_DIED. - * - * [5] The user space value got manipulated between exit_robust_list() - * and exit_pi_state_list() - * - * [6] Valid state after exit_pi_state_list() which sets the new owner in - * the pi_state but cannot access the user space value. - * - * [7] pi_state->owner can only be NULL when the OWNER_DIED bit is set. - * - * [8] Owner and user space value match - * - * [9] There is no transient state which sets the user space TID to 0 - * except exit_robust_list(), but this is indicated by the - * FUTEX_OWNER_DIED bit. See [4] - * - * [10] There is no transient state which leaves owner and user space - * TID out of sync. 
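The pi_state table removed above reasons about three fields packed into the 32-bit PI futex word. The following sketch decodes them in user space; the bit values match the ones published in the uapi futex header.

#include <stdint.h>
#include <stdio.h>

#define FUTEX_WAITERS    0x80000000u
#define FUTEX_OWNER_DIED 0x40000000u
#define FUTEX_TID_MASK   0x3fffffffu

static void describe(uint32_t uval)
{
    printf("owner tid %u%s%s\n",
           uval & FUTEX_TID_MASK,
           (uval & FUTEX_WAITERS)    ? ", waiters queued" : "",
           (uval & FUTEX_OWNER_DIED) ? ", owner died"     : "");
}

int main(void)
{
    describe(1234);                              /* uncontended, owned by tid 1234 */
    describe(1234 | FUTEX_WAITERS);              /* contended */
    describe(FUTEX_WAITERS | FUTEX_OWNER_DIED);  /* robust-list cleanup set TID to 0 */
    return 0;
}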
- */ static int lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, union futex_key *key, struct futex_pi_state **ps) @@ -657,13 +606,12 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, plist_for_each_entry_safe(this, next, head, list) { if (match_futex(&this->key, key)) { /* - * Sanity check the waiter before increasing - * the refcount and attaching to it. + * Another waiter already exists - bump up + * the refcount and return its pi_state: */ pi_state = this->pi_state; /* - * Userspace might have messed up non-PI and - * PI futexes [3] + * Userspace might have messed up non-PI and PI futexes */ if (unlikely(!pi_state)) return -EINVAL; @@ -671,70 +619,34 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, WARN_ON(!atomic_read(&pi_state->refcount)); /* - * Handle the owner died case: + * When pi_state->owner is NULL then the owner died + * and another waiter is on the fly. pi_state->owner + * is fixed up by the task which acquires + * pi_state->rt_mutex. + * + * We do not check for pid == 0 which can happen when + * the owner died and robust_list_exit() cleared the + * TID. */ - if (uval & FUTEX_OWNER_DIED) { - /* - * exit_pi_state_list sets owner to NULL and - * wakes the topmost waiter. The task which - * acquires the pi_state->rt_mutex will fixup - * owner. - */ - if (!pi_state->owner) { - /* - * No pi state owner, but the user - * space TID is not 0. Inconsistent - * state. [5] - */ - if (pid) - return -EINVAL; - /* - * Take a ref on the state and - * return. [4] - */ - goto out_state; - } - - /* - * If TID is 0, then either the dying owner - * has not yet executed exit_pi_state_list() - * or some waiter acquired the rtmutex in the - * pi state, but did not yet fixup the TID in - * user space. - * - * Take a ref on the state and return. [6] - */ - if (!pid) - goto out_state; - } else { + if (pid && pi_state->owner) { /* - * If the owner died bit is not set, - * then the pi_state must have an - * owner. [7] + * Bail out if user space manipulated the + * futex value. */ - if (!pi_state->owner) + if (pid != task_pid_vnr(pi_state->owner)) return -EINVAL; } - /* - * Bail out if user space manipulated the - * futex value. If pi state exists then the - * owner TID must be the same as the user - * space TID. [9/10] - */ - if (pid != task_pid_vnr(pi_state->owner)) - return -EINVAL; - - out_state: atomic_inc(&pi_state->refcount); *ps = pi_state; + return 0; } } /* * We are the first waiter - try to look up the real owner and attach - * the new pi_state to it, but bail out when TID = 0 [1] + * the new pi_state to it, but bail out when TID = 0 */ if (!pid) return -ESRCH; @@ -742,11 +654,6 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, if (!p) return -ESRCH; - if (!p->mm) { - put_task_struct(p); - return -EPERM; - } - /* * We need to look at the task state flags to figure out, * whether the task is exiting. To protect against the do_exit @@ -767,9 +674,6 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, return ret; } - /* - * No existing pi state. First waiter. [2] - */ pi_state = alloc_pi_state(); /* @@ -841,18 +745,10 @@ retry: return -EDEADLK; /* - * Surprise - we got the lock, but we do not trust user space at all. + * Surprise - we got the lock. Just return to userspace: */ - if (unlikely(!curval)) { - /* - * We verify whether there is kernel state for this - * futex. If not, we can safely assume, that the 0 -> - * TID transition is correct. If state exists, we do - * not bother to fixup the user space state as it was - * corrupted already. 
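lookup_pi_state() and futex_lock_pi_atomic() above both revolve around the atomic 0 -> TID transition of the futex word. A hedged userspace sketch of that uncontended fast path; the kernel slow path it would fall back to is only indicated in a comment.

#define _GNU_SOURCE
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static _Atomic uint32_t futex_word;   /* 0 means unlocked */

static int try_lock_fastpath(void)
{
    uint32_t expected = 0;
    uint32_t tid = (uint32_t)syscall(SYS_gettid);

    /* cmpxchg 0 -> TID; on failure the word holds the current owner and a
     * real implementation would call futex(FUTEX_LOCK_PI, ...) here. */
    return atomic_compare_exchange_strong(&futex_word, &expected, tid);
}

int main(void)
{
    printf("first try:  %s\n", try_lock_fastpath() ? "acquired" : "contended");
    printf("second try: %s\n", try_lock_fastpath() ? "acquired" : "contended");
    return 0;
}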
- */ - return futex_top_waiter(hb, key) ? -EINVAL : 1; - } + if (unlikely(!curval)) + return 1; uval = curval; @@ -982,7 +878,6 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this) struct task_struct *new_owner; struct futex_pi_state *pi_state = this->pi_state; u32 uninitialized_var(curval), newval; - int ret = 0; if (!pi_state) return -EINVAL; @@ -1006,19 +901,23 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this) new_owner = this->task; /* - * We pass it to the next owner. The WAITERS bit is always - * kept enabled while there is PI state around. We cleanup the - * owner died bit, because we are the owner. + * We pass it to the next owner. (The WAITERS bit is always + * kept enabled while there is PI state around. We must also + * preserve the owner died bit.) */ - newval = FUTEX_WAITERS | task_pid_vnr(new_owner); + if (!(uval & FUTEX_OWNER_DIED)) { + int ret = 0; - if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) - ret = -EFAULT; - else if (curval != uval) - ret = -EINVAL; - if (ret) { - raw_spin_unlock(&pi_state->pi_mutex.wait_lock); - return ret; + newval = FUTEX_WAITERS | task_pid_vnr(new_owner); + + if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) + ret = -EFAULT; + else if (curval != uval) + ret = -EINVAL; + if (ret) { + raw_spin_unlock(&pi_state->pi_mutex.wait_lock); + return ret; + } } raw_spin_lock_irq(&pi_state->owner->pi_lock); @@ -1297,7 +1196,7 @@ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key, * * Return: * 0 - failed to acquire the lock atomically; - * >0 - acquired the lock, return value is vpid of the top_waiter + * 1 - acquired the lock; * <0 - error */ static int futex_proxy_trylock_atomic(u32 __user *pifutex, @@ -1308,7 +1207,7 @@ static int futex_proxy_trylock_atomic(u32 __user *pifutex, { struct futex_q *top_waiter = NULL; u32 curval; - int ret, vpid; + int ret; if (get_futex_value_locked(&curval, pifutex)) return -EFAULT; @@ -1336,13 +1235,11 @@ static int futex_proxy_trylock_atomic(u32 __user *pifutex, * the contended case or if set_waiters is 1. The pi_state is returned * in ps in contended cases. */ - vpid = task_pid_vnr(top_waiter->task); ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task, set_waiters); - if (ret == 1) { + if (ret == 1) requeue_pi_wake_futex(top_waiter, key2, hb2); - return vpid; - } + return ret; } @@ -1374,6 +1271,7 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, struct futex_hash_bucket *hb1, *hb2; struct plist_head *head1; struct futex_q *this, *next; + u32 curval2; if (requeue_pi) { /* @@ -1475,25 +1373,16 @@ retry_private: * At this point the top_waiter has either taken uaddr2 or is * waiting on it. If the former, then the pi_state will not * exist yet, look it up one more time to ensure we have a - * reference to it. If the lock was taken, ret contains the - * vpid of the top waiter task. + * reference to it. */ - if (ret > 0) { + if (ret == 1) { WARN_ON(pi_state); drop_count++; task_count++; - /* - * If we acquired the lock, then the user - * space value of uaddr2 should be vpid. It - * cannot be changed by the top waiter as it - * is blocked on hb2 lock if it tries to do - * so. If something fiddled with it behind our - * back the pi state lookup might unearth - * it. So we rather use the known value than - * rereading and handing potential crap to - * lookup_pi_state. 
- */ - ret = lookup_pi_state(ret, hb2, &key2, &pi_state); + ret = get_futex_value_locked(&curval2, uaddr2); + if (!ret) + ret = lookup_pi_state(curval2, hb2, &key2, + &pi_state); } switch (ret) { @@ -2263,10 +2152,9 @@ retry: /* * To avoid races, try to do the TID -> 0 atomic transition * again. If it succeeds then we can return without waking - * anyone else up. We only try this if neither the waiters nor - * the owner died bit are set. + * anyone else up: */ - if (!(uval & ~FUTEX_TID_MASK) && + if (!(uval & FUTEX_OWNER_DIED) && cmpxchg_futex_value_locked(&uval, uaddr, vpid, 0)) goto pi_faulted; /* @@ -2298,9 +2186,11 @@ retry: /* * No waiters - kernel unlocks the futex: */ - ret = unlock_futex_pi(uaddr, uval); - if (ret == -EFAULT) - goto pi_faulted; + if (!(uval & FUTEX_OWNER_DIED)) { + ret = unlock_futex_pi(uaddr, uval); + if (ret == -EFAULT) + goto pi_faulted; + } out_unlock: spin_unlock(&hb->lock); @@ -2865,10 +2755,10 @@ SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val, return do_futex(uaddr, op, val, tp, uaddr2, val2, val3); } -static void __init futex_detect_cmpxchg(void) +static int __init futex_init(void) { -#ifndef CONFIG_HAVE_FUTEX_CMPXCHG u32 curval; + int i; /* * This will fail and we want it. Some arch implementations do @@ -2882,14 +2772,6 @@ static void __init futex_detect_cmpxchg(void) */ if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT) futex_cmpxchg_enabled = 1; -#endif -} - -static int __init futex_init(void) -{ - int i; - - futex_detect_cmpxchg(); for (i = 0; i < ARRAY_SIZE(futex_queues); i++) { plist_head_init(&futex_queues[i].chain); diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c index 1e84be10743..e268e269a69 100644 --- a/kernel/hrtimer.c +++ b/kernel/hrtimer.c @@ -246,11 +246,6 @@ again: goto again; } timer->base = new_base; - } else { - if (cpu != this_cpu && hrtimer_check_target(timer, new_base)) { - cpu = this_cpu; - goto again; - } } return new_base; } @@ -586,23 +581,6 @@ hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal) cpu_base->expires_next.tv64 = expires_next.tv64; - /* - * If a hang was detected in the last timer interrupt then we - * leave the hang delay active in the hardware. We want the - * system to make progress. That also prevents the following - * scenario: - * T1 expires 50ms from now - * T2 expires 5s from now - * - * T1 is removed, so this code is called and would reprogram - * the hardware to 5s from now. Any hrtimer_start after that - * will not reprogram the hardware due to hang_detected being - * set. So we'd effectivly block all timers until the T2 event - * fires. 
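The __hrtimer_start_range_ns() hunk above is about turning a relative timeout into an absolute expiry against the clock base's current time. A rough userspace analogue, assuming CLOCK_MONOTONIC as the base and using timerfd with an absolute expiry:

#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec now;
    struct itimerspec its = { 0 };
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);

    clock_gettime(CLOCK_MONOTONIC, &now);        /* base->get_time() analogue */

    /* absolute expiry = now + 100ms relative timeout */
    its.it_value.tv_sec  = now.tv_sec;
    its.it_value.tv_nsec = now.tv_nsec + 100 * 1000 * 1000;
    if (its.it_value.tv_nsec >= 1000000000L) {   /* normalise, ktime_add_safe-style */
        its.it_value.tv_sec++;
        its.it_value.tv_nsec -= 1000000000L;
    }

    timerfd_settime(fd, TFD_TIMER_ABSTIME, &its, NULL);

    uint64_t expirations;
    read(fd, &expirations, sizeof(expirations)); /* blocks until the absolute time */
    printf("timer fired %llu time(s)\n", (unsigned long long)expirations);
    close(fd);
    return 0;
}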
- */ - if (cpu_base->hang_detected) - return; - if (cpu_base->expires_next.tv64 != KTIME_MAX) tick_program_event(cpu_base->expires_next, 1); } @@ -1000,8 +978,11 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, /* Remove an active timer from the queue: */ ret = remove_hrtimer(timer, base); + /* Switch the timer base, if necessary: */ + new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED); + if (mode & HRTIMER_MODE_REL) { - tim = ktime_add_safe(tim, base->get_time()); + tim = ktime_add_safe(tim, new_base->get_time()); /* * CONFIG_TIME_LOW_RES is a temporary way for architectures * to signal that they simply return xtime in @@ -1016,9 +997,6 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, hrtimer_set_expires_range_ns(timer, tim, delta_ns); - /* Switch the timer base, if necessary: */ - new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED); - timer_stats_hrtimer_set_start_info(timer); leftmost = enqueue_hrtimer(timer, new_base); diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index a79d267b64e..dc4db3228dc 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -150,7 +150,7 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask, struct irq_chip *chip = irq_data_get_irq_chip(data); int ret; - ret = chip->irq_set_affinity(data, mask, force); + ret = chip->irq_set_affinity(data, mask, false); switch (ret) { case IRQ_SET_MASK_OK: cpumask_copy(data->affinity, mask); @@ -162,8 +162,7 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask, return ret; } -int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask, - bool force) +int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask) { struct irq_chip *chip = irq_data_get_irq_chip(data); struct irq_desc *desc = irq_data_to_desc(data); @@ -173,7 +172,7 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask, return -EINVAL; if (irq_can_move_pcntxt(data)) { - ret = irq_do_set_affinity(data, mask, force); + ret = irq_do_set_affinity(data, mask, false); } else { irqd_set_move_pending(data); irq_copy_pending(desc, mask); @@ -188,7 +187,13 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask, return ret; } -int __irq_set_affinity(unsigned int irq, const struct cpumask *mask, bool force) +/** + * irq_set_affinity - Set the irq affinity of a given irq + * @irq: Interrupt to set affinity + * @mask: cpumask + * + */ +int irq_set_affinity(unsigned int irq, const struct cpumask *mask) { struct irq_desc *desc = irq_to_desc(irq); unsigned long flags; @@ -198,7 +203,7 @@ int __irq_set_affinity(unsigned int irq, const struct cpumask *mask, bool force) return -EINVAL; raw_spin_lock_irqsave(&desc->lock, flags); - ret = irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask, force); + ret = __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask); raw_spin_unlock_irqrestore(&desc->lock, flags); return ret; } @@ -797,7 +802,8 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc, static void wake_threads_waitq(struct irq_desc *desc) { - if (atomic_dec_and_test(&desc->threads_active)) + if (atomic_dec_and_test(&desc->threads_active) && + waitqueue_active(&desc->wait_for_threads)) wake_up(&desc->wait_for_threads); } @@ -861,8 +867,8 @@ static int irq_thread(void *data) irq_thread_check_affinity(desc, action); action_ret = handler_fn(desc, action); - if (action_ret == IRQ_HANDLED) - atomic_inc(&desc->threads_handled); + if 
(!noirqdebug) + note_interrupt(action->irq, desc, action_ret); wake_threads_waitq(desc); } diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c index febcee3c2aa..7b5f012bde9 100644 --- a/kernel/irq/spurious.c +++ b/kernel/irq/spurious.c @@ -265,119 +265,21 @@ try_misrouted_irq(unsigned int irq, struct irq_desc *desc, return action && (action->flags & IRQF_IRQPOLL); } -#define SPURIOUS_DEFERRED 0x80000000 - void note_interrupt(unsigned int irq, struct irq_desc *desc, irqreturn_t action_ret) { if (desc->istate & IRQS_POLL_INPROGRESS) return; + /* we get here again via the threaded handler */ + if (action_ret == IRQ_WAKE_THREAD) + return; + if (bad_action_ret(action_ret)) { report_bad_irq(irq, desc, action_ret); return; } - /* - * We cannot call note_interrupt from the threaded handler - * because we need to look at the compound of all handlers - * (primary and threaded). Aside of that in the threaded - * shared case we have no serialization against an incoming - * hardware interrupt while we are dealing with a threaded - * result. - * - * So in case a thread is woken, we just note the fact and - * defer the analysis to the next hardware interrupt. - * - * The threaded handlers store whether they sucessfully - * handled an interrupt and we check whether that number - * changed versus the last invocation. - * - * We could handle all interrupts with the delayed by one - * mechanism, but for the non forced threaded case we'd just - * add pointless overhead to the straight hardirq interrupts - * for the sake of a few lines less code. - */ - if (action_ret & IRQ_WAKE_THREAD) { - /* - * There is a thread woken. Check whether one of the - * shared primary handlers returned IRQ_HANDLED. If - * not we defer the spurious detection to the next - * interrupt. - */ - if (action_ret == IRQ_WAKE_THREAD) { - int handled; - /* - * We use bit 31 of thread_handled_last to - * denote the deferred spurious detection - * active. No locking necessary as - * thread_handled_last is only accessed here - * and we have the guarantee that hard - * interrupts are not reentrant. - */ - if (!(desc->threads_handled_last & SPURIOUS_DEFERRED)) { - desc->threads_handled_last |= SPURIOUS_DEFERRED; - return; - } - /* - * Check whether one of the threaded handlers - * returned IRQ_HANDLED since the last - * interrupt happened. - * - * For simplicity we just set bit 31, as it is - * set in threads_handled_last as well. So we - * avoid extra masking. And we really do not - * care about the high bits of the handled - * count. We just care about the count being - * different than the one we saw before. - */ - handled = atomic_read(&desc->threads_handled); - handled |= SPURIOUS_DEFERRED; - if (handled != desc->threads_handled_last) { - action_ret = IRQ_HANDLED; - /* - * Note: We keep the SPURIOUS_DEFERRED - * bit set. We are handling the - * previous invocation right now. - * Keep it for the current one, so the - * next hardware interrupt will - * account for it. - */ - desc->threads_handled_last = handled; - } else { - /* - * None of the threaded handlers felt - * responsible for the last interrupt - * - * We keep the SPURIOUS_DEFERRED bit - * set in threads_handled_last as we - * need to account for the current - * interrupt as well. - */ - action_ret = IRQ_NONE; - } - } else { - /* - * One of the primary handlers returned - * IRQ_HANDLED. So we don't care about the - * threaded handlers on the same line. Clear - * the deferred detection bit. 
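The note_interrupt() logic removed above defers the spurious-IRQ decision by stashing a flag in bit 31 of the last-seen handled count and then only checking whether the count changed. A compact model of that trick, illustrative rather than a copy of the kernel code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEFERRED 0x80000000u

static uint32_t handled;        /* bumped by the (modelled) threaded handler */
static uint32_t handled_last;   /* snapshot plus DEFERRED flag, hardirq side */

static bool interrupt_was_handled(void)
{
    if (!(handled_last & DEFERRED)) {
        handled_last |= DEFERRED;   /* first deferral: just note it and wait */
        return true;                /* give it the benefit of the doubt once */
    }

    uint32_t snap = handled | DEFERRED;
    bool changed = (snap != handled_last);
    handled_last = snap;            /* account the current interrupt next time */
    return changed;
}

int main(void)
{
    printf("%d\n", interrupt_was_handled());  /* deferred, treated as handled */
    handled++;                                /* thread reports one handled IRQ */
    printf("%d\n", interrupt_was_handled());  /* counter changed: handled */
    printf("%d\n", interrupt_was_handled());  /* no change: looks spurious */
    return 0;
}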
- * - * In theory we could/should check whether the - * deferred bit is set and take the result of - * the previous run into account here as - * well. But it's really not worth the - * trouble. If every other interrupt is - * handled we never trigger the spurious - * detector. And if this is just the one out - * of 100k unhandled ones which is handled - * then we merily delay the spurious detection - * by one hard interrupt. Not a real problem. - */ - desc->threads_handled_last &= ~SPURIOUS_DEFERRED; - } - } - if (unlikely(action_ret == IRQ_NONE)) { /* * If we are seeing only the odd spurious IRQ caused by diff --git a/kernel/kcmp.c b/kernel/kcmp.c index 0aa69ea1d8f..e30ac0fe61c 100644 --- a/kernel/kcmp.c +++ b/kernel/kcmp.c @@ -44,12 +44,11 @@ static long kptr_obfuscate(long v, int type) */ static int kcmp_ptr(void *v1, void *v2, enum kcmp_type type) { - long t1, t2; + long ret; - t1 = kptr_obfuscate((long)v1, type); - t2 = kptr_obfuscate((long)v2, type); + ret = kptr_obfuscate((long)v1, type) - kptr_obfuscate((long)v2, type); - return (t1 < t2) | ((t1 > t2) << 1); + return (ret < 0) | ((ret > 0) << 1); } /* The caller must have pinned the task */ diff --git a/kernel/module.c b/kernel/module.c index 414387c7250..2b741ae39ab 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -1866,9 +1866,7 @@ static void free_module(struct module *mod) /* We leave it in list to prevent duplicate loads, but make sure * that noone uses it while it's being deconstructed. */ - mutex_lock(&module_mutex); mod->state = MODULE_STATE_UNFORMED; - mutex_unlock(&module_mutex); /* Remove dynamic debug info */ ddebug_remove_module(mod->name); @@ -3281,9 +3279,6 @@ static int load_module(struct load_info *info, const char __user *uargs, dynamic_debug_setup(info->debug, info->num_debug); - /* Ftrace init must be called in the MODULE_STATE_UNFORMED state */ - ftrace_module_init(mod); - /* Finally it's fully formed, ready to start executing. */ err = complete_formation(mod, info); if (err) diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c index e32703d5e0a..6917e8edb48 100644 --- a/kernel/pid_namespace.c +++ b/kernel/pid_namespace.c @@ -312,9 +312,7 @@ static void *pidns_get(struct task_struct *task) struct pid_namespace *ns; rcu_read_lock(); - ns = task_active_pid_ns(task); - if (ns) - get_pid_ns(ns); + ns = get_pid_ns(task_active_pid_ns(task)); rcu_read_unlock(); return ns; diff --git a/kernel/posix-timers.c b/kernel/posix-timers.c index 77e6b83c043..424c2d4265c 100644 --- a/kernel/posix-timers.c +++ b/kernel/posix-timers.c @@ -634,7 +634,6 @@ SYSCALL_DEFINE3(timer_create, const clockid_t, which_clock, goto out; } } else { - memset(&event.sigev_value, 0, sizeof(event.sigev_value)); event.sigev_notify = SIGEV_SIGNAL; event.sigev_signo = SIGALRM; event.sigev_value.sival_int = new_timer->it_id; diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c index 1634dc6e2fe..b26f5f1e773 100644 --- a/kernel/power/hibernate.c +++ b/kernel/power/hibernate.c @@ -491,14 +491,8 @@ int hibernation_restore(int platform_mode) error = dpm_suspend_start(PMSG_QUIESCE); if (!error) { error = resume_target_kernel(platform_mode); - /* - * The above should either succeed and jump to the new kernel, - * or return with an error. Otherwise things are just - * undefined, so let's be paranoid. 
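The kernel/kcmp.c hunk reverts from comparing the two obfuscated values directly back to subtracting them. A short demonstration of why the comparison form is the safer encoding of less/equal/greater: the subtraction can overflow for operands of opposite sign (signed overflow is undefined behaviour in C), flipping the result.

#include <limits.h>
#include <stdio.h>

/* encode "less / equal / greater" as 1 / 0 / 2, like kcmp_ptr() */
static int cmp_safe(long a, long b)   { return (a < b) | ((a > b) << 1); }
static int cmp_by_sub(long a, long b) { long d = a - b; return (d < 0) | ((d > 0) << 1); }

int main(void)
{
    long a = LONG_MIN + 1, b = 2;     /* a is clearly less than b */

    printf("safe:        %d\n", cmp_safe(a, b));    /* 1: a < b, as expected */
    printf("subtraction: %d\n", cmp_by_sub(a, b));  /* typically 2: the difference wrapped */
    return 0;
}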
- */ - BUG_ON(!error); + dpm_resume_end(PMSG_RECOVER); } - dpm_resume_end(PMSG_RECOVER); pm_restore_gfp_mask(); ftrace_start(); resume_console(); diff --git a/kernel/power/main.c b/kernel/power/main.c index 312c1b2c725..d77663bfede 100644 --- a/kernel/power/main.c +++ b/kernel/power/main.c @@ -293,12 +293,12 @@ static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr, { char *s = buf; #ifdef CONFIG_SUSPEND - suspend_state_t i; - - for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++) - if (pm_states[i].state) - s += sprintf(s,"%s ", pm_states[i].label); + int i; + for (i = 0; i < PM_SUSPEND_MAX; i++) { + if (pm_states[i] && valid_state(i)) + s += sprintf(s,"%s ", pm_states[i]); + } #endif #ifdef CONFIG_HIBERNATION s += sprintf(s, "%s\n", "disk"); @@ -314,7 +314,7 @@ static suspend_state_t decode_state(const char *buf, size_t n) { #ifdef CONFIG_SUSPEND suspend_state_t state = PM_SUSPEND_MIN; - struct pm_sleep_state *s; + const char * const *s; #endif char *p; int len; @@ -328,9 +328,8 @@ static suspend_state_t decode_state(const char *buf, size_t n) #ifdef CONFIG_SUSPEND for (s = &pm_states[state]; state < PM_SUSPEND_MAX; s++, state++) - if (s->state && len == strlen(s->label) - && !strncmp(buf, s->label, len)) - return s->state; + if (*s && len == strlen(*s) && !strncmp(buf, *s, len)) + return state; #endif return PM_SUSPEND_ON; @@ -446,8 +445,8 @@ static ssize_t autosleep_show(struct kobject *kobj, #ifdef CONFIG_SUSPEND if (state < PM_SUSPEND_MAX) - return sprintf(buf, "%s\n", pm_states[state].state ? - pm_states[state].label : "error"); + return sprintf(buf, "%s\n", valid_state(state) ? + pm_states[state] : "error"); #endif #ifdef CONFIG_HIBERNATION return sprintf(buf, "disk\n"); diff --git a/kernel/power/power.h b/kernel/power/power.h index 6ccb7f6b4dc..49deae3b345 100644 --- a/kernel/power/power.h +++ b/kernel/power/power.h @@ -175,20 +175,17 @@ extern void swsusp_show_speed(struct timeval *, struct timeval *, unsigned int, char *); #ifdef CONFIG_SUSPEND -struct pm_sleep_state { - const char *label; - suspend_state_t state; -}; - /* kernel/power/suspend.c */ -extern struct pm_sleep_state pm_states[]; +extern const char *const pm_states[]; +extern bool valid_state(suspend_state_t state); extern int suspend_devices_and_enter(suspend_state_t state); #else /* !CONFIG_SUSPEND */ static inline int suspend_devices_and_enter(suspend_state_t state) { return -ENOSYS; } +static inline bool valid_state(suspend_state_t state) { return false; } #endif /* !CONFIG_SUSPEND */ #ifdef CONFIG_PM_TEST_SUSPEND diff --git a/kernel/power/process.c b/kernel/power/process.c index f1fe7ec110b..06ec8869dbf 100644 --- a/kernel/power/process.c +++ b/kernel/power/process.c @@ -107,28 +107,6 @@ static int try_to_freeze_tasks(bool user_only) return todo ? -EBUSY : 0; } -/* - * Returns true if all freezable tasks (except for current) are frozen already - */ -static bool check_frozen_processes(void) -{ - struct task_struct *g, *p; - bool ret = true; - - read_lock(&tasklist_lock); - for_each_process_thread(g, p) { - if (p != current && !freezer_should_skip(p) && - !frozen(p)) { - ret = false; - goto done; - } - } -done: - read_unlock(&tasklist_lock); - - return ret; -} - /** * freeze_processes - Signal user space processes to enter the refrigerator. * The current thread will not be frozen. 
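decode_state() in the kernel/power/main.c hunk matches the string written to /sys/power/state against a table of labels. A hypothetical standalone version of that lookup; the label set and return values here are assumptions for the example, not the kernel's exact table.

#include <stdio.h>
#include <string.h>

enum sleep_state { STATE_ON = 0, STATE_FREEZE, STATE_STANDBY, STATE_MEM };

struct sleep_entry { const char *label; enum sleep_state state; };

static const struct sleep_entry states[] = {
    { "freeze",  STATE_FREEZE  },
    { "standby", STATE_STANDBY },
    { "mem",     STATE_MEM     },
};

static enum sleep_state decode_state(const char *buf, size_t n)
{
    /* trim a trailing newline, as the sysfs handler does */
    const char *p = memchr(buf, '\n', n);
    size_t len = p ? (size_t)(p - buf) : n;

    for (size_t i = 0; i < sizeof(states) / sizeof(states[0]); i++)
        if (len == strlen(states[i].label) && !strncmp(buf, states[i].label, len))
            return states[i].state;

    return STATE_ON;   /* unrecognised input */
}

int main(void)
{
    printf("%d %d\n", decode_state("mem\n", 4), decode_state("bogus", 5));  /* 3 0 */
    return 0;
}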
The same process that calls @@ -139,7 +117,6 @@ done: int freeze_processes(void) { int error; - int oom_kills_saved; error = __usermodehelper_disable(UMH_FREEZING); if (error) @@ -153,27 +130,12 @@ int freeze_processes(void) printk("Freezing user space processes ... "); pm_freezing = true; - oom_kills_saved = oom_kills_count(); error = try_to_freeze_tasks(true); if (!error) { + printk("done."); __usermodehelper_set_disable_depth(UMH_DISABLED); oom_killer_disable(); - - /* - * There might have been an OOM kill while we were - * freezing tasks and the killed task might be still - * on the way out so we have to double check for race. - */ - if (oom_kills_count() != oom_kills_saved && - !check_frozen_processes()) { - __usermodehelper_set_disable_depth(UMH_ENABLED); - printk("OOM in progress."); - error = -EBUSY; - goto done; - } - printk("done."); } -done: printk("\n"); BUG_ON(in_atomic()); @@ -222,7 +184,6 @@ void thaw_processes(void) printk("Restarting tasks ... "); - __usermodehelper_set_disable_depth(UMH_FREEZING); thaw_workqueues(); read_lock(&tasklist_lock); diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c index 86e021b76c3..454568e6c8d 100644 --- a/kernel/power/suspend.c +++ b/kernel/power/suspend.c @@ -30,10 +30,10 @@ #include "power.h" -struct pm_sleep_state pm_states[PM_SUSPEND_MAX] = { - [PM_SUSPEND_FREEZE] = { .label = "freeze", .state = PM_SUSPEND_FREEZE }, - [PM_SUSPEND_STANDBY] = { .label = "standby", }, - [PM_SUSPEND_MEM] = { .label = "mem", }, +const char *const pm_states[PM_SUSPEND_MAX] = { + [PM_SUSPEND_FREEZE] = "freeze", + [PM_SUSPEND_STANDBY] = "standby", + [PM_SUSPEND_MEM] = "mem", }; static const struct platform_suspend_ops *suspend_ops; @@ -63,34 +63,42 @@ void freeze_wake(void) } EXPORT_SYMBOL_GPL(freeze_wake); -static bool valid_state(suspend_state_t state) -{ - /* - * PM_SUSPEND_STANDBY and PM_SUSPEND_MEM states need low level - * support and need to be valid to the low level - * implementation, no valid callback implies that none are valid. - */ - return suspend_ops && suspend_ops->valid && suspend_ops->valid(state); -} - /** * suspend_set_ops - Set the global suspend method table. * @ops: Suspend operations to use. */ void suspend_set_ops(const struct platform_suspend_ops *ops) { - suspend_state_t i; - lock_system_sleep(); - suspend_ops = ops; - for (i = PM_SUSPEND_STANDBY; i <= PM_SUSPEND_MEM; i++) - pm_states[i].state = valid_state(i) ? i : 0; - unlock_system_sleep(); } EXPORT_SYMBOL_GPL(suspend_set_ops); +bool valid_state(suspend_state_t state) +{ + if (state == PM_SUSPEND_FREEZE) { +#ifdef CONFIG_PM_DEBUG + if (pm_test_level != TEST_NONE && + pm_test_level != TEST_FREEZER && + pm_test_level != TEST_DEVICES && + pm_test_level != TEST_PLATFORM) { + printk(KERN_WARNING "Unsupported pm_test mode for " + "freeze state, please choose " + "none/freezer/devices/platform.\n"); + return false; + } +#endif + return true; + } + /* + * PM_SUSPEND_STANDBY and PM_SUSPEND_MEMORY states need lowlevel + * support and need to be valid to the lowlevel + * implementation, no valid callback implies that none are valid. + */ + return suspend_ops && suspend_ops->valid && suspend_ops->valid(state); +} + /** * suspend_valid_only_mem - Generic memory-only valid callback. 
* @@ -317,17 +325,9 @@ static int enter_state(suspend_state_t state) { int error; - if (state == PM_SUSPEND_FREEZE) { -#ifdef CONFIG_PM_DEBUG - if (pm_test_level != TEST_NONE && pm_test_level <= TEST_CPUS) { - pr_warning("PM: Unsupported test mode for freeze state," - "please choose none/freezer/devices/platform.\n"); - return -EAGAIN; - } -#endif - } else if (!valid_state(state)) { - return -EINVAL; - } + if (!valid_state(state)) + return -ENODEV; + if (!mutex_trylock(&pm_mutex)) return -EBUSY; @@ -338,7 +338,7 @@ static int enter_state(suspend_state_t state) sys_sync(); printk("done.\n"); - pr_debug("PM: Preparing system for %s sleep\n", pm_states[state].label); + pr_debug("PM: Preparing system for %s sleep\n", pm_states[state]); error = suspend_prepare(state); if (error) goto Unlock; @@ -346,7 +346,7 @@ static int enter_state(suspend_state_t state) if (suspend_test(TEST_FREEZER)) goto Finish; - pr_debug("PM: Entering %s sleep\n", pm_states[state].label); + pr_debug("PM: Entering %s sleep\n", pm_states[state]); pm_restrict_gfp_mask(); error = suspend_devices_and_enter(state); pm_restore_gfp_mask(); diff --git a/kernel/power/suspend_test.c b/kernel/power/suspend_test.c index 269b097e78e..9b2a1d58558 100644 --- a/kernel/power/suspend_test.c +++ b/kernel/power/suspend_test.c @@ -92,13 +92,13 @@ static void __init test_wakealarm(struct rtc_device *rtc, suspend_state_t state) } if (state == PM_SUSPEND_MEM) { - printk(info_test, pm_states[state].label); + printk(info_test, pm_states[state]); status = pm_suspend(state); if (status == -ENODEV) state = PM_SUSPEND_STANDBY; } if (state == PM_SUSPEND_STANDBY) { - printk(info_test, pm_states[state].label); + printk(info_test, pm_states[state]); status = pm_suspend(state); } if (status < 0) @@ -136,16 +136,18 @@ static char warn_bad_state[] __initdata = static int __init setup_test_suspend(char *value) { - suspend_state_t i; + unsigned i; /* "=mem" ==> "mem" */ value++; - for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++) - if (!strcmp(pm_states[i].label, value)) { - test_state = pm_states[i].state; - return 0; - } - + for (i = 0; i < PM_SUSPEND_MAX; i++) { + if (!pm_states[i]) + continue; + if (strcmp(pm_states[i], value) != 0) + continue; + test_state = (__force suspend_state_t) i; + return 0; + } printk(warn_bad_state, value); return 0; } @@ -162,8 +164,8 @@ static int __init test_suspend(void) /* PM is initialized by now; is that state testable? */ if (test_state == PM_SUSPEND_ON) goto done; - if (!pm_states[test_state].state) { - printk(warn_bad_state, pm_states[test_state].label); + if (!valid_state(test_state)) { + printk(warn_bad_state, pm_states[test_state]); goto done; } diff --git a/kernel/printk.c b/kernel/printk.c index 4762db7536c..a79918ad901 100644 --- a/kernel/printk.c +++ b/kernel/printk.c @@ -2496,7 +2496,7 @@ void wake_up_klogd(void) preempt_enable(); } -int printk_deferred(const char *fmt, ...) +int printk_sched(const char *fmt, ...) 
{ unsigned long flags; va_list args; diff --git a/kernel/rtmutex-debug.h b/kernel/rtmutex-debug.h index ab29b6a2266..14193d596d7 100644 --- a/kernel/rtmutex-debug.h +++ b/kernel/rtmutex-debug.h @@ -31,8 +31,3 @@ static inline int debug_rt_mutex_detect_deadlock(struct rt_mutex_waiter *waiter, { return (waiter != NULL); } - -static inline void rt_mutex_print_deadlock(struct rt_mutex_waiter *w) -{ - debug_rt_mutex_print_deadlock(w); -} diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c index d9ca207cec0..1e09308bf2a 100644 --- a/kernel/rtmutex.c +++ b/kernel/rtmutex.c @@ -82,47 +82,6 @@ static inline void mark_rt_mutex_waiters(struct rt_mutex *lock) owner = *p; } while (cmpxchg(p, owner, owner | RT_MUTEX_HAS_WAITERS) != owner); } - -/* - * Safe fastpath aware unlock: - * 1) Clear the waiters bit - * 2) Drop lock->wait_lock - * 3) Try to unlock the lock with cmpxchg - */ -static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock) - __releases(lock->wait_lock) -{ - struct task_struct *owner = rt_mutex_owner(lock); - - clear_rt_mutex_waiters(lock); - raw_spin_unlock(&lock->wait_lock); - /* - * If a new waiter comes in between the unlock and the cmpxchg - * we have two situations: - * - * unlock(wait_lock); - * lock(wait_lock); - * cmpxchg(p, owner, 0) == owner - * mark_rt_mutex_waiters(lock); - * acquire(lock); - * or: - * - * unlock(wait_lock); - * lock(wait_lock); - * mark_rt_mutex_waiters(lock); - * - * cmpxchg(p, owner, 0) != owner - * enqueue_waiter(); - * unlock(wait_lock); - * lock(wait_lock); - * wake waiter(); - * unlock(wait_lock); - * lock(wait_lock); - * acquire(lock); - */ - return rt_mutex_cmpxchg(lock, owner, NULL); -} - #else # define rt_mutex_cmpxchg(l,c,n) (0) static inline void mark_rt_mutex_waiters(struct rt_mutex *lock) @@ -130,17 +89,6 @@ static inline void mark_rt_mutex_waiters(struct rt_mutex *lock) lock->owner = (struct task_struct *) ((unsigned long)lock->owner | RT_MUTEX_HAS_WAITERS); } - -/* - * Simple slow path only version: lock->owner is protected by lock->wait_lock. - */ -static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock) - __releases(lock->wait_lock) -{ - lock->owner = NULL; - raw_spin_unlock(&lock->wait_lock); - return true; -} #endif /* @@ -194,11 +142,6 @@ static void rt_mutex_adjust_prio(struct task_struct *task) */ int max_lock_depth = 1024; -static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p) -{ - return p->pi_blocked_on ? p->pi_blocked_on->lock : NULL; -} - /* * Adjust the priority chain. Also used for deadlock detection. * Decreases task's usage by one - may thus free the task. @@ -207,7 +150,6 @@ static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p) static int rt_mutex_adjust_prio_chain(struct task_struct *task, int deadlock_detect, struct rt_mutex *orig_lock, - struct rt_mutex *next_lock, struct rt_mutex_waiter *orig_waiter, struct task_struct *top_task) { @@ -241,7 +183,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task, } put_task_struct(task); - return -EDEADLK; + return deadlock_detect ? -EDEADLK : 0; } retry: /* @@ -266,32 +208,13 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task, goto out_unlock_pi; /* - * We dropped all locks after taking a refcount on @task, so - * the task might have moved on in the lock chain or even left - * the chain completely and blocks now on an unrelated lock or - * on @orig_lock. - * - * We stored the lock on which @task was blocked in @next_lock, - * so we can detect the chain change. 
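The unlock_rt_mutex_safe() comment removed earlier in this rtmutex.c hunk describes a two-step fastpath-aware unlock: clear the waiters bit, then try to cmpxchg the owner away, and retry the slow path if a new waiter slipped in between. A minimal C11 model of that sequence, a sketch rather than the kernel's implementation:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HAS_WAITERS 1u   /* models RT_MUTEX_HAS_WAITERS in the owner word */

static _Atomic uintptr_t owner;   /* task "pointer" | HAS_WAITERS, 0 == free */

static void clear_waiters(void)
{
    atomic_fetch_and(&owner, ~(uintptr_t)HAS_WAITERS);   /* under wait_lock in the kernel */
}

static bool cmpxchg_unlock(uintptr_t me)
{
    uintptr_t expected = me;                 /* only succeeds if no waiter is marked */
    return atomic_compare_exchange_strong(&owner, &expected, 0);
}

int main(void)
{
    uintptr_t me = 0x1000;

    /* uncontended: clearing is a no-op and the cmpxchg succeeds */
    atomic_store(&owner, me);
    clear_waiters();
    printf("unlock #1: %s\n", cmpxchg_unlock(me) ? "fast path" : "retry slow path");

    /* contended: a waiter re-sets the bit between the two steps */
    atomic_store(&owner, me | HAS_WAITERS);
    clear_waiters();
    atomic_fetch_or(&owner, HAS_WAITERS);    /* simulated concurrent waiter */
    printf("unlock #2: %s\n", cmpxchg_unlock(me) ? "fast path" : "retry slow path");
    return 0;
}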
- */ - if (next_lock != waiter->lock) - goto out_unlock_pi; - - /* * Drop out, when the task has no waiters. Note, * top_waiter can be NULL, when we are in the deboosting * mode! */ - if (top_waiter) { - if (!task_has_pi_waiters(task)) - goto out_unlock_pi; - /* - * If deadlock detection is off, we stop here if we - * are not the top pi waiter of the task. - */ - if (!detect_deadlock && top_waiter != task_top_pi_waiter(task)) - goto out_unlock_pi; - } + if (top_waiter && (!task_has_pi_waiters(task) || + top_waiter != task_top_pi_waiter(task))) + goto out_unlock_pi; /* * When deadlock detection is off then we check, if further @@ -307,16 +230,11 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task, goto retry; } - /* - * Deadlock detection. If the lock is the same as the original - * lock which caused us to walk the lock chain or if the - * current lock is owned by the task which initiated the chain - * walk, we detected a deadlock. - */ + /* Deadlock detection */ if (lock == orig_lock || rt_mutex_owner(lock) == top_task) { debug_rt_mutex_deadlock(deadlock_detect, orig_waiter, lock); raw_spin_unlock(&lock->wait_lock); - ret = -EDEADLK; + ret = deadlock_detect ? -EDEADLK : 0; goto out_unlock_pi; } @@ -363,26 +281,11 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task, __rt_mutex_adjust_prio(task); } - /* - * Check whether the task which owns the current lock is pi - * blocked itself. If yes we store a pointer to the lock for - * the lock chain change detection above. After we dropped - * task->pi_lock next_lock cannot be dereferenced anymore. - */ - next_lock = task_blocked_on_lock(task); - raw_spin_unlock_irqrestore(&task->pi_lock, flags); top_waiter = rt_mutex_top_waiter(lock); raw_spin_unlock(&lock->wait_lock); - /* - * We reached the end of the lock chain. Stop right here. No - * point to go back just to figure that out. - */ - if (!next_lock) - goto out_put_task; - if (!detect_deadlock && waiter != top_waiter) goto out_put_task; @@ -493,21 +396,8 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, { struct task_struct *owner = rt_mutex_owner(lock); struct rt_mutex_waiter *top_waiter = waiter; - struct rt_mutex *next_lock; - int chain_walk = 0, res; unsigned long flags; - - /* - * Early deadlock detection. We really don't want the task to - * enqueue on itself just to untangle the mess later. It's not - * only an optimization. We drop the locks, so another waiter - * can come in before the chain walk detects the deadlock. So - * the other will detect the deadlock and return -EDEADLOCK, - * which is wrong, as the other waiter is not in a deadlock - * situation. 
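rt_mutex_adjust_prio_chain() walks task -> blocked-on lock -> lock owner with a depth limit and treats arriving back at the originating task as a deadlock. An illustrative standalone walk; the types and the limit here are assumptions made for the example.

#include <stdbool.h>
#include <stdio.h>

struct toy_lock;
struct toy_task { const char *name; struct toy_lock *blocked_on; };
struct toy_lock { struct toy_task *owner; };

#define MAX_LOCK_DEPTH 1024

static bool chain_deadlocks(struct toy_task *top_task, struct toy_lock *lock)
{
    for (int depth = 0; lock && depth < MAX_LOCK_DEPTH; depth++) {
        struct toy_task *owner = lock->owner;

        if (!owner)
            return false;                /* lock is free, chain ends */
        if (owner == top_task)
            return true;                 /* walked back to ourselves: deadlock */
        lock = owner->blocked_on;        /* keep walking the chain */
    }
    return false;
}

int main(void)
{
    struct toy_task a = { "A" }, b = { "B" };
    struct toy_lock la = { .owner = &a }, lb = { .owner = &b };

    a.blocked_on = &lb;                  /* A waits for B's lock... */
    b.blocked_on = &la;                  /* ...and B waits for A's lock */

    printf("deadlock: %s\n", chain_deadlocks(&a, a.blocked_on) ? "yes" : "no");
    return 0;
}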
- */ - if (owner == task) - return -EDEADLK; + int chain_walk = 0, res; raw_spin_lock_irqsave(&task->pi_lock, flags); __rt_mutex_adjust_prio(task); @@ -528,28 +418,20 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, if (!owner) return 0; - raw_spin_lock_irqsave(&owner->pi_lock, flags); if (waiter == rt_mutex_top_waiter(lock)) { + raw_spin_lock_irqsave(&owner->pi_lock, flags); plist_del(&top_waiter->pi_list_entry, &owner->pi_waiters); plist_add(&waiter->pi_list_entry, &owner->pi_waiters); __rt_mutex_adjust_prio(owner); if (owner->pi_blocked_on) chain_walk = 1; - } else if (debug_rt_mutex_detect_deadlock(waiter, detect_deadlock)) { - chain_walk = 1; + raw_spin_unlock_irqrestore(&owner->pi_lock, flags); } + else if (debug_rt_mutex_detect_deadlock(waiter, detect_deadlock)) + chain_walk = 1; - /* Store the lock on which owner is blocked or NULL */ - next_lock = task_blocked_on_lock(owner); - - raw_spin_unlock_irqrestore(&owner->pi_lock, flags); - /* - * Even if full deadlock detection is on, if the owner is not - * blocked itself, we can avoid finding this out in the chain - * walk. - */ - if (!chain_walk || !next_lock) + if (!chain_walk) return 0; /* @@ -561,8 +443,8 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, raw_spin_unlock(&lock->wait_lock); - res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock, - next_lock, waiter, task); + res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock, waiter, + task); raw_spin_lock(&lock->wait_lock); @@ -572,8 +454,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, /* * Wake up the next waiter on the lock. * - * Remove the top waiter from the current tasks pi waiter list and - * wake it up. + * Remove the top waiter from the current tasks waiter list and wake it up. * * Called with lock->wait_lock held. */ @@ -594,23 +475,10 @@ static void wakeup_next_waiter(struct rt_mutex *lock) */ plist_del(&waiter->pi_list_entry, ¤t->pi_waiters); - /* - * As we are waking up the top waiter, and the waiter stays - * queued on the lock until it gets the lock, this lock - * obviously has waiters. Just set the bit here and this has - * the added benefit of forcing all new tasks into the - * slow path making sure no task of lower priority than - * the top waiter can steal this lock. - */ - lock->owner = (void *) RT_MUTEX_HAS_WAITERS; + rt_mutex_set_owner(lock, NULL); raw_spin_unlock_irqrestore(¤t->pi_lock, flags); - /* - * It's safe to dereference waiter as it cannot go away as - * long as we hold lock->wait_lock. The waiter task needs to - * acquire it in order to dequeue the waiter. - */ wake_up_process(waiter->task); } @@ -625,8 +493,8 @@ static void remove_waiter(struct rt_mutex *lock, { int first = (waiter == rt_mutex_top_waiter(lock)); struct task_struct *owner = rt_mutex_owner(lock); - struct rt_mutex *next_lock = NULL; unsigned long flags; + int chain_walk = 0; raw_spin_lock_irqsave(¤t->pi_lock, flags); plist_del(&waiter->list_entry, &lock->wait_list); @@ -650,15 +518,15 @@ static void remove_waiter(struct rt_mutex *lock, } __rt_mutex_adjust_prio(owner); - /* Store the lock on which owner is blocked or NULL */ - next_lock = task_blocked_on_lock(owner); + if (owner->pi_blocked_on) + chain_walk = 1; raw_spin_unlock_irqrestore(&owner->pi_lock, flags); } WARN_ON(!plist_node_empty(&waiter->pi_list_entry)); - if (!next_lock) + if (!chain_walk) return; /* gets dropped in rt_mutex_adjust_prio_chain()! 
*/ @@ -666,7 +534,7 @@ static void remove_waiter(struct rt_mutex *lock, raw_spin_unlock(&lock->wait_lock); - rt_mutex_adjust_prio_chain(owner, 0, lock, next_lock, NULL, current); + rt_mutex_adjust_prio_chain(owner, 0, lock, NULL, current); raw_spin_lock(&lock->wait_lock); } @@ -679,7 +547,6 @@ static void remove_waiter(struct rt_mutex *lock, void rt_mutex_adjust_pi(struct task_struct *task) { struct rt_mutex_waiter *waiter; - struct rt_mutex *next_lock; unsigned long flags; raw_spin_lock_irqsave(&task->pi_lock, flags); @@ -689,13 +556,12 @@ void rt_mutex_adjust_pi(struct task_struct *task) raw_spin_unlock_irqrestore(&task->pi_lock, flags); return; } - next_lock = waiter->lock; + raw_spin_unlock_irqrestore(&task->pi_lock, flags); /* gets dropped in rt_mutex_adjust_prio_chain()! */ get_task_struct(task); - - rt_mutex_adjust_prio_chain(task, 0, NULL, next_lock, NULL, task); + rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task); } /** @@ -747,26 +613,6 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state, return ret; } -static void rt_mutex_handle_deadlock(int res, int detect_deadlock, - struct rt_mutex_waiter *w) -{ - /* - * If the result is not -EDEADLOCK or the caller requested - * deadlock detection, nothing to do here. - */ - if (res != -EDEADLOCK || detect_deadlock) - return; - - /* - * Yell lowdly and stop the task right here. - */ - rt_mutex_print_deadlock(w); - while (1) { - set_current_state(TASK_INTERRUPTIBLE); - schedule(); - } -} - /* * Slow path lock function: */ @@ -804,10 +650,8 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state, set_current_state(TASK_RUNNING); - if (unlikely(ret)) { + if (unlikely(ret)) remove_waiter(lock, &waiter); - rt_mutex_handle_deadlock(ret, detect_deadlock, &waiter); - } /* * try_to_take_rt_mutex() sets the waiter bit @@ -863,49 +707,12 @@ rt_mutex_slowunlock(struct rt_mutex *lock) rt_mutex_deadlock_account_unlock(current); - /* - * We must be careful here if the fast path is enabled. If we - * have no waiters queued we cannot set owner to NULL here - * because of: - * - * foo->lock->owner = NULL; - * rtmutex_lock(foo->lock); <- fast path - * free = atomic_dec_and_test(foo->refcnt); - * rtmutex_unlock(foo->lock); <- fast path - * if (free) - * kfree(foo); - * raw_spin_unlock(foo->lock->wait_lock); - * - * So for the fastpath enabled kernel: - * - * Nothing can set the waiters bit as long as we hold - * lock->wait_lock. So we do the following sequence: - * - * owner = rt_mutex_owner(lock); - * clear_rt_mutex_waiters(lock); - * raw_spin_unlock(&lock->wait_lock); - * if (cmpxchg(&lock->owner, owner, 0) == owner) - * return; - * goto retry; - * - * The fastpath disabled variant is simple as all access to - * lock->owner is serialized by lock->wait_lock: - * - * lock->owner = NULL; - * raw_spin_unlock(&lock->wait_lock); - */ - while (!rt_mutex_has_waiters(lock)) { - /* Drops lock->wait_lock ! */ - if (unlock_rt_mutex_safe(lock) == true) - return; - /* Relock the rtmutex and try again */ - raw_spin_lock(&lock->wait_lock); + if (!rt_mutex_has_waiters(lock)) { + lock->owner = NULL; + raw_spin_unlock(&lock->wait_lock); + return; } - /* - * The wakeup next waiter path does not suffer from the above - * race. See the comments there. 
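The long comment removed from rt_mutex_slowunlock() above spells out why the no-waiters unlock must clear the owner with a cmpxchg after wait_lock is dropped: the release may only succeed if nothing changed in between, otherwise the slow path is retried. A userspace sketch of that release step using C11 atomics (fake_rt_mutex and unlock_fast are illustrative, not the kernel API):

#include <stdatomic.h>
#include <stdio.h>

struct fake_rt_mutex { _Atomic(void *) owner; };

static int unlock_fast(struct fake_rt_mutex *lock, void *me)
{
    void *expected = me;

    /* succeeds only if the owner word is still exactly "me" with no flags set */
    return atomic_compare_exchange_strong(&lock->owner, &expected, NULL);
}

int main(void)
{
    struct fake_rt_mutex m;
    int me;

    atomic_store(&m.owner, &me);
    printf("%s\n", unlock_fast(&m, &me) ? "released" : "retry slow path");
    printf("%s\n", unlock_fast(&m, &me) ? "released" : "retry slow path");
    return 0;
}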
- */ wakeup_next_waiter(lock); raw_spin_unlock(&lock->wait_lock); @@ -1152,8 +959,7 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock, return 1; } - /* We enforce deadlock detection for futexes */ - ret = task_blocks_on_rt_mutex(lock, waiter, task, 1); + ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock); if (ret && !rt_mutex_owner(lock)) { /* diff --git a/kernel/rtmutex.h b/kernel/rtmutex.h index f6a1f3c133b..a1a1dd06421 100644 --- a/kernel/rtmutex.h +++ b/kernel/rtmutex.h @@ -24,8 +24,3 @@ #define debug_rt_mutex_print_deadlock(w) do { } while (0) #define debug_rt_mutex_detect_deadlock(w,d) (d) #define debug_rt_mutex_reset_waiter(w) do { } while (0) - -static inline void rt_mutex_print_deadlock(struct rt_mutex_waiter *w) -{ - WARN(1, "rtmutex deadlock detected\n"); -} diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c index 4a073539c58..64de5f8b0c9 100644 --- a/kernel/sched/auto_group.c +++ b/kernel/sched/auto_group.c @@ -77,6 +77,8 @@ static inline struct autogroup *autogroup_create(void) if (IS_ERR(tg)) goto out_free; + sched_online_group(tg, &root_task_group); + kref_init(&ag->kref); init_rwsem(&ag->lock); ag->id = atomic_inc_return(&autogroup_seq_nr); @@ -96,7 +98,6 @@ static inline struct autogroup *autogroup_create(void) #endif tg->autogroup = ag; - sched_online_group(tg, &root_task_group); return ag; out_free: diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e1b42944e38..960acc06a81 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1247,7 +1247,7 @@ out: * leave kernel. */ if (p->mm && printk_ratelimit()) { - printk_deferred("process %d (%s) no longer affine to cpu%d\n", + printk_sched("process %d (%s) no longer affine to cpu%d\n", task_pid_nr(p), p->comm, cpu); } } @@ -5321,6 +5321,7 @@ static int __cpuinit sched_cpu_active(struct notifier_block *nfb, unsigned long action, void *hcpu) { switch (action & ~CPU_TASKS_FROZEN) { + case CPU_STARTING: case CPU_DOWN_FAILED: set_cpu_active((long)hcpu, true); return NOTIFY_OK; diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c index b3f0a278336..1095e878a46 100644 --- a/kernel/sched/cpupri.c +++ b/kernel/sched/cpupri.c @@ -70,7 +70,8 @@ int cpupri_find(struct cpupri *cp, struct task_struct *p, int idx = 0; int task_pri = convert_prio(p->prio); - BUG_ON(task_pri >= CPUPRI_NR_PRIORITIES); + if (task_pri >= MAX_RT_PRIO) + return 0; for (idx = 0; idx < task_pri; idx++) { struct cpupri_vec *vec = &cp->pri_to_cpu[idx]; diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c index c23a8fd3614..1101d92635c 100644 --- a/kernel/sched/cputime.c +++ b/kernel/sched/cputime.c @@ -326,50 +326,50 @@ out: * softirq as those do not count in task exec_runtime any more. */ static void irqtime_account_process_tick(struct task_struct *p, int user_tick, - struct rq *rq, int ticks) + struct rq *rq) { - cputime_t scaled = cputime_to_scaled(cputime_one_jiffy); - u64 cputime = (__force u64) cputime_one_jiffy; + cputime_t one_jiffy_scaled = cputime_to_scaled(cputime_one_jiffy); u64 *cpustat = kcpustat_this_cpu->cpustat; if (steal_account_process_tick()) return; - cputime *= ticks; - scaled *= ticks; - if (irqtime_account_hi_update()) { - cpustat[CPUTIME_IRQ] += cputime; + cpustat[CPUTIME_IRQ] += (__force u64) cputime_one_jiffy; } else if (irqtime_account_si_update()) { - cpustat[CPUTIME_SOFTIRQ] += cputime; + cpustat[CPUTIME_SOFTIRQ] += (__force u64) cputime_one_jiffy; } else if (this_cpu_ksoftirqd() == p) { /* * ksoftirqd time do not get accounted in cpu_softirq_time. 
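The cputime hunk above also drops the batching of ticks in irqtime_account_process_tick(), going back to one accounting call per tick. A trivial sketch of the equivalence the batched form relies on, assuming an illustrative 10 ms jiffy:

#include <assert.h>
#include <stdio.h>

#define ONE_JIFFY_NS 10000000ULL        /* assumption: HZ=100, 10 ms per jiffy */

static unsigned long long account_looped(int ticks)
{
    unsigned long long total = 0;
    for (int i = 0; i < ticks; i++)
        total += ONE_JIFFY_NS;          /* one call per tick, as in the reverted idle path */
    return total;
}

static unsigned long long account_batched(int ticks)
{
    return ONE_JIFFY_NS * (unsigned long long)ticks;    /* single call, "cputime *= ticks" */
}

int main(void)
{
    assert(account_looped(7) == account_batched(7));
    printf("%llu\n", account_batched(7));
    return 0;
}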
* So, we have to handle it separately here. * Also, p->stime needs to be updated for ksoftirqd. */ - __account_system_time(p, cputime, scaled, CPUTIME_SOFTIRQ); + __account_system_time(p, cputime_one_jiffy, one_jiffy_scaled, + CPUTIME_SOFTIRQ); } else if (user_tick) { - account_user_time(p, cputime, scaled); + account_user_time(p, cputime_one_jiffy, one_jiffy_scaled); } else if (p == rq->idle) { - account_idle_time(cputime); + account_idle_time(cputime_one_jiffy); } else if (p->flags & PF_VCPU) { /* System time or guest time */ - account_guest_time(p, cputime, scaled); + account_guest_time(p, cputime_one_jiffy, one_jiffy_scaled); } else { - __account_system_time(p, cputime, scaled, CPUTIME_SYSTEM); + __account_system_time(p, cputime_one_jiffy, one_jiffy_scaled, + CPUTIME_SYSTEM); } } static void irqtime_account_idle_ticks(int ticks) { + int i; struct rq *rq = this_rq(); - irqtime_account_process_tick(current, 0, rq, ticks); + for (i = 0; i < ticks; i++) + irqtime_account_process_tick(current, 0, rq); } #else /* CONFIG_IRQ_TIME_ACCOUNTING */ static inline void irqtime_account_idle_ticks(int ticks) {} static inline void irqtime_account_process_tick(struct task_struct *p, int user_tick, - struct rq *rq, int nr_ticks) {} + struct rq *rq) {} #endif /* CONFIG_IRQ_TIME_ACCOUNTING */ /* @@ -464,7 +464,7 @@ void account_process_tick(struct task_struct *p, int user_tick) return; if (sched_clock_irqtime) { - irqtime_account_process_tick(p, user_tick, rq, 1); + irqtime_account_process_tick(p, user_tick, rq); return; } diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index f49e4d76daf..b38d50312e2 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -552,7 +552,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m) avg_atom = p->se.sum_exec_runtime; if (nr_switches) - avg_atom = div64_ul(avg_atom, nr_switches); + do_div(avg_atom, nr_switches); else avg_atom = -1LL; diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index c7ab8eab542..305ef886219 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5862,15 +5862,15 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p) struct cfs_rq *cfs_rq = cfs_rq_of(se); /* - * Ensure the task's vruntime is normalized, so that when it's + * Ensure the task's vruntime is normalized, so that when its * switched back to the fair class the enqueue_entity(.flags=0) will * do the right thing. * - * If it's on_rq, then the dequeue_entity(.flags=0) will already - * have normalized the vruntime, if it's !on_rq, then only when + * If it was on_rq, then the dequeue_entity(.flags=0) will already + * have normalized the vruntime, if it was !on_rq, then only when * the task is sleeping will it still have non-normalized vruntime. */ - if (!p->on_rq && p->state != TASK_RUNNING) { + if (!se->on_rq && p->state != TASK_RUNNING) { /* * Fix up our vruntime so that the current sleep doesn't * cause 'unlimited' sleep bonus. 
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index d48edfeeef2..8f4243e7342 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -852,7 +852,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq) if (!once) { once = true; - printk_deferred("sched: RT throttling activated\n"); + printk_sched("sched: RT throttling activated\n"); } } else { /* diff --git a/kernel/smp.c b/kernel/smp.c index 88797cb0d23..4dba0f7b72a 100644 --- a/kernel/smp.c +++ b/kernel/smp.c @@ -658,7 +658,7 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info), if (cond_func(cpu, info)) { ret = smp_call_function_single(cpu, func, info, wait); - WARN_ON_ONCE(ret); + WARN_ON_ONCE(!ret); } preempt_enable(); } diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 2c58ffa2798..207454a598f 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -148,11 +148,6 @@ static int min_percpu_pagelist_fract = 8; static int ngroups_max = NGROUPS_MAX; static const int cap_last_cap = CAP_LAST_CAP; -/*this is needed for proc_doulongvec_minmax of sysctl_hung_task_timeout_secs */ -#ifdef CONFIG_DETECT_HUNG_TASK -static unsigned long hung_task_timeout_max = (LONG_MAX/HZ); -#endif - #ifdef CONFIG_INOTIFY_USER #include <linux/inotify.h> #endif @@ -989,7 +984,6 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(unsigned long), .mode = 0644, .proc_handler = proc_dohung_task_timeout_secs, - .extra2 = &hung_task_timeout_max, }, { .procname = "hung_task_warnings", @@ -1067,16 +1061,6 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(sysctl_perf_event_sample_rate), .mode = 0644, .proc_handler = perf_proc_update_handler, - .extra1 = &one, - }, - { - .procname = "perf_cpu_time_max_percent", - .data = &sysctl_perf_cpu_time_max_percent, - .maxlen = sizeof(sysctl_perf_cpu_time_max_percent), - .mode = 0644, - .proc_handler = perf_cpu_time_max_percent_handler, - .extra1 = &zero, - .extra2 = &one_hundred, }, #endif #ifdef CONFIG_KMEMCHECK diff --git a/kernel/time.c b/kernel/time.c index d21398e6da8..d3617dbd3dc 100644 --- a/kernel/time.c +++ b/kernel/time.c @@ -496,20 +496,17 @@ EXPORT_SYMBOL(usecs_to_jiffies); * that a remainder subtract here would not do the right thing as the * resolution values don't fall on second boundries. I.e. the line: * nsec -= nsec % TICK_NSEC; is NOT a correct resolution rounding. - * Note that due to the small error in the multiplier here, this - * rounding is incorrect for sufficiently large values of tv_nsec, but - * well formed timespecs should have tv_nsec < NSEC_PER_SEC, so we're - * OK. * * Rather, we just shift the bits off the right. * * The >> (NSEC_JIFFIE_SC - SEC_JIFFIE_SC) converts the scaled nsec * value to a scaled second value. 
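The kernel/time.c comment above explains why the nanosecond part is rounded up to the next tick rather than truncated. Ignoring the fixed-point SEC_JIFFIE_SC/NSEC_JIFFIE_SC scaling the kernel uses for speed, the conversion reduces to the plain arithmetic below (HZ=100 is an assumption for the example, and ts_to_jiffies() is only a stand-in for timespec_to_jiffies()):

#include <stdio.h>

#define HZ            100UL
#define NSEC_PER_SEC  1000000000UL
#define TICK_NSEC     (NSEC_PER_SEC / HZ)

static unsigned long ts_to_jiffies(unsigned long sec, unsigned long nsec)
{
    /* "nsec + TICK_NSEC - 1": any fraction of a tick still costs a whole tick */
    return sec * HZ + (nsec + TICK_NSEC - 1) / TICK_NSEC;
}

int main(void)
{
    printf("%lu\n", ts_to_jiffies(2, 1));           /* 201: 1 ns rounds up to a full tick */
    printf("%lu\n", ts_to_jiffies(0, TICK_NSEC));   /* 1 */
    return 0;
}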
*/ -static unsigned long -__timespec_to_jiffies(unsigned long sec, long nsec) +unsigned long +timespec_to_jiffies(const struct timespec *value) { - nsec = nsec + TICK_NSEC - 1; + unsigned long sec = value->tv_sec; + long nsec = value->tv_nsec + TICK_NSEC - 1; if (sec >= MAX_SEC_IN_JIFFIES){ sec = MAX_SEC_IN_JIFFIES; @@ -520,13 +517,6 @@ __timespec_to_jiffies(unsigned long sec, long nsec) (NSEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC; } - -unsigned long -timespec_to_jiffies(const struct timespec *value) -{ - return __timespec_to_jiffies(value->tv_sec, value->tv_nsec); -} - EXPORT_SYMBOL(timespec_to_jiffies); void @@ -543,27 +533,31 @@ jiffies_to_timespec(const unsigned long jiffies, struct timespec *value) } EXPORT_SYMBOL(jiffies_to_timespec); -/* - * We could use a similar algorithm to timespec_to_jiffies (with a - * different multiplier for usec instead of nsec). But this has a - * problem with rounding: we can't exactly add TICK_NSEC - 1 to the - * usec value, since it's not necessarily integral. - * - * We could instead round in the intermediate scaled representation - * (i.e. in units of 1/2^(large scale) jiffies) but that's also - * perilous: the scaling introduces a small positive error, which - * combined with a division-rounding-upward (i.e. adding 2^(scale) - 1 - * units to the intermediate before shifting) leads to accidental - * overflow and overestimates. +/* Same for "timeval" * - * At the cost of one additional multiplication by a constant, just - * use the timespec implementation. + * Well, almost. The problem here is that the real system resolution is + * in nanoseconds and the value being converted is in micro seconds. + * Also for some machines (those that use HZ = 1024, in-particular), + * there is a LARGE error in the tick size in microseconds. + + * The solution we use is to do the rounding AFTER we convert the + * microsecond part. Thus the USEC_ROUND, the bits to be shifted off. + * Instruction wise, this should cost only an additional add with carry + * instruction above the way it was done above. 
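The wrapper removed just below reuses the timespec path for timevals by converting microseconds to nanoseconds first, at the cost of one extra multiply. A sketch under the same simplified arithmetic as the previous example (all constants and helper names are illustrative):

#include <stdio.h>

#define HZ             100UL
#define NSEC_PER_SEC   1000000000UL
#define NSEC_PER_USEC  1000UL
#define TICK_NSEC      (NSEC_PER_SEC / HZ)

static unsigned long ts_to_jiffies(unsigned long sec, unsigned long nsec)
{
    return sec * HZ + (nsec + TICK_NSEC - 1) / TICK_NSEC;
}

static unsigned long tv_to_jiffies(unsigned long sec, unsigned long usec)
{
    return ts_to_jiffies(sec, usec * NSEC_PER_USEC);    /* the extra multiply */
}

int main(void)
{
    printf("%lu\n", tv_to_jiffies(1, 10000));   /* 101: 10 ms is one extra tick at HZ=100 */
    return 0;
}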
*/ unsigned long timeval_to_jiffies(const struct timeval *value) { - return __timespec_to_jiffies(value->tv_sec, - value->tv_usec * NSEC_PER_USEC); + unsigned long sec = value->tv_sec; + long usec = value->tv_usec; + + if (sec >= MAX_SEC_IN_JIFFIES){ + sec = MAX_SEC_IN_JIFFIES; + usec = 0; + } + return (((u64)sec * SEC_CONVERSION) + + (((u64)usec * USEC_CONVERSION + USEC_ROUND) >> + (USEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC; } EXPORT_SYMBOL(timeval_to_jiffies); diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c index 77cfe6842b9..62da1374fef 100644 --- a/kernel/time/alarmtimer.c +++ b/kernel/time/alarmtimer.c @@ -480,26 +480,18 @@ static enum alarmtimer_type clock2alarm(clockid_t clockid) static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm, ktime_t now) { - unsigned long flags; struct k_itimer *ptr = container_of(alarm, struct k_itimer, it.alarm.alarmtimer); - enum alarmtimer_restart result = ALARMTIMER_NORESTART; - - spin_lock_irqsave(&ptr->it_lock, flags); - if ((ptr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE) { - if (posix_timer_event(ptr, 0) != 0) - ptr->it_overrun++; - } + if (posix_timer_event(ptr, 0) != 0) + ptr->it_overrun++; /* Re-add periodic timers */ if (ptr->it.alarm.interval.tv64) { ptr->it_overrun += alarm_forward(alarm, now, ptr->it.alarm.interval); - result = ALARMTIMER_RESTART; + return ALARMTIMER_RESTART; } - spin_unlock_irqrestore(&ptr->it_lock, flags); - - return result; + return ALARMTIMER_NORESTART; } /** @@ -609,14 +601,9 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, struct itimerspec *new_setting, struct itimerspec *old_setting) { - ktime_t exp; - if (!rtcdev) return -ENOTSUPP; - if (flags & ~TIMER_ABSTIME) - return -EINVAL; - if (old_setting) alarm_timer_get(timr, old_setting); @@ -626,16 +613,8 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, /* start the timer */ timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval); - exp = timespec_to_ktime(new_setting->it_value); - /* Convert (if necessary) to absolute time */ - if (flags != TIMER_ABSTIME) { - ktime_t now; - - now = alarm_bases[timr->it.alarm.alarmtimer.type].gettime(); - exp = ktime_add(now, exp); - } - - alarm_start(&timr->it.alarm.alarmtimer, exp); + alarm_start(&timr->it.alarm.alarmtimer, + timespec_to_ktime(new_setting->it_value)); return 0; } @@ -767,9 +746,6 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags, if (!alarmtimer_get_rtcdev()) return -ENOTSUPP; - if (flags & ~TIMER_ABSTIME) - return -EINVAL; - if (!capable(CAP_WAKE_ALARM)) return -EPERM; diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c index 58e8430165b..9df0e3b19f0 100644 --- a/kernel/time/clockevents.c +++ b/kernel/time/clockevents.c @@ -138,8 +138,7 @@ static int clockevents_increase_min_delta(struct clock_event_device *dev) { /* Nothing to do if we already reached the limit */ if (dev->min_delta_ns >= MIN_DELTA_LIMIT) { - printk_deferred(KERN_WARNING - "CE: Reprogramming failure. Giving up\n"); + printk(KERN_WARNING "CE: Reprogramming failure. Giving up\n"); dev->next_event.tv64 = KTIME_MAX; return -ETIME; } @@ -152,10 +151,9 @@ static int clockevents_increase_min_delta(struct clock_event_device *dev) if (dev->min_delta_ns > MIN_DELTA_LIMIT) dev->min_delta_ns = MIN_DELTA_LIMIT; - printk_deferred(KERN_WARNING - "CE: %s increased min_delta_ns to %llu nsec\n", - dev->name ? 
dev->name : "?", - (unsigned long long) dev->min_delta_ns); + printk(KERN_WARNING "CE: %s increased min_delta_ns to %llu nsec\n", + dev->name ? dev->name : "?", + (unsigned long long) dev->min_delta_ns); return 0; } diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c index 19ee339a1d0..f681da32a2f 100644 --- a/kernel/time/tick-broadcast.c +++ b/kernel/time/tick-broadcast.c @@ -594,13 +594,6 @@ again: cpumask_clear(tick_broadcast_force_mask); /* - * Sanity check. Catch the case where we try to broadcast to - * offline cpus. - */ - if (WARN_ON_ONCE(!cpumask_subset(tmpmask, cpu_online_mask))) - cpumask_and(tmpmask, tmpmask, cpu_online_mask); - - /* * Wakeup the cpus which have an expired event. */ tick_do_broadcast(tmpmask); @@ -841,12 +834,10 @@ void tick_shutdown_broadcast_oneshot(unsigned int *cpup) raw_spin_lock_irqsave(&tick_broadcast_lock, flags); /* - * Clear the broadcast masks for the dead cpu, but do not stop - * the broadcast device! + * Clear the broadcast mask flag for the dead cpu, but do not + * stop the broadcast device! */ cpumask_clear_cpu(cpu, tick_broadcast_oneshot_mask); - cpumask_clear_cpu(cpu, tick_broadcast_pending_mask); - cpumask_clear_cpu(cpu, tick_broadcast_force_mask); raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags); } diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c index c7243a86847..73b332ef4a5 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c @@ -721,10 +721,8 @@ static bool can_stop_idle_tick(int cpu, struct tick_sched *ts) return false; } - if (unlikely(ts->nohz_mode == NOHZ_MODE_INACTIVE)) { - ts->sleep_length = (ktime_t) { .tv64 = NSEC_PER_SEC/HZ }; + if (unlikely(ts->nohz_mode == NOHZ_MODE_INACTIVE)) return false; - } if (need_resched()) return false; diff --git a/kernel/timer.c b/kernel/timer.c index 20f45ea6f5a..15bc1b41021 100644 --- a/kernel/timer.c +++ b/kernel/timer.c @@ -822,7 +822,7 @@ unsigned long apply_slack(struct timer_list *timer, unsigned long expires) bit = find_last_bit(&mask, BITS_PER_LONG); - mask = (1UL << bit) - 1; + mask = (1 << bit) - 1; expires_limit = expires_limit & ~(mask); diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c index 686417ba5cd..b8b8560bfb9 100644 --- a/kernel/trace/blktrace.c +++ b/kernel/trace/blktrace.c @@ -685,7 +685,6 @@ void blk_trace_shutdown(struct request_queue *q) * blk_add_trace_rq - Add a trace for a request oriented action * @q: queue the io is for * @rq: the source request - * @nr_bytes: number of completed bytes * @what: the action * * Description: @@ -693,7 +692,7 @@ void blk_trace_shutdown(struct request_queue *q) * **/ static void blk_add_trace_rq(struct request_queue *q, struct request *rq, - unsigned int nr_bytes, u32 what) + u32 what) { struct blk_trace *bt = q->blk_trace; @@ -702,11 +701,11 @@ static void blk_add_trace_rq(struct request_queue *q, struct request *rq, if (rq->cmd_type == REQ_TYPE_BLOCK_PC) { what |= BLK_TC_ACT(BLK_TC_PC); - __blk_add_trace(bt, 0, nr_bytes, rq->cmd_flags, + __blk_add_trace(bt, 0, blk_rq_bytes(rq), rq->cmd_flags, what, rq->errors, rq->cmd_len, rq->cmd); } else { what |= BLK_TC_ACT(BLK_TC_FS); - __blk_add_trace(bt, blk_rq_pos(rq), nr_bytes, + __blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq), rq->cmd_flags, what, rq->errors, 0, NULL); } } @@ -714,34 +713,33 @@ static void blk_add_trace_rq(struct request_queue *q, struct request *rq, static void blk_add_trace_rq_abort(void *ignore, struct request_queue *q, struct request *rq) { - blk_add_trace_rq(q, rq, blk_rq_bytes(rq), BLK_TA_ABORT); + 
blk_add_trace_rq(q, rq, BLK_TA_ABORT); } static void blk_add_trace_rq_insert(void *ignore, struct request_queue *q, struct request *rq) { - blk_add_trace_rq(q, rq, blk_rq_bytes(rq), BLK_TA_INSERT); + blk_add_trace_rq(q, rq, BLK_TA_INSERT); } static void blk_add_trace_rq_issue(void *ignore, struct request_queue *q, struct request *rq) { - blk_add_trace_rq(q, rq, blk_rq_bytes(rq), BLK_TA_ISSUE); + blk_add_trace_rq(q, rq, BLK_TA_ISSUE); } static void blk_add_trace_rq_requeue(void *ignore, struct request_queue *q, struct request *rq) { - blk_add_trace_rq(q, rq, blk_rq_bytes(rq), BLK_TA_REQUEUE); + blk_add_trace_rq(q, rq, BLK_TA_REQUEUE); } static void blk_add_trace_rq_complete(void *ignore, struct request_queue *q, - struct request *rq, - unsigned int nr_bytes) + struct request *rq) { - blk_add_trace_rq(q, rq, nr_bytes, BLK_TA_COMPLETE); + blk_add_trace_rq(q, rq, BLK_TA_COMPLETE); } /** diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 401d9bd1fe4..4b93b841225 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -331,12 +331,12 @@ static void update_ftrace_function(void) func = ftrace_ops_list_func; } - update_function_graph_func(); - /* If there's no change, then do nothing more here */ if (ftrace_trace_function == func) return; + update_function_graph_func(); + /* * If we are using the list function, it doesn't care * about the function_trace_ops. @@ -4222,11 +4222,16 @@ static void ftrace_init_module(struct module *mod, ftrace_process_locs(mod, start, end); } -void ftrace_module_init(struct module *mod) +static int ftrace_module_notify_enter(struct notifier_block *self, + unsigned long val, void *data) { - ftrace_init_module(mod, mod->ftrace_callsites, - mod->ftrace_callsites + - mod->num_ftrace_callsites); + struct module *mod = data; + + if (val == MODULE_STATE_COMING) + ftrace_init_module(mod, mod->ftrace_callsites, + mod->ftrace_callsites + + mod->num_ftrace_callsites); + return 0; } static int ftrace_module_notify_exit(struct notifier_block *self, @@ -4240,6 +4245,11 @@ static int ftrace_module_notify_exit(struct notifier_block *self, return 0; } #else +static int ftrace_module_notify_enter(struct notifier_block *self, + unsigned long val, void *data) +{ + return 0; +} static int ftrace_module_notify_exit(struct notifier_block *self, unsigned long val, void *data) { @@ -4247,6 +4257,11 @@ static int ftrace_module_notify_exit(struct notifier_block *self, } #endif /* CONFIG_MODULES */ +struct notifier_block ftrace_module_enter_nb = { + .notifier_call = ftrace_module_notify_enter, + .priority = INT_MAX, /* Run before anything that can use kprobes */ +}; + struct notifier_block ftrace_module_exit_nb = { .notifier_call = ftrace_module_notify_exit, .priority = INT_MIN, /* Run after anything that can remove kprobes */ @@ -4283,6 +4298,10 @@ void __init ftrace_init(void) __start_mcount_loc, __stop_mcount_loc); + ret = register_module_notifier(&ftrace_module_enter_nb); + if (ret) + pr_warning("Failed to register trace ftrace module enter notifier\n"); + ret = register_module_notifier(&ftrace_module_exit_nb); if (ret) pr_warning("Failed to register trace ftrace module exit notifier\n"); diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c index 3d9fee3a80b..fd12cc56371 100644 --- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -543,7 +543,7 @@ static void rb_wake_up_waiters(struct irq_work *work) * as data is added to any of the @buffer's cpu buffers. Otherwise * it will wait for data to be added to a specific cpu buffer. 
*/ -int ring_buffer_wait(struct ring_buffer *buffer, int cpu) +void ring_buffer_wait(struct ring_buffer *buffer, int cpu) { struct ring_buffer_per_cpu *cpu_buffer; DEFINE_WAIT(wait); @@ -557,8 +557,6 @@ int ring_buffer_wait(struct ring_buffer *buffer, int cpu) if (cpu == RING_BUFFER_ALL_CPUS) work = &buffer->irq_work; else { - if (!cpumask_test_cpu(cpu, buffer->cpumask)) - return -ENODEV; cpu_buffer = buffer->buffers[cpu]; work = &cpu_buffer->irq_work; } @@ -593,7 +591,6 @@ int ring_buffer_wait(struct ring_buffer *buffer, int cpu) schedule(); finish_wait(&work->waiters, &wait); - return 0; } /** @@ -616,6 +613,10 @@ int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu, struct ring_buffer_per_cpu *cpu_buffer; struct rb_irq_work *work; + if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) || + (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu))) + return POLLIN | POLLRDNORM; + if (cpu == RING_BUFFER_ALL_CPUS) work = &buffer->irq_work; else { @@ -626,22 +627,8 @@ int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu, work = &cpu_buffer->irq_work; } - poll_wait(filp, &work->waiters, poll_table); work->waiters_pending = true; - /* - * There's a tight race between setting the waiters_pending and - * checking if the ring buffer is empty. Once the waiters_pending bit - * is set, the next event will wake the task up, but we can get stuck - * if there's only a single event in. - * - * FIXME: Ideally, we need a memory barrier on the writer side as well, - * but adding a memory barrier to all events will cause too much of a - * performance hit in the fast path. We only need a memory barrier when - * the buffer goes from empty to having content. But as this race is - * extremely small, and it's not a problem if another event comes in, we - * will fix it later. - */ - smp_mb(); + poll_wait(filp, &work->waiters, poll_table); if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) || (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu))) @@ -1994,7 +1981,7 @@ rb_add_time_stamp(struct ring_buffer_event *event, u64 delta) /** * rb_update_event - update event type and data - * @event: the event to update + * @event: the even to update * @type: the type of event * @length: the size of the event field in the ring buffer * @@ -3367,16 +3354,21 @@ static void rb_iter_reset(struct ring_buffer_iter *iter) struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer; /* Iterator usage is expected to have record disabled */ - iter->head_page = cpu_buffer->reader_page; - iter->head = cpu_buffer->reader_page->read; - - iter->cache_reader_page = iter->head_page; - iter->cache_read = cpu_buffer->read; - + if (list_empty(&cpu_buffer->reader_page->list)) { + iter->head_page = rb_set_head_page(cpu_buffer); + if (unlikely(!iter->head_page)) + return; + iter->head = iter->head_page->read; + } else { + iter->head_page = cpu_buffer->reader_page; + iter->head = cpu_buffer->reader_page->read; + } if (iter->head) iter->read_stamp = cpu_buffer->read_stamp; else iter->read_stamp = iter->head_page->page->time_stamp; + iter->cache_reader_page = cpu_buffer->reader_page; + iter->cache_read = cpu_buffer->read; } /** @@ -3769,14 +3761,12 @@ rb_iter_peek(struct ring_buffer_iter *iter, u64 *ts) return NULL; /* - * We repeat when a time extend is encountered or we hit - * the end of the page. Since the time extend is always attached - * to a data event, we should never loop more than three times. 
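The smp_mb() comment deleted from the poll path above describes a publish/check pairing: the poller announces waiters_pending before re-checking for data, while the writer publishes the event before checking waiters_pending, so at least one side always notices the other. A compact userspace sketch of that ordering with C11 seq_cst atomics standing in for the barriers (all names are illustrative):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool waiters_pending;
static atomic_int  nr_events;

/* poller side: announce interest first, then re-check for data */
static bool poll_side(void)
{
    atomic_store(&waiters_pending, true);
    return atomic_load(&nr_events) > 0;     /* data already there: report readable now */
}

/* writer side: publish the event, then see whether a wakeup is owed */
static bool writer_side(void)
{
    atomic_fetch_add(&nr_events, 1);
    return atomic_exchange(&waiters_pending, false);
}

int main(void)
{
    printf("poll sees data: %d\n", poll_side());        /* 0: nothing written yet */
    printf("writer must wake: %d\n", writer_side());    /* 1: poller announced itself */
    return 0;
}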
- * Once for going to next page, once on time extend, and - * finally once to get the event. - * (We never hit the following condition more than thrice). + * We repeat when a time extend is encountered. + * Since the time extend is always attached to a data event, + * we should never loop more than once. + * (We never hit the following condition more than twice). */ - if (RB_WARN_ON(cpu_buffer, ++nr_loops > 3)) + if (RB_WARN_ON(cpu_buffer, ++nr_loops > 2)) return NULL; if (rb_per_cpu_empty(cpu_buffer)) diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 3bf9864c313..4c41e22f162 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -423,9 +423,6 @@ int __trace_puts(unsigned long ip, const char *str, int size) struct print_entry *entry; unsigned long irq_flags; int alloc; - int pc; - - pc = preempt_count(); if (unlikely(tracing_selftest_running || tracing_disabled)) return 0; @@ -435,7 +432,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) local_save_flags(irq_flags); buffer = global_trace.trace_buffer.buffer; event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, alloc, - irq_flags, pc); + irq_flags, preempt_count()); if (!event) return 0; @@ -452,7 +449,6 @@ int __trace_puts(unsigned long ip, const char *str, int size) entry->buf[size] = '\0'; __buffer_unlock_commit(buffer, event); - ftrace_trace_stack(buffer, irq_flags, 4, pc); return size; } @@ -470,9 +466,6 @@ int __trace_bputs(unsigned long ip, const char *str) struct bputs_entry *entry; unsigned long irq_flags; int size = sizeof(struct bputs_entry); - int pc; - - pc = preempt_count(); if (unlikely(tracing_selftest_running || tracing_disabled)) return 0; @@ -480,7 +473,7 @@ int __trace_bputs(unsigned long ip, const char *str) local_save_flags(irq_flags); buffer = global_trace.trace_buffer.buffer; event = trace_buffer_lock_reserve(buffer, TRACE_BPUTS, size, - irq_flags, pc); + irq_flags, preempt_count()); if (!event) return 0; @@ -489,7 +482,6 @@ int __trace_bputs(unsigned long ip, const char *str) entry->str = str; __buffer_unlock_commit(buffer, event); - ftrace_trace_stack(buffer, irq_flags, 4, pc); return 1; } @@ -742,7 +734,7 @@ static struct { { trace_clock_local, "local", 1 }, { trace_clock_global, "global", 1 }, { trace_clock_counter, "counter", 0 }, - { trace_clock_jiffies, "uptime", 0 }, + { trace_clock_jiffies, "uptime", 1 }, { trace_clock, "perf", 1 }, ARCH_TRACE_CLOCKS }; @@ -1036,13 +1028,13 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu) } #endif /* CONFIG_TRACER_MAX_TRACE */ -static int default_wait_pipe(struct trace_iterator *iter) +static void default_wait_pipe(struct trace_iterator *iter) { /* Iterators are static, they should be filled or empty */ if (trace_buffer_iter(iter, iter->cpu_file)) - return 0; + return; - return ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file); + ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file); } #ifdef CONFIG_FTRACE_STARTUP_TEST @@ -1316,6 +1308,7 @@ void tracing_start(void) arch_spin_unlock(&ftrace_max_lock); + ftrace_start(); out: raw_spin_unlock_irqrestore(&global_trace.start_lock, flags); } @@ -1362,6 +1355,7 @@ void tracing_stop(void) struct ring_buffer *buffer; unsigned long flags; + ftrace_stop(); raw_spin_lock_irqsave(&global_trace.start_lock, flags); if (global_trace.stop_count++) goto out; @@ -1408,12 +1402,12 @@ static void tracing_stop_tr(struct trace_array *tr) void trace_stop_cmdline_recording(void); -static int trace_save_cmdline(struct task_struct *tsk) +static void 
trace_save_cmdline(struct task_struct *tsk) { unsigned pid, idx; if (!tsk->pid || unlikely(tsk->pid > PID_MAX_DEFAULT)) - return 0; + return; /* * It's not the end of the world if we don't get @@ -1422,7 +1416,7 @@ static int trace_save_cmdline(struct task_struct *tsk) * so if we miss here, then better luck next time. */ if (!arch_spin_trylock(&trace_cmdline_lock)) - return 0; + return; idx = map_pid_to_cmdline[tsk->pid]; if (idx == NO_CMDLINE_MAP) { @@ -1448,8 +1442,6 @@ static int trace_save_cmdline(struct task_struct *tsk) saved_tgids[idx] = tsk->tgid; arch_spin_unlock(&trace_cmdline_lock); - - return 1; } void trace_find_cmdline(int pid, char comm[]) @@ -1510,8 +1502,9 @@ void tracing_record_cmdline(struct task_struct *tsk) if (!__this_cpu_read(trace_cmdline_save)) return; - if (trace_save_cmdline(tsk)) - __this_cpu_write(trace_cmdline_save, false); + __this_cpu_write(trace_cmdline_save, false); + + trace_save_cmdline(tsk); } void @@ -4153,19 +4146,17 @@ tracing_poll_pipe(struct file *filp, poll_table *poll_table) * * Anyway, this is really very primitive wakeup. */ -int poll_wait_pipe(struct trace_iterator *iter) +void poll_wait_pipe(struct trace_iterator *iter) { set_current_state(TASK_INTERRUPTIBLE); /* sleep for 100 msecs, and try again. */ schedule_timeout(HZ / 10); - return 0; } /* Must be called with trace_types_lock mutex held. */ static int tracing_wait_pipe(struct file *filp) { struct trace_iterator *iter = filp->private_data; - int ret; while (trace_empty(iter)) { @@ -4175,13 +4166,10 @@ static int tracing_wait_pipe(struct file *filp) mutex_unlock(&iter->mutex); - ret = iter->trace->wait_pipe(iter); + iter->trace->wait_pipe(iter); mutex_lock(&iter->mutex); - if (ret) - return ret; - if (signal_pending(current)) return -EINTR; @@ -5115,12 +5103,8 @@ tracing_buffers_read(struct file *filp, char __user *ubuf, goto out_unlock; } mutex_unlock(&trace_types_lock); - ret = iter->trace->wait_pipe(iter); + iter->trace->wait_pipe(iter); mutex_lock(&trace_types_lock); - if (ret) { - size = ret; - goto out_unlock; - } if (signal_pending(current)) { size = -EINTR; goto out_unlock; @@ -5332,10 +5316,8 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos, goto out; } mutex_unlock(&trace_types_lock); - ret = iter->trace->wait_pipe(iter); + iter->trace->wait_pipe(iter); mutex_lock(&trace_types_lock); - if (ret) - goto out; if (signal_pending(current)) { ret = -EINTR; goto out; diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 79462443658..691cb4fba7e 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -342,7 +342,7 @@ struct tracer { void (*stop)(struct trace_array *tr); void (*open)(struct trace_iterator *iter); void (*pipe_open)(struct trace_iterator *iter); - int (*wait_pipe)(struct trace_iterator *iter); + void (*wait_pipe)(struct trace_iterator *iter); void (*close)(struct trace_iterator *iter); void (*pipe_close)(struct trace_iterator *iter); ssize_t (*read)(struct trace_iterator *iter, @@ -557,7 +557,7 @@ void trace_init_global_iter(struct trace_iterator *iter); void tracing_iter_reset(struct trace_iterator *iter, int cpu); -int poll_wait_pipe(struct trace_iterator *iter); +void poll_wait_pipe(struct trace_iterator *iter); void ftrace(struct trace_array *tr, struct trace_array_cpu *data, diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c index 57b67b1f24d..26dc348332b 100644 --- a/kernel/trace/trace_clock.c +++ b/kernel/trace/trace_clock.c @@ -59,14 +59,13 @@ u64 notrace trace_clock(void) /* * trace_jiffy_clock(): Simply use jiffies 
as a clock counter. - * Note that this use of jiffies_64 is not completely safe on - * 32-bit systems. But the window is tiny, and the effect if - * we are affected is that we will have an obviously bogus - * timestamp on a trace event - i.e. not life threatening. */ u64 notrace trace_clock_jiffies(void) { - return jiffies_64_to_clock_t(jiffies_64 - INITIAL_JIFFIES); + u64 jiffy = jiffies - INITIAL_JIFFIES; + + /* Return nsecs */ + return (u64)jiffies_to_usecs(jiffy) * 1000ULL; } /* diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c index 001b349af93..3d18aadef49 100644 --- a/kernel/trace/trace_events.c +++ b/kernel/trace/trace_events.c @@ -27,6 +27,12 @@ DEFINE_MUTEX(event_mutex); +DEFINE_MUTEX(event_storage_mutex); +EXPORT_SYMBOL_GPL(event_storage_mutex); + +char event_storage[EVENT_STORAGE_SIZE]; +EXPORT_SYMBOL_GPL(event_storage); + LIST_HEAD(ftrace_events); static LIST_HEAD(ftrace_common_fields); @@ -1854,16 +1860,6 @@ static void trace_module_add_events(struct module *mod) struct ftrace_module_file_ops *file_ops = NULL; struct ftrace_event_call **call, **start, **end; - if (!mod->num_trace_events) - return; - - /* Don't add infrastructure for mods without tracepoints */ - if (trace_module_has_bad_taint(mod)) { - pr_err("%s: module has bad taint, not creating trace events\n", - mod->name); - return; - } - start = mod->trace_events; end = mod->trace_events + mod->num_trace_events; diff --git a/kernel/trace/trace_export.c b/kernel/trace/trace_export.c index d7d0b50b1b7..d21a7467008 100644 --- a/kernel/trace/trace_export.c +++ b/kernel/trace/trace_export.c @@ -95,12 +95,15 @@ static void __always_unused ____ftrace_check_##name(void) \ #undef __array #define __array(type, item, len) \ do { \ - char *type_str = #type"["__stringify(len)"]"; \ BUILD_BUG_ON(len > MAX_FILTER_STR_VAL); \ - ret = trace_define_field(event_call, type_str, #item, \ + mutex_lock(&event_storage_mutex); \ + snprintf(event_storage, sizeof(event_storage), \ + "%s[%d]", #type, len); \ + ret = trace_define_field(event_call, event_storage, #item, \ offsetof(typeof(field), item), \ sizeof(field.item), \ is_signed_type(type), filter_type); \ + mutex_unlock(&event_storage_mutex); \ if (ret) \ return ret; \ } while (0); diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c index bdb9ee0af99..322e1646107 100644 --- a/kernel/trace/trace_syscalls.c +++ b/kernel/trace/trace_syscalls.c @@ -312,7 +312,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id) int size; syscall_nr = trace_get_syscall_nr(current, regs); - if (syscall_nr < 0 || syscall_nr >= NR_syscalls) + if (syscall_nr < 0) return; if (!test_bit(syscall_nr, tr->enabled_enter_syscalls)) return; @@ -354,7 +354,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret) int syscall_nr; syscall_nr = trace_get_syscall_nr(current, regs); - if (syscall_nr < 0 || syscall_nr >= NR_syscalls) + if (syscall_nr < 0) return; if (!test_bit(syscall_nr, tr->enabled_exit_syscalls)) return; @@ -557,7 +557,7 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id) int size; syscall_nr = trace_get_syscall_nr(current, regs); - if (syscall_nr < 0 || syscall_nr >= NR_syscalls) + if (syscall_nr < 0) return; if (!test_bit(syscall_nr, enabled_perf_enter_syscalls)) return; @@ -633,7 +633,7 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret) int size; syscall_nr = trace_get_syscall_nr(current, regs); - if (syscall_nr < 0 || syscall_nr >= NR_syscalls) + if 
(syscall_nr < 0) return; if (!test_bit(syscall_nr, enabled_perf_exit_syscalls)) return; diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c index 63630aef3bd..29f26540e9c 100644 --- a/kernel/tracepoint.c +++ b/kernel/tracepoint.c @@ -631,25 +631,17 @@ void tracepoint_iter_reset(struct tracepoint_iter *iter) EXPORT_SYMBOL_GPL(tracepoint_iter_reset); #ifdef CONFIG_MODULES -bool trace_module_has_bad_taint(struct module *mod) -{ - return mod->taints & ~((1 << TAINT_OOT_MODULE) | (1 << TAINT_CRAP)); -} - static int tracepoint_module_coming(struct module *mod) { struct tp_module *tp_mod, *iter; int ret = 0; - if (!mod->num_tracepoints) - return 0; - /* * We skip modules that taint the kernel, especially those with different * module headers (for forced load), to make sure we don't cause a crash. * Staging and out-of-tree GPL modules are fine. */ - if (trace_module_has_bad_taint(mod)) + if (mod->taints & ~((1 << TAINT_OOT_MODULE) | (1 << TAINT_CRAP))) return 0; mutex_lock(&tracepoints_mutex); tp_mod = kmalloc(sizeof(struct tp_module), GFP_KERNEL); @@ -687,9 +679,6 @@ static int tracepoint_module_going(struct module *mod) { struct tp_module *pos; - if (!mod->num_tracepoints) - return 0; - mutex_lock(&tracepoints_mutex); tracepoint_update_probe_range(mod->tracepoints_ptrs, mod->tracepoints_ptrs + mod->num_tracepoints); diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c index 9bea1d7dd21..9064b919a40 100644 --- a/kernel/user_namespace.c +++ b/kernel/user_namespace.c @@ -148,7 +148,7 @@ static u32 map_id_range_down(struct uid_gid_map *map, u32 id, u32 count) /* Find the matching extent */ extents = map->nr_extents; - smp_rmb(); + smp_read_barrier_depends(); for (idx = 0; idx < extents; idx++) { first = map->extent[idx].first; last = first + map->extent[idx].count - 1; @@ -172,7 +172,7 @@ static u32 map_id_down(struct uid_gid_map *map, u32 id) /* Find the matching extent */ extents = map->nr_extents; - smp_rmb(); + smp_read_barrier_depends(); for (idx = 0; idx < extents; idx++) { first = map->extent[idx].first; last = first + map->extent[idx].count - 1; @@ -195,7 +195,7 @@ static u32 map_id_up(struct uid_gid_map *map, u32 id) /* Find the matching extent */ extents = map->nr_extents; - smp_rmb(); + smp_read_barrier_depends(); for (idx = 0; idx < extents; idx++) { first = map->extent[idx].lower_first; last = first + map->extent[idx].count - 1; @@ -611,8 +611,9 @@ static ssize_t map_write(struct file *file, const char __user *buf, * were written before the count of the extents. * * To achieve this smp_wmb() is used on guarantee the write - * order and smp_rmb() is guaranteed that we don't have crazy - * architectures returning stale data. + * order and smp_read_barrier_depends() is guaranteed that we + * don't have crazy architectures returning stale data. + * */ mutex_lock(&id_map_mutex); diff --git a/kernel/workqueue.c b/kernel/workqueue.c index c2f9d6ca7e5..db7a6ac7c0a 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -1881,12 +1881,6 @@ static void send_mayday(struct work_struct *work) /* mayday mayday mayday */ if (list_empty(&pwq->mayday_node)) { - /* - * If @pwq is for an unbound wq, its base ref may be put at - * any time due to an attribute change. Pin @pwq until the - * rescuer is done with it. 
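The get_pwq()/put_pwq() pair removed from the workqueue code above pins the pool_workqueue while it sits on the mayday list, so an attribute change cannot free it under the rescuer. The general pattern, taking a reference before publishing an object to another context and dropping it only when that context is done, in miniature (illustrative names, not the kernel refcount API):

#include <stdio.h>

struct pinned { int refcnt; int data; };

static void get_ref(struct pinned *p) { p->refcnt++; }

static int put_ref(struct pinned *p)    /* returns 1 when the last reference dropped */
{
    return --p->refcnt == 0;
}

int main(void)
{
    struct pinned pwq = { .refcnt = 1, .data = 42 };

    get_ref(&pwq);                      /* queue side: pin before putting it on the list */
    if (put_ref(&pwq))                  /* original owner lets go early */
        printf("freed too early!\n");   /* never printed: the pin keeps it alive */
    printf("rescuer still sees %d\n", pwq.data);
    if (put_ref(&pwq))                  /* rescuer's put: now it can really be released */
        printf("release %d\n", pwq.data);
    return 0;
}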
- */ - get_pwq(pwq); list_add_tail(&pwq->mayday_node, &wq->maydays); wake_up_process(wq->rescuer->task); } @@ -2362,7 +2356,6 @@ static int rescuer_thread(void *__rescuer) struct worker *rescuer = __rescuer; struct workqueue_struct *wq = rescuer->rescue_wq; struct list_head *scheduled = &rescuer->scheduled; - bool should_stop; set_user_nice(current, RESCUER_NICE_LEVEL); @@ -2374,15 +2367,11 @@ static int rescuer_thread(void *__rescuer) repeat: set_current_state(TASK_INTERRUPTIBLE); - /* - * By the time the rescuer is requested to stop, the workqueue - * shouldn't have any work pending, but @wq->maydays may still have - * pwq(s) queued. This can happen by non-rescuer workers consuming - * all the work items before the rescuer got to them. Go through - * @wq->maydays processing before acting on should_stop so that the - * list is always empty on exit. - */ - should_stop = kthread_should_stop(); + if (kthread_should_stop()) { + __set_current_state(TASK_RUNNING); + rescuer->task->flags &= ~PF_WQ_WORKER; + return 0; + } /* see whether any pwq is asking for help */ spin_lock_irq(&wq_mayday_lock); @@ -2414,12 +2403,6 @@ repeat: process_scheduled_works(rescuer); /* - * Put the reference grabbed by send_mayday(). @pool won't - * go away while we're holding its lock. - */ - put_pwq(pwq); - - /* * Leave this pool. If keep_working() is %true, notify a * regular worker; otherwise, we end up with 0 concurrency * and stalling the execution. @@ -2434,12 +2417,6 @@ repeat: spin_unlock_irq(&wq_mayday_lock); - if (should_stop) { - __set_current_state(TASK_RUNNING); - rescuer->task->flags &= ~PF_WQ_WORKER; - return 0; - } - /* rescuers should never participate in concurrency management */ WARN_ON_ONCE(!(rescuer->flags & WORKER_NOT_RUNNING)); schedule(); @@ -3373,7 +3350,6 @@ int workqueue_sysfs_register(struct workqueue_struct *wq) } } - dev_set_uevent_suppress(&wq_dev->dev, false); kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD); return 0; } @@ -4067,8 +4043,7 @@ static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu, if (!pwq) { pr_warning("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n", wq->name); - mutex_lock(&wq->mutex); - goto use_dfl_pwq; + goto out_unlock; } /* @@ -4968,7 +4943,7 @@ static void __init wq_numa_init(void) BUG_ON(!tbl); for_each_node(node) - BUG_ON(!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL, + BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL, node_online(node) ? 
node : NUMA_NO_NODE)); for_each_possible_cpu(cpu) { diff --git a/lib/bitmap.c b/lib/bitmap.c index e5c4ebe586b..06f7e4fe8d2 100644 --- a/lib/bitmap.c +++ b/lib/bitmap.c @@ -131,9 +131,7 @@ void __bitmap_shift_right(unsigned long *dst, lower = src[off + k]; if (left && off + k == lim - 1) lower &= mask; - dst[k] = lower >> rem; - if (rem) - dst[k] |= upper << (BITS_PER_LONG - rem); + dst[k] = upper << (BITS_PER_LONG - rem) | lower >> rem; if (left && k == lim - 1) dst[k] &= mask; } @@ -174,9 +172,7 @@ void __bitmap_shift_left(unsigned long *dst, upper = src[k]; if (left && k == lim - 1) upper &= (1UL << left) - 1; - dst[k + off] = upper << rem; - if (rem) - dst[k + off] |= lower >> (BITS_PER_LONG - rem); + dst[k + off] = lower >> (BITS_PER_LONG - rem) | upper << rem; if (left && k + off == lim - 1) dst[k + off] &= (1UL << left) - 1; } diff --git a/lib/btree.c b/lib/btree.c index 4264871ea1a..f9a484676cb 100644 --- a/lib/btree.c +++ b/lib/btree.c @@ -198,7 +198,6 @@ EXPORT_SYMBOL_GPL(btree_init); void btree_destroy(struct btree_head *head) { - mempool_free(head->node, head->mempool); mempool_destroy(head->mempool); head->mempool = NULL; } diff --git a/lib/idr.c b/lib/idr.c index a3bfde8ad60..cca4b9302a7 100644 --- a/lib/idr.c +++ b/lib/idr.c @@ -250,7 +250,7 @@ static int sub_alloc(struct idr *idp, int *starting_id, struct idr_layer **pa, id = (id | ((1 << (IDR_BITS * l)) - 1)) + 1; /* if already at the top layer, we need to grow */ - if (id > idr_max(idp->layers)) { + if (id >= 1 << (idp->layers * IDR_BITS)) { *starting_id = id; return -EAGAIN; } @@ -829,10 +829,12 @@ void *idr_replace(struct idr *idp, void *ptr, int id) if (!p) return ERR_PTR(-EINVAL); - if (id > idr_max(p->layer + 1)) + n = (p->layer+1) * IDR_BITS; + + if (id >= (1 << n)) return ERR_PTR(-EINVAL); - n = p->layer * IDR_BITS; + n -= IDR_BITS; while ((n > 0) && p) { p = p->ary[(id >> n) & IDR_MASK]; n -= IDR_BITS; diff --git a/lib/lzo/lzo1x_decompress_safe.c b/lib/lzo/lzo1x_decompress_safe.c index a1c387f6afb..569985d522d 100644 --- a/lib/lzo/lzo1x_decompress_safe.c +++ b/lib/lzo/lzo1x_decompress_safe.c @@ -25,16 +25,6 @@ #define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun #define TEST_LB(m_pos) if ((m_pos) < out) goto lookbehind_overrun -/* This MAX_255_COUNT is the maximum number of times we can add 255 to a base - * count without overflowing an integer. The multiply will overflow when - * multiplying 255 by more than MAXINT/255. The sum will overflow earlier - * depending on the base count. Since the base count is taken from a u8 - * and a few bits, it is safe to assume that it will always be lower than - * or equal to 2*255, thus we can always prevent any overflow by accepting - * two less 255 steps. See Documentation/lzo.txt for more information. 
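The comment removed above derives MAX_255_COUNT: each zero byte in the stream adds 255 to a length counter, so the zero-run must be capped before the multiply or the size_t sum can wrap. A standalone sketch of the guard and the (offset << 8) - offset arithmetic (add_run() and its parameters are illustrative, not the decompressor's real interface):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_255_COUNT  ((((size_t)~0) / 255) - 2)

static int add_run(size_t *t, size_t zero_bytes, unsigned char next, unsigned base)
{
    if (zero_bytes > MAX_255_COUNT)
        return -1;                          /* would overflow: reject the stream */
    /* (offset << 8) - offset in the kernel code is just offset * 255 */
    *t += zero_bytes * 255 + base + next;
    return 0;
}

int main(void)
{
    size_t t = 0;

    if (add_run(&t, 3, 20, 15) == 0)        /* 3 zero bytes, next byte 20, base 15 */
        printf("t=%zu\n", t);               /* 3*255 + 15 + 20 = 800 */
    printf("guard trips: %d\n", add_run(&t, MAX_255_COUNT + 1, 0, 15));
    return 0;
}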
- */ -#define MAX_255_COUNT ((((size_t)~0) / 255) - 2) - int lzo1x_decompress_safe(const unsigned char *in, size_t in_len, unsigned char *out, size_t *out_len) { @@ -65,19 +55,12 @@ int lzo1x_decompress_safe(const unsigned char *in, size_t in_len, if (t < 16) { if (likely(state == 0)) { if (unlikely(t == 0)) { - size_t offset; - const unsigned char *ip_last = ip; - while (unlikely(*ip == 0)) { + t += 255; ip++; NEED_IP(1); } - offset = ip - ip_last; - if (unlikely(offset > MAX_255_COUNT)) - return LZO_E_ERROR; - - offset = (offset << 8) - offset; - t += offset + 15 + *ip++; + t += 15 + *ip++; } t += 3; copy_literal_run: @@ -133,19 +116,12 @@ copy_literal_run: } else if (t >= 32) { t = (t & 31) + (3 - 1); if (unlikely(t == 2)) { - size_t offset; - const unsigned char *ip_last = ip; - while (unlikely(*ip == 0)) { + t += 255; ip++; NEED_IP(1); } - offset = ip - ip_last; - if (unlikely(offset > MAX_255_COUNT)) - return LZO_E_ERROR; - - offset = (offset << 8) - offset; - t += offset + 31 + *ip++; + t += 31 + *ip++; NEED_IP(2); } m_pos = op - 1; @@ -158,19 +134,12 @@ copy_literal_run: m_pos -= (t & 8) << 11; t = (t & 7) + (3 - 1); if (unlikely(t == 2)) { - size_t offset; - const unsigned char *ip_last = ip; - while (unlikely(*ip == 0)) { + t += 255; ip++; NEED_IP(1); } - offset = ip - ip_last; - if (unlikely(offset > MAX_255_COUNT)) - return LZO_E_ERROR; - - offset = (offset << 8) - offset; - t += offset + 7 + *ip++; + t += 7 + *ip++; NEED_IP(2); } next = get_unaligned_le16(ip); diff --git a/lib/nlattr.c b/lib/nlattr.c index 10ad042d01b..18eca7809b0 100644 --- a/lib/nlattr.c +++ b/lib/nlattr.c @@ -201,8 +201,8 @@ int nla_parse(struct nlattr **tb, int maxtype, const struct nlattr *head, } if (unlikely(rem > 0)) - pr_warn_ratelimited("netlink: %d bytes leftover after parsing attributes in process `%s'.\n", - rem, current->comm); + printk(KERN_WARNING "netlink: %d bytes leftover after parsing " + "attributes.\n", rem); err = 0; errout: @@ -303,15 +303,9 @@ int nla_memcmp(const struct nlattr *nla, const void *data, */ int nla_strcmp(const struct nlattr *nla, const char *str) { - int len = strlen(str); - char *buf = nla_data(nla); - int attrlen = nla_len(nla); - int d; + int len = strlen(str) + 1; + int d = nla_len(nla) - len; - if (attrlen > 0 && buf[attrlen - 1] == '\0') - attrlen--; - - d = attrlen - len; if (d == 0) d = memcmp(nla_data(nla), str, len); diff --git a/lib/string.c b/lib/string.c index 43d0781daf4..e5878de4f10 100644 --- a/lib/string.c +++ b/lib/string.c @@ -586,22 +586,6 @@ void *memset(void *s, int c, size_t count) EXPORT_SYMBOL(memset); #endif -/** - * memzero_explicit - Fill a region of memory (e.g. sensitive - * keying data) with 0s. - * @s: Pointer to the start of the area. - * @count: The size of the area. - * - * memzero_explicit() doesn't need an arch-specific version as - * it just invokes the one of memset() implicitly. - */ -void memzero_explicit(void *s, size_t count) -{ - memset(s, 0, count); - OPTIMIZER_HIDE_VAR(s); -} -EXPORT_SYMBOL(memzero_explicit); - #ifndef __HAVE_ARCH_MEMCPY /** * memcpy - Copy one area of memory to another diff --git a/mm/backing-dev.c b/mm/backing-dev.c index a25744357c3..3c6b16c883c 100644 --- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -287,9 +287,6 @@ int bdi_has_dirty_io(struct backing_dev_info *bdi) * Note, we wouldn't bother setting up the timer, but this function is on the * fast-path (used by '__mark_inode_dirty()'), so we save few context switches * by delaying the wake-up. 
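memzero_explicit(), deleted from lib/string.c in the hunk above, exists so that wiping sensitive buffers is not optimized away as a dead store. A userspace approximation calls memset() through a volatile function pointer instead of using OPTIMIZER_HIDE_VAR(); this relies on common compiler behaviour, not a language guarantee, and is only a sketch:

#include <string.h>
#include <stdio.h>

/* the compiler cannot prove what a volatile pointer calls, so the store stays */
static void *(*const volatile memset_v)(void *, int, size_t) = memset;

static void memzero_explicit_demo(void *s, size_t count)
{
    memset_v(s, 0, count);
}

int main(void)
{
    char key[16] = "super-secret";

    memzero_explicit_demo(key, sizeof(key));    /* wipe key material before it goes out of scope */
    printf("%d\n", key[0]);                     /* 0 */
    return 0;
}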
- * - * We have to be careful not to postpone flush work if it is scheduled for - * earlier. Thus we use queue_delayed_work(). */ void bdi_wakeup_thread_delayed(struct backing_dev_info *bdi) { diff --git a/mm/compaction.c b/mm/compaction.c index 2104c458f84..e37cb678cc6 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -252,6 +252,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, { int nr_scanned = 0, total_isolated = 0; struct page *cursor, *valid_page = NULL; + unsigned long nr_strict_required = end_pfn - blockpfn; unsigned long flags; bool locked = false; @@ -264,12 +265,11 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, nr_scanned++; if (!pfn_valid_within(blockpfn)) - goto isolate_fail; - + continue; if (!valid_page) valid_page = page; if (!PageBuddy(page)) - goto isolate_fail; + continue; /* * The zone lock must be held to isolate freepages. @@ -290,10 +290,12 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, /* Recheck this is a buddy page under lock */ if (!PageBuddy(page)) - goto isolate_fail; + continue; /* Found a free page, break it into order-0 pages */ isolated = split_free_page(page); + if (!isolated && strict) + break; total_isolated += isolated; for (i = 0; i < isolated; i++) { list_add(&page->lru, freelist); @@ -304,15 +306,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, if (isolated) { blockpfn += isolated - 1; cursor += isolated - 1; - continue; } - -isolate_fail: - if (strict) - break; - else - continue; - } trace_mm_compaction_isolate_freepages(nr_scanned, total_isolated); @@ -322,7 +316,7 @@ isolate_fail: * pages requested were isolated. If there were any failures, 0 is * returned and CMA will fail. */ - if (strict && blockpfn < end_pfn) + if (strict && nr_strict_required > total_isolated) total_isolated = 0; if (locked) @@ -658,21 +652,17 @@ static void isolate_freepages(struct zone *zone, struct compact_control *cc) { struct page *page; - unsigned long high_pfn, low_pfn, pfn, z_end_pfn; + unsigned long high_pfn, low_pfn, pfn, z_end_pfn, end_pfn; int nr_freepages = cc->nr_freepages; struct list_head *freelist = &cc->freepages; /* * Initialise the free scanner. The starting point is where we last - * successfully isolated from, zone-cached value, or the end of the - * zone when isolating for the first time. We need this aligned to - * the pageblock boundary, because we do pfn -= pageblock_nr_pages - * in the for loop. - * The low boundary is the end of the pageblock the migration scanner - * is using. + * scanned from (or the end of the zone if starting). The low point + * is the end of the pageblock the migration scanner is using. */ - pfn = cc->free_pfn & ~(pageblock_nr_pages-1); - low_pfn = ALIGN(cc->migrate_pfn + 1, pageblock_nr_pages); + pfn = cc->free_pfn; + low_pfn = cc->migrate_pfn + pageblock_nr_pages; /* * Take care that if the migration scanner is at the end of the zone @@ -688,10 +678,9 @@ static void isolate_freepages(struct zone *zone, * pages on cc->migratepages. We stop searching if the migrate * and free page scanners meet or enough free pages are isolated. 
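The isolate_fail handling removed from isolate_freepages_block() above implements an all-or-nothing rule for strict (CMA) callers: a page that cannot be isolated aborts a strict scan but is merely skipped in best-effort mode, and an incomplete strict scan reports zero. The same pattern in miniature, with plain integers standing in for page frames:

#include <stdio.h>

static int isolate_range(const int *usable, int n, int strict)
{
    int isolated = 0;

    for (int i = 0; i < n; i++) {
        if (!usable[i]) {
            if (strict)
                break;          /* strict: stop at the first failure */
            continue;           /* best effort: skip and keep scanning */
        }
        isolated++;
    }
    if (strict && isolated != n)
        return 0;               /* strict callers get all or nothing */
    return isolated;
}

int main(void)
{
    int range[] = { 1, 1, 0, 1, 1 };

    printf("best effort: %d\n", isolate_range(range, 5, 0));    /* 4 */
    printf("strict:      %d\n", isolate_range(range, 5, 1));    /* 0 */
    return 0;
}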
*/ - for (; pfn >= low_pfn && cc->nr_migratepages > nr_freepages; + for (; pfn > low_pfn && cc->nr_migratepages > nr_freepages; pfn -= pageblock_nr_pages) { unsigned long isolated; - unsigned long end_pfn; if (!pfn_valid(pfn)) continue; @@ -719,10 +708,13 @@ static void isolate_freepages(struct zone *zone, isolated = 0; /* - * Take care when isolating in last pageblock of a zone which - * ends in the middle of a pageblock. + * As pfn may not start aligned, pfn+pageblock_nr_page + * may cross a MAX_ORDER_NR_PAGES boundary and miss + * a pfn_valid check. Ensure isolate_freepages_block() + * only scans within a pageblock */ - end_pfn = min(pfn + pageblock_nr_pages, z_end_pfn); + end_pfn = ALIGN(pfn + 1, pageblock_nr_pages); + end_pfn = min(end_pfn, z_end_pfn); isolated = isolate_freepages_block(cc, pfn, end_pfn, freelist, false); nr_freepages += isolated; @@ -741,14 +733,7 @@ static void isolate_freepages(struct zone *zone, /* split_free_page does not map the pages */ map_pages(freelist); - /* - * If we crossed the migrate scanner, we want to keep it that way - * so that compact_finished() may detect this - */ - if (pfn < low_pfn) - cc->free_pfn = max(pfn, zone->zone_start_pfn); - else - cc->free_pfn = high_pfn; + cc->free_pfn = high_pfn; cc->nr_freepages = nr_freepages; } @@ -957,14 +942,6 @@ static int compact_zone(struct zone *zone, struct compact_control *cc) } /* - * Clear pageblock skip if there were failures recently and compaction - * is about to be retried after being deferred. kswapd does not do - * this reset as it'll reset the cached information when going to sleep. - */ - if (compaction_restarting(zone, cc->order) && !current_is_kswapd()) - __reset_isolation_suitable(zone); - - /* * Setup to move all movable pages to the end of the zone. Used cached * information on where the scanners should start but check that it * is initialised by ensuring the values are within zone boundaries. @@ -980,6 +957,14 @@ static int compact_zone(struct zone *zone, struct compact_control *cc) zone->compact_cached_migrate_pfn = cc->migrate_pfn; } + /* + * Clear pageblock skip if there were failures recently and compaction + * is about to be retried after being deferred. kswapd does not do + * this reset as it'll reset the cached information when going to sleep. + */ + if (compaction_restarting(zone, cc->order) && !current_is_kswapd()) + __reset_isolation_suitable(zone); + migrate_prep_local(); while ((ret = compact_finished(zone, cc)) == COMPACT_CONTINUE) { @@ -1013,11 +998,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc) if (err) { putback_movable_pages(&cc->migratepages); cc->nr_migratepages = 0; - /* - * migrate_pages() may return -ENOMEM when scanners meet - * and we want compact_finished() to detect it - */ - if (err == -ENOMEM && cc->free_pfn > cc->migrate_pfn) { + if (err == -ENOMEM) { ret = COMPACT_PARTIAL; goto out; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index d21c9ef0943..eb00e81601a 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1733,24 +1733,21 @@ static int __split_huge_page_map(struct page *page, if (pmd) { pgtable = pgtable_trans_huge_withdraw(mm); pmd_populate(mm, &_pmd, pgtable); - if (pmd_write(*pmd)) - BUG_ON(page_mapcount(page) != 1); haddr = address; for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) { pte_t *pte, entry; BUG_ON(PageCompound(page+i)); - /* - * Note that pmd_numa is not transferred deliberately - * to avoid any possibility that pte_numa leaks to - * a PROT_NONE VMA by accident. 
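The free-scanner setup in the compaction hunks above leans on power-of-two alignment: the scan start is rounded down to a pageblock with "pfn & ~(pageblock_nr_pages - 1)", the per-block end is rounded up with ALIGN(), and then clamped to the zone end. The arithmetic in isolation, with an illustrative block size (BLOCK is an assumption, not pageblock_nr_pages):

#include <stdio.h>

#define BLOCK            512UL
#define ALIGN_UP(x, a)   (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
    unsigned long free_pfn    = 100000UL + 37;  /* arbitrary, not block aligned */
    unsigned long migrate_pfn = 4321UL;
    unsigned long zone_end    = 100100UL;

    unsigned long pfn     = ALIGN_DOWN(free_pfn, BLOCK);        /* scan start, block aligned */
    unsigned long low     = ALIGN_UP(migrate_pfn + 1, BLOCK);   /* stop above the migrate scanner */
    unsigned long end_pfn = ALIGN_UP(pfn + 1, BLOCK);           /* end of this pageblock ... */
    if (end_pfn > zone_end)
        end_pfn = zone_end;                                     /* ... clamped to the zone end */

    printf("pfn=%lu low=%lu end=%lu\n", pfn, low, end_pfn);     /* 99840 4608 100100 */
    return 0;
}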
- */ entry = mk_pte(page + i, vma->vm_page_prot); entry = maybe_mkwrite(pte_mkdirty(entry), vma); if (!pmd_write(*pmd)) entry = pte_wrprotect(entry); + else + BUG_ON(page_mapcount(page) != 1); if (!pmd_young(*pmd)) entry = pte_mkold(entry); + if (pmd_numa(*pmd)) + entry = pte_mknuma(entry); pte = pte_offset_map(&_pmd, haddr); BUG_ON(!pte_none(*pte)); set_pte_at(mm, haddr, pte, entry); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 7de4f67c81f..aa3b9a63394 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1100,7 +1100,6 @@ static void return_unused_surplus_pages(struct hstate *h, while (nr_pages--) { if (!free_pool_huge_page(h, &node_states[N_MEMORY], 1)) break; - cond_resched_lock(&hugetlb_lock); } } @@ -1488,7 +1487,6 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count, while (min_count < persistent_huge_pages(h)) { if (!free_pool_huge_page(h, nodes_allowed, 0)) break; - cond_resched_lock(&hugetlb_lock); } while (count < persistent_huge_pages(h)) { if (!adjust_pool_surplus(h, nodes_allowed, 1)) @@ -2328,31 +2326,6 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma, update_mmu_cache(vma, address, ptep); } -static int is_hugetlb_entry_migration(pte_t pte) -{ - swp_entry_t swp; - - if (huge_pte_none(pte) || pte_present(pte)) - return 0; - swp = pte_to_swp_entry(pte); - if (non_swap_entry(swp) && is_migration_entry(swp)) - return 1; - else - return 0; -} - -static int is_hugetlb_entry_hwpoisoned(pte_t pte) -{ - swp_entry_t swp; - - if (huge_pte_none(pte) || pte_present(pte)) - return 0; - swp = pte_to_swp_entry(pte); - if (non_swap_entry(swp) && is_hwpoison_entry(swp)) - return 1; - else - return 0; -} int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, struct vm_area_struct *vma) @@ -2380,24 +2353,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, spin_lock(&dst->page_table_lock); spin_lock_nested(&src->page_table_lock, SINGLE_DEPTH_NESTING); - entry = huge_ptep_get(src_pte); - if (huge_pte_none(entry)) { /* skip none entry */ - ; - } else if (unlikely(is_hugetlb_entry_migration(entry) || - is_hugetlb_entry_hwpoisoned(entry))) { - swp_entry_t swp_entry = pte_to_swp_entry(entry); - - if (is_write_migration_entry(swp_entry) && cow) { - /* - * COW mappings require pages in both - * parent and child to be set to read. 
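For reference, the hugetlb copy path this hunk reverts classifies a non-present source PTE by decoding it as a swap entry before deciding how to copy it; condensed from the helpers shown above:

static int is_hugetlb_entry_migration(pte_t pte)
{
        swp_entry_t swp;

        if (huge_pte_none(pte) || pte_present(pte))
                return 0;               /* empty slot or a real page */
        swp = pte_to_swp_entry(pte);
        /* migration entries are copied as-is; writable ones are downgraded
         * to read in both parent and child when COW applies */
        return non_swap_entry(swp) && is_migration_entry(swp);
}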
- */ - make_migration_entry_read(&swp_entry); - entry = swp_entry_to_pte(swp_entry); - set_huge_pte_at(src, addr, src_pte, entry); - } - set_huge_pte_at(dst, addr, dst_pte, entry); - } else { + if (!huge_pte_none(huge_ptep_get(src_pte))) { if (cow) huge_ptep_set_wrprotect(src, addr, src_pte); entry = huge_ptep_get(src_pte); @@ -2415,6 +2371,32 @@ nomem: return -ENOMEM; } +static int is_hugetlb_entry_migration(pte_t pte) +{ + swp_entry_t swp; + + if (huge_pte_none(pte) || pte_present(pte)) + return 0; + swp = pte_to_swp_entry(pte); + if (non_swap_entry(swp) && is_migration_entry(swp)) + return 1; + else + return 0; +} + +static int is_hugetlb_entry_hwpoisoned(pte_t pte) +{ + swp_entry_t swp; + + if (huge_pte_none(pte) || pte_present(pte)) + return 0; + swp = pte_to_swp_entry(pte); + if (non_swap_entry(swp) && is_hwpoison_entry(swp)) + return 1; + else + return 0; +} + void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma, unsigned long start, unsigned long end, struct page *ref_page) @@ -444,7 +444,7 @@ static void break_cow(struct rmap_item *rmap_item) static struct page *page_trans_compound_anon(struct page *page) { if (PageTransCompound(page)) { - struct page *head = compound_head(page); + struct page *head = compound_trans_head(page); /* * head may actually be splitted and freed from under * us but it's ok here. diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 7849660665d..96f5e2d8390 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -6395,23 +6395,9 @@ static void mem_cgroup_invalidate_reclaim_iterators(struct mem_cgroup *memcg) static void mem_cgroup_css_offline(struct cgroup *cont) { struct mem_cgroup *memcg = mem_cgroup_from_cont(cont); - struct cgroup *iter; mem_cgroup_invalidate_reclaim_iterators(memcg); - - /* - * This requires that offlining is serialized. Right now that is - * guaranteed because css_killed_work_fn() holds the cgroup_mutex. - */ - rcu_read_lock(); - cgroup_for_each_descendant_post(iter, cont) { - rcu_read_unlock(); - mem_cgroup_reparent_charges(mem_cgroup_from_cont(iter)); - rcu_read_lock(); - } - rcu_read_unlock(); mem_cgroup_reparent_charges(memcg); - mem_cgroup_destroy_all_caches(memcg); } diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 603f1fa1b7a..e386beefc99 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -208,9 +208,9 @@ static int kill_proc(struct task_struct *t, unsigned long addr, int trapno, #endif si.si_addr_lsb = compound_trans_order(compound_head(page)) + PAGE_SHIFT; - if ((flags & MF_ACTION_REQUIRED) && t->mm == current->mm) { + if ((flags & MF_ACTION_REQUIRED) && t == current) { si.si_code = BUS_MCEERR_AR; - ret = force_sig_info(SIGBUS, &si, current); + ret = force_sig_info(SIGBUS, &si, t); } else { /* * Don't use force here, it's convenient if the signal @@ -382,12 +382,10 @@ static void kill_procs(struct list_head *to_kill, int forcekill, int trapno, } } -static int task_early_kill(struct task_struct *tsk, int force_early) +static int task_early_kill(struct task_struct *tsk) { if (!tsk->mm) return 0; - if (force_early) - return 1; if (tsk->flags & PF_MCE_PROCESS) return !!(tsk->flags & PF_MCE_EARLY); return sysctl_memory_failure_early_kill; @@ -397,7 +395,7 @@ static int task_early_kill(struct task_struct *tsk, int force_early) * Collect processes when the error hit an anonymous page. 
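The collect_procs*() helpers below decide per task whether to signal it immediately; a condensed sketch of the variant this revert removes, where force_early is derived from MF_ACTION_REQUIRED:

static int task_early_kill(struct task_struct *tsk, int force_early)
{
        if (!tsk->mm)
                return 0;               /* no address space, nothing to kill */
        if (force_early)
                return 1;               /* action-required faults kill early */
        if (tsk->flags & PF_MCE_PROCESS)
                return !!(tsk->flags & PF_MCE_EARLY);
        return sysctl_memory_failure_early_kill;
}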
*/ static void collect_procs_anon(struct page *page, struct list_head *to_kill, - struct to_kill **tkc, int force_early) + struct to_kill **tkc) { struct vm_area_struct *vma; struct task_struct *tsk; @@ -413,7 +411,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill, for_each_process (tsk) { struct anon_vma_chain *vmac; - if (!task_early_kill(tsk, force_early)) + if (!task_early_kill(tsk)) continue; anon_vma_interval_tree_foreach(vmac, &av->rb_root, pgoff, pgoff) { @@ -432,7 +430,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill, * Collect processes when the error hit a file mapped page. */ static void collect_procs_file(struct page *page, struct list_head *to_kill, - struct to_kill **tkc, int force_early) + struct to_kill **tkc) { struct vm_area_struct *vma; struct task_struct *tsk; @@ -443,7 +441,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill, for_each_process(tsk) { pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT); - if (!task_early_kill(tsk, force_early)) + if (!task_early_kill(tsk)) continue; vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, @@ -469,8 +467,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill, * First preallocate one tokill structure outside the spin locks, * so that we can kill at least one process reasonably reliable. */ -static void collect_procs(struct page *page, struct list_head *tokill, - int force_early) +static void collect_procs(struct page *page, struct list_head *tokill) { struct to_kill *tk; @@ -481,9 +478,9 @@ static void collect_procs(struct page *page, struct list_head *tokill, if (!tk) return; if (PageAnon(page)) - collect_procs_anon(page, tokill, &tk, force_early); + collect_procs_anon(page, tokill, &tk); else - collect_procs_file(page, tokill, &tk, force_early); + collect_procs_file(page, tokill, &tk); kfree(tk); } @@ -968,7 +965,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, * there's nothing that can be done. */ if (kill) - collect_procs(ppage, &tokill, flags & MF_ACTION_REQUIRED); + collect_procs(ppage, &tokill); ret = try_to_unmap(ppage, ttu); if (ret != SWAP_SUCCESS) @@ -1086,16 +1083,15 @@ int memory_failure(unsigned long pfn, int trapno, int flags) return 0; } else if (PageHuge(hpage)) { /* - * Check "filter hit" and "race with other subpage." + * Check "just unpoisoned", "filter hit", and + * "race with other subpage." 
*/ lock_page(hpage); - if (PageHWPoison(hpage)) { - if ((hwpoison_filter(p) && TestClearPageHWPoison(p)) - || (p != hpage && TestSetPageHWPoison(hpage))) { - atomic_long_sub(nr_pages, &num_poisoned_pages); - unlock_page(hpage); - return 0; - } + if (!PageHWPoison(hpage) + || (hwpoison_filter(p) && TestClearPageHWPoison(p)) + || (p != hpage && TestSetPageHWPoison(hpage))) { + atomic_long_sub(nr_pages, &num_poisoned_pages); + return 0; } set_page_hwpoison_huge_page(hpage); res = dequeue_hwpoisoned_huge_page(hpage); @@ -1156,8 +1152,6 @@ int memory_failure(unsigned long pfn, int trapno, int flags) */ if (!PageHWPoison(p)) { printk(KERN_ERR "MCE %#lx: just unpoisoned\n", pfn); - atomic_long_sub(nr_pages, &num_poisoned_pages); - put_page(hpage); res = 0; goto out; } @@ -1550,7 +1544,7 @@ int soft_offline_page(struct page *page, int flags) { int ret; unsigned long pfn = page_to_pfn(page); - struct page *hpage = compound_head(page); + struct page *hpage = compound_trans_head(page); if (PageHWPoison(page)) { pr_info("soft offline: %#lx page already poisoned\n", pfn); diff --git a/mm/memory.c b/mm/memory.c index 88603f7cdfe..03ee9cfb99f 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2039,17 +2039,12 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, unsigned long address, unsigned int fault_flags) { struct vm_area_struct *vma; - vm_flags_t vm_flags; int ret; vma = find_extend_vma(mm, address); if (!vma || address < vma->vm_start) return -EFAULT; - vm_flags = (fault_flags & FAULT_FLAG_WRITE) ? VM_WRITE : VM_READ; - if (!(vm_flags & vma->vm_flags)) - return -EFAULT; - ret = handle_mm_fault(mm, vma, address, fault_flags); if (ret & VM_FAULT_ERROR) { if (ret & VM_FAULT_OOM) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 420e4b97ffa..763be19461b 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -608,18 +608,19 @@ static unsigned long change_prot_numa(struct vm_area_struct *vma, * If pagelist != NULL then isolate pages from the LRU and * put them on the pagelist. */ -static int +static struct vm_area_struct * check_range(struct mm_struct *mm, unsigned long start, unsigned long end, const nodemask_t *nodes, unsigned long flags, void *private) { - int err = 0; - struct vm_area_struct *vma, *prev; + int err; + struct vm_area_struct *first, *vma, *prev; - vma = find_vma(mm, start); - if (!vma) - return -EFAULT; + + first = find_vma(mm, start); + if (!first) + return ERR_PTR(-EFAULT); prev = NULL; - for (; vma && vma->vm_start < end; vma = vma->vm_next) { + for (vma = first; vma && vma->vm_start < end; vma = vma->vm_next) { unsigned long endvma = vma->vm_end; if (endvma > end) @@ -629,9 +630,9 @@ check_range(struct mm_struct *mm, unsigned long start, unsigned long end, if (!(flags & MPOL_MF_DISCONTIG_OK)) { if (!vma->vm_next && vma->vm_end < end) - return -EFAULT; + return ERR_PTR(-EFAULT); if (prev && prev->vm_end < vma->vm_start) - return -EFAULT; + return ERR_PTR(-EFAULT); } if (is_vm_hugetlb_page(vma)) @@ -648,13 +649,15 @@ check_range(struct mm_struct *mm, unsigned long start, unsigned long end, err = check_pgd_range(vma, start, endvma, nodes, flags, private); - if (err) + if (err) { + first = ERR_PTR(err); break; + } } next: prev = vma; } - return err; + return first; } /* @@ -1135,17 +1138,16 @@ out: /* * Allocate a new page for page migration based on vma policy. - * Start by assuming the page is mapped by the same vma as contains @start. + * Start assuming that page is mapped by vma pointed to by @private. * Search forward from there, if not. 
N.B., this assumes that the * list of pages handed to migrate_pages()--which is how we get here-- * is in virtual address order. */ -static struct page *new_page(struct page *page, unsigned long start, int **x) +static struct page *new_vma_page(struct page *page, unsigned long private, int **x) { - struct vm_area_struct *vma; + struct vm_area_struct *vma = (struct vm_area_struct *)private; unsigned long uninitialized_var(address); - vma = find_vma(current->mm, start); while (vma) { address = page_address_in_vma(page, vma); if (address != -EFAULT) @@ -1171,7 +1173,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, return -ENOSYS; } -static struct page *new_page(struct page *page, unsigned long start, int **x) +static struct page *new_vma_page(struct page *page, unsigned long private, int **x) { return NULL; } @@ -1181,6 +1183,7 @@ static long do_mbind(unsigned long start, unsigned long len, unsigned short mode, unsigned short mode_flags, nodemask_t *nmask, unsigned long flags) { + struct vm_area_struct *vma; struct mm_struct *mm = current->mm; struct mempolicy *new; unsigned long end; @@ -1246,9 +1249,11 @@ static long do_mbind(unsigned long start, unsigned long len, if (err) goto mpol_out; - err = check_range(mm, start, end, nmask, + vma = check_range(mm, start, end, nmask, flags | MPOL_MF_INVERT, &pagelist); - if (!err) + + err = PTR_ERR(vma); /* maybe ... */ + if (!IS_ERR(vma)) err = mbind_range(mm, start, end, new); if (!err) { @@ -1256,8 +1261,9 @@ static long do_mbind(unsigned long start, unsigned long len, if (!list_empty(&pagelist)) { WARN_ON_ONCE(flags & MPOL_MF_LAZY); - nr_failed = migrate_pages(&pagelist, new_page, - start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND); + nr_failed = migrate_pages(&pagelist, new_vma_page, + (unsigned long)vma, + MIGRATE_SYNC, MR_MEMPOLICY_MBIND); if (nr_failed) putback_lru_pages(&pagelist); } @@ -2086,6 +2092,7 @@ struct mempolicy *__mpol_dup(struct mempolicy *old) } else *new = *old; + rcu_read_lock(); if (current_cpuset_is_being_rebound()) { nodemask_t mems = cpuset_mems_allowed(current); if (new->flags & MPOL_F_REBINDING) @@ -2093,6 +2100,7 @@ struct mempolicy *__mpol_dup(struct mempolicy *old) else mpol_rebind_policy(new, &mems, MPOL_REBIND_ONCE); } + rcu_read_unlock(); atomic_set(&new->refcnt, 1); return new; } diff --git a/mm/mlock.c b/mm/mlock.c index 3dcea72277b..33861c78007 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -76,7 +76,6 @@ void clear_page_mlock(struct page *page) */ void mlock_vma_page(struct page *page) { - /* Serialize with page migration */ BUG_ON(!PageLocked(page)); if (!TestSetPageMlocked(page)) { @@ -107,7 +106,6 @@ unsigned int munlock_vma_page(struct page *page) { unsigned int page_mask = 0; - /* For try_to_munlock() and to serialize with page migration */ BUG_ON(!PageLocked(page)); if (TestClearPageMlocked(page)) { diff --git a/mm/mremap.c b/mm/mremap.c index 2201d060c31..463a25705ac 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -175,17 +175,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma, break; if (pmd_trans_huge(*old_pmd)) { int err = 0; - if (extent == HPAGE_PMD_SIZE) { - VM_BUG_ON(vma->vm_file || !vma->anon_vma); - /* See comment in move_ptes() */ - if (need_rmap_locks) - anon_vma_lock_write(vma->anon_vma); + if (extent == HPAGE_PMD_SIZE) err = move_huge_pmd(vma, new_vma, old_addr, new_addr, old_end, old_pmd, new_pmd); - if (need_rmap_locks) - anon_vma_unlock_write(vma->anon_vma); - } if (err > 0) { need_flush = true; continue; diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 
1a582e3aee3..f22859784de 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -47,21 +47,19 @@ static DEFINE_SPINLOCK(zone_scan_lock); #ifdef CONFIG_NUMA /** * has_intersects_mems_allowed() - check task eligiblity for kill - * @start: task struct of which task to consider + * @tsk: task struct of which task to consider * @mask: nodemask passed to page allocator for mempolicy ooms * * Task eligibility is determined by whether or not a candidate task, @tsk, * shares the same mempolicy nodes as current if it is bound by such a policy * and whether or not it has the same set of allowed cpuset nodes. */ -static bool has_intersects_mems_allowed(struct task_struct *start, +static bool has_intersects_mems_allowed(struct task_struct *tsk, const nodemask_t *mask) { - struct task_struct *tsk; - bool ret = false; + struct task_struct *start = tsk; - rcu_read_lock(); - for_each_thread(start, tsk) { + do { if (mask) { /* * If this is a mempolicy constrained oom, tsk's @@ -69,20 +67,19 @@ static bool has_intersects_mems_allowed(struct task_struct *start, * mempolicy intersects current, otherwise it may be * needlessly killed. */ - ret = mempolicy_nodemask_intersects(tsk, mask); + if (mempolicy_nodemask_intersects(tsk, mask)) + return true; } else { /* * This is not a mempolicy constrained oom, so only * check the mems of tsk's cpuset. */ - ret = cpuset_mems_allowed_intersects(current, tsk); + if (cpuset_mems_allowed_intersects(current, tsk)) + return true; } - if (ret) - break; - } - rcu_read_unlock(); + } while_each_thread(start, tsk); - return ret; + return false; } #else static bool has_intersects_mems_allowed(struct task_struct *tsk, @@ -100,21 +97,16 @@ static bool has_intersects_mems_allowed(struct task_struct *tsk, */ struct task_struct *find_lock_task_mm(struct task_struct *p) { - struct task_struct *t; - - rcu_read_lock(); + struct task_struct *t = p; - for_each_thread(p, t) { + do { task_lock(t); if (likely(t->mm)) - goto found; + return t; task_unlock(t); - } - t = NULL; -found: - rcu_read_unlock(); + } while_each_thread(p, t); - return t; + return NULL; } /* return true if the task is not adequate as candidate victim task. */ @@ -309,7 +301,7 @@ static struct task_struct *select_bad_process(unsigned int *ppoints, unsigned long chosen_points = 0; rcu_read_lock(); - for_each_process_thread(g, p) { + do_each_thread(g, p) { unsigned int points; switch (oom_scan_process_thread(p, totalpages, nodemask, @@ -331,7 +323,7 @@ static struct task_struct *select_bad_process(unsigned int *ppoints, chosen = p; chosen_points = points; } - } + } while_each_thread(g, p); if (chosen) get_task_struct(chosen); rcu_read_unlock(); @@ -402,23 +394,6 @@ static void dump_header(struct task_struct *p, gfp_t gfp_mask, int order, dump_tasks(memcg, nodemask); } -/* - * Number of OOM killer invocations (including memcg OOM killer). - * Primarily used by PM freezer to check for potential races with - * OOM killed frozen task. 
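The comment above belongs to a small counter that the revert drops again; condensed, it is just an atomic event count that the PM freezer can sample before and after freezing user space to notice an OOM kill racing with the freeze:

static atomic_t oom_kills = ATOMIC_INIT(0);

int oom_kills_count(void)
{
        return atomic_read(&oom_kills);
}

void note_oom_kill(void)
{
        atomic_inc(&oom_kills);
}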
- */ -static atomic_t oom_kills = ATOMIC_INIT(0); - -int oom_kills_count(void) -{ - return atomic_read(&oom_kills); -} - -void note_oom_kill(void) -{ - atomic_inc(&oom_kills); -} - #define K(x) ((x) << (PAGE_SHIFT-10)) /* * Must be called while holding a reference to p, which will be released upon @@ -431,7 +406,7 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order, { struct task_struct *victim = p; struct task_struct *child; - struct task_struct *t; + struct task_struct *t = p; struct mm_struct *mm; unsigned int victim_points = 0; static DEFINE_RATELIMIT_STATE(oom_rs, DEFAULT_RATELIMIT_INTERVAL, @@ -462,7 +437,7 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order, * still freeing memory. */ read_lock(&tasklist_lock); - for_each_thread(p, t) { + do { list_for_each_entry(child, &t->children, sibling) { unsigned int child_points; @@ -480,11 +455,13 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order, get_task_struct(victim); } } - } + } while_each_thread(p, t); read_unlock(&tasklist_lock); + rcu_read_lock(); p = find_lock_task_mm(victim); if (!p) { + rcu_read_unlock(); put_task_struct(victim); return; } else if (victim != p) { @@ -510,7 +487,6 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order, * That thread will now get access to memory reserves since it has a * pending fatal signal. */ - rcu_read_lock(); for_each_process(p) if (p->mm == mm && !same_thread_group(p, victim) && !(p->flags & PF_KTHREAD)) { diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 61cb192965d..15f801b726d 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -374,11 +374,9 @@ void prep_compound_page(struct page *page, unsigned long order) __SetPageHead(page); for (i = 1; i < nr_pages; i++) { struct page *p = page + i; + __SetPageTail(p); set_page_count(p, 0); p->first_page = page; - /* Make sure p->first_page is always valid for PageTail() */ - smp_wmb(); - __SetPageTail(p); } } @@ -2159,14 +2157,6 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order, } /* - * PM-freezer should be notified that there might be an OOM killer on - * its way to kill and wake somebody up. This is too early and we might - * end up not killing anything but false positives are acceptable. - * See freeze_processes. - */ - note_oom_kill(); - - /* * Go through the zonelist yet one more time, keep very high watermark * here, this is only to catch a parallel oom killing, we must fail if * we're still under heavy pressure. @@ -2386,7 +2376,7 @@ static inline int gfp_to_alloc_flags(gfp_t gfp_mask) { int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET; - const bool atomic = !(gfp_mask & (__GFP_WAIT | __GFP_NO_KSWAPD)); + const gfp_t wait = gfp_mask & __GFP_WAIT; /* __GFP_HIGH is assumed to be the same as ALLOC_HIGH to save a branch. */ BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_HIGH); @@ -2395,20 +2385,20 @@ gfp_to_alloc_flags(gfp_t gfp_mask) * The caller may dip into page reserves a bit more if the caller * cannot run direct reclaim, or if the caller has realtime scheduling * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will - * set both ALLOC_HARDER (atomic == true) and ALLOC_HIGH (__GFP_HIGH). + * set both ALLOC_HARDER (!wait) and ALLOC_HIGH (__GFP_HIGH). */ alloc_flags |= (__force int) (gfp_mask & __GFP_HIGH); - if (atomic) { + if (!wait) { /* - * Not worth trying to allocate harder for __GFP_NOMEMALLOC even - * if it can't schedule. + * Not worth trying to allocate harder for + * __GFP_NOMEMALLOC even if it can't schedule. 
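The gfp_to_alloc_flags() hunk above is the substantive part of this file's change: pre-revert, the "dig into reserves" test looked at __GFP_NO_KSWAPD as well as __GFP_WAIT. A condensed sketch of that flag derivation; the function name is illustrative only:

static int sketch_gfp_to_alloc_flags(gfp_t gfp_mask)
{
        int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
        const bool atomic = !(gfp_mask & (__GFP_WAIT | __GFP_NO_KSWAPD));

        alloc_flags |= (__force int) (gfp_mask & __GFP_HIGH);
        if (atomic) {
                if (!(gfp_mask & __GFP_NOMEMALLOC))
                        alloc_flags |= ALLOC_HARDER;
                /* GFP_ATOMIC must not fail just because of cpuset limits */
                alloc_flags &= ~ALLOC_CPUSET;
        }
        return alloc_flags;
}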
*/ - if (!(gfp_mask & __GFP_NOMEMALLOC)) + if (!(gfp_mask & __GFP_NOMEMALLOC)) alloc_flags |= ALLOC_HARDER; /* - * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the - * comment for __cpuset_node_allowed_softwall(). + * Ignore cpuset if GFP_ATOMIC (!wait) rather than fail alloc. + * See also cpuset_zone_allowed() comment in kernel/cpuset.c. */ alloc_flags &= ~ALLOC_CPUSET; } else if (unlikely(rt_task(current)) && !in_interrupt()) diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c index e007236f345..6d757e3a872 100644 --- a/mm/page_cgroup.c +++ b/mm/page_cgroup.c @@ -170,7 +170,6 @@ static void free_page_cgroup(void *addr) sizeof(struct page_cgroup) * PAGES_PER_SECTION; BUG_ON(PageReserved(page)); - kmemleak_free(addr); free_pages_exact(addr, table_size); } } diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c index 51108165f82..3707c71ae4c 100644 --- a/mm/percpu-vm.c +++ b/mm/percpu-vm.c @@ -108,7 +108,7 @@ static int pcpu_alloc_pages(struct pcpu_chunk *chunk, int page_start, int page_end) { const gfp_t gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_COLD; - unsigned int cpu, tcpu; + unsigned int cpu; int i; for_each_possible_cpu(cpu) { @@ -116,23 +116,14 @@ static int pcpu_alloc_pages(struct pcpu_chunk *chunk, struct page **pagep = &pages[pcpu_page_idx(cpu, i)]; *pagep = alloc_pages_node(cpu_to_node(cpu), gfp, 0); - if (!*pagep) - goto err; + if (!*pagep) { + pcpu_free_pages(chunk, pages, populated, + page_start, page_end); + return -ENOMEM; + } } } return 0; - -err: - while (--i >= page_start) - __free_page(pages[pcpu_page_idx(cpu, i)]); - - for_each_possible_cpu(tcpu) { - if (tcpu == cpu) - break; - for (i = page_start; i < page_end; i++) - __free_page(pages[pcpu_page_idx(tcpu, i)]); - } - return -ENOMEM; } /** @@ -272,7 +263,6 @@ err: __pcpu_unmap_pages(pcpu_chunk_addr(chunk, tcpu, page_start), page_end - page_start); } - pcpu_post_unmap_tlb_flush(chunk, page_start, page_end); return err; } diff --git a/mm/percpu.c b/mm/percpu.c index 25e2ea52db8..8c8e08f3a69 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -612,7 +612,7 @@ static struct pcpu_chunk *pcpu_alloc_chunk(void) chunk->map = pcpu_mem_zalloc(PCPU_DFL_MAP_ALLOC * sizeof(chunk->map[0])); if (!chunk->map) { - pcpu_mem_free(chunk, pcpu_chunk_struct_size); + kfree(chunk); return NULL; } diff --git a/mm/rmap.c b/mm/rmap.c index 705bfc8e6fc..3f6077461ae 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -103,7 +103,6 @@ static inline void anon_vma_free(struct anon_vma *anon_vma) * LOCK should suffice since the actual taking of the lock must * happen _before_ what follows. */ - might_sleep(); if (rwsem_is_locked(&anon_vma->root->rwsem)) { anon_vma_lock_write(anon_vma); anon_vma_unlock_write(anon_vma); @@ -427,9 +426,8 @@ struct anon_vma *page_get_anon_vma(struct page *page) * above cannot corrupt). 
*/ if (!page_mapped(page)) { - rcu_read_unlock(); put_anon_vma(anon_vma); - return NULL; + anon_vma = NULL; } out: rcu_read_unlock(); @@ -479,9 +477,9 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page) } if (!page_mapped(page)) { - rcu_read_unlock(); put_anon_vma(anon_vma); - return NULL; + anon_vma = NULL; + goto out; } /* we pinned the anon_vma, its safe to sleep */ @@ -1392,19 +1390,9 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount, BUG_ON(!page || PageAnon(page)); if (locked_vma) { - if (page == check_page) { - /* we know we have check_page locked */ - mlock_vma_page(page); + mlock_vma_page(page); /* no-op if already mlocked */ + if (page == check_page) ret = SWAP_MLOCK; - } else if (trylock_page(page)) { - /* - * If we can lock the page, perform mlock. - * Otherwise leave the page alone, it will be - * eventually encountered again later. - */ - mlock_vma_page(page); - unlock_page(page); - } continue; /* don't unmap */ } @@ -1677,9 +1665,10 @@ void __put_anon_vma(struct anon_vma *anon_vma) { struct anon_vma *root = anon_vma->root; - anon_vma_free(anon_vma); if (root != anon_vma && atomic_dec_and_test(&root->refcount)) anon_vma_free(root); + + anon_vma_free(anon_vma); } #ifdef CONFIG_MIGRATION diff --git a/mm/shmem.c b/mm/shmem.c index 5373c7fffd9..6019778b951 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -80,12 +80,11 @@ static struct vfsmount *shm_mnt; #define SHORT_SYMLINK_LEN 128 /* - * shmem_fallocate communicates with shmem_fault or shmem_writepage via - * inode->i_private (with i_mutex making sure that it has only one user at - * a time): we would prefer not to enlarge the shmem inode just for that. + * shmem_fallocate and shmem_writepage communicate via inode->i_private + * (with i_mutex making sure that it has only one user at a time): + * we would prefer not to enlarge the shmem inode just for that. 
*/ struct shmem_falloc { - wait_queue_head_t *waitq; /* faults into hole wait for punch to end */ pgoff_t start; /* start of range currently being fallocated */ pgoff_t next; /* the next page offset to be fallocated */ pgoff_t nr_falloced; /* how many new pages have been fallocated */ @@ -534,19 +533,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, return; index = start; - while (index < end) { + for ( ; ; ) { cond_resched(); pvec.nr = shmem_find_get_pages_and_swap(mapping, index, min(end - index, (pgoff_t)PAGEVEC_SIZE), pvec.pages, indices); if (!pvec.nr) { - /* If all gone or hole-punch or unfalloc, we're done */ - if (index == start || end != -1) + if (index == start || unfalloc) break; - /* But if truncating, restart to make sure all gone */ index = start; continue; } + if ((index == start || unfalloc) && indices[0] >= end) { + shmem_deswap_pagevec(&pvec); + pagevec_release(&pvec); + break; + } mem_cgroup_uncharge_start(); for (i = 0; i < pagevec_count(&pvec); i++) { struct page *page = pvec.pages[i]; @@ -558,12 +560,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, if (radix_tree_exceptional_entry(page)) { if (unfalloc) continue; - if (shmem_free_swap(mapping, index, page)) { - /* Swap was replaced by page: retry */ - index--; - break; - } - nr_swaps_freed++; + nr_swaps_freed += !shmem_free_swap(mapping, + index, page); continue; } @@ -572,11 +570,6 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, if (page->mapping == mapping) { VM_BUG_ON(PageWriteback(page)); truncate_inode_page(mapping, page); - } else { - /* Page was replaced by swap: retry */ - unlock_page(page); - index--; - break; } } unlock_page(page); @@ -833,7 +826,6 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc) spin_lock(&inode->i_lock); shmem_falloc = inode->i_private; if (shmem_falloc && - !shmem_falloc->waitq && index >= shmem_falloc->start && index < shmem_falloc->next) shmem_falloc->nr_unswapped++; @@ -1308,64 +1300,6 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) int error; int ret = VM_FAULT_LOCKED; - /* - * Trinity finds that probing a hole which tmpfs is punching can - * prevent the hole-punch from ever completing: which in turn - * locks writers out with its hold on i_mutex. So refrain from - * faulting pages into the hole while it's being punched. Although - * shmem_undo_range() does remove the additions, it may be unable to - * keep up, as each new page needs its own unmap_mapping_range() call, - * and the i_mmap tree grows ever slower to scan if new vmas are added. - * - * It does not matter if we sometimes reach this check just before the - * hole-punch begins, so that one fault then races with the punch: - * we just need to make racing faults a rare case. - * - * The implementation below would be much simpler if we just used a - * standard mutex or completion: but we cannot take i_mutex in fault, - * and bloating every shmem inode for this unlikely case would be sad. 
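A condensed sketch of the handshake the comment above describes and this revert removes: shmem_fallocate() publishes the range being punched plus an on-stack waitqueue through inode->i_private, and any fault landing in that range sleeps until the punch finishes and wake_up_all() runs. Trimmed to the hole-punch fields:

struct shmem_falloc_sketch {
        wait_queue_head_t *waitq;   /* non-NULL only while hole-punching */
        pgoff_t start;              /* first page offset being punched */
        pgoff_t next;               /* end of the punched range (exclusive) */
};
/* fault side: if vmf->pgoff falls in [start, next) and waitq is set,
 * return VM_FAULT_RETRY or VM_FAULT_NOPAGE and wait on *waitq instead
 * of faulting pages back into the hole. */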
- */ - if (unlikely(inode->i_private)) { - struct shmem_falloc *shmem_falloc; - - spin_lock(&inode->i_lock); - shmem_falloc = inode->i_private; - if (shmem_falloc && - shmem_falloc->waitq && - vmf->pgoff >= shmem_falloc->start && - vmf->pgoff < shmem_falloc->next) { - wait_queue_head_t *shmem_falloc_waitq; - DEFINE_WAIT(shmem_fault_wait); - - ret = VM_FAULT_NOPAGE; - if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) && - !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { - /* It's polite to up mmap_sem if we can */ - up_read(&vma->vm_mm->mmap_sem); - ret = VM_FAULT_RETRY; - } - - shmem_falloc_waitq = shmem_falloc->waitq; - prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait, - TASK_UNINTERRUPTIBLE); - spin_unlock(&inode->i_lock); - schedule(); - - /* - * shmem_falloc_waitq points into the shmem_fallocate() - * stack of the hole-punching task: shmem_falloc_waitq - * is usually invalid by the time we reach here, but - * finish_wait() does not dereference it in that case; - * though i_lock needed lest racing with wake_up_all(). - */ - spin_lock(&inode->i_lock); - finish_wait(shmem_falloc_waitq, &shmem_fault_wait); - spin_unlock(&inode->i_lock); - return ret; - } - spin_unlock(&inode->i_lock); - } - error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret); if (error) return ((error == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS); @@ -1887,25 +1821,12 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, struct address_space *mapping = file->f_mapping; loff_t unmap_start = round_up(offset, PAGE_SIZE); loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1; - DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq); - - shmem_falloc.waitq = &shmem_falloc_waitq; - shmem_falloc.start = unmap_start >> PAGE_SHIFT; - shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT; - spin_lock(&inode->i_lock); - inode->i_private = &shmem_falloc; - spin_unlock(&inode->i_lock); if ((u64)unmap_end > (u64)unmap_start) unmap_mapping_range(mapping, unmap_start, 1 + unmap_end - unmap_start, 0); shmem_truncate_range(inode, offset, offset + len - 1); /* No need to unmap again: hole-punching leaves COWed pages */ - - spin_lock(&inode->i_lock); - inode->i_private = NULL; - wake_up_all(&shmem_falloc_waitq); - spin_unlock(&inode->i_lock); error = 0; goto out; } @@ -1923,7 +1844,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, goto out; } - shmem_falloc.waitq = NULL; shmem_falloc.start = start; shmem_falloc.next = start; shmem_falloc.nr_falloced = 0; @@ -2128,10 +2048,8 @@ static int shmem_rename(struct inode *old_dir, struct dentry *old_dentry, struct if (new_dentry->d_inode) { (void) shmem_unlink(new_dir, new_dentry); - if (they_are_dirs) { - drop_nlink(new_dentry->d_inode); + if (they_are_dirs) drop_nlink(old_dir); - } } else if (they_are_dirs) { drop_nlink(old_dir); inc_nlink(new_dir); diff --git a/mm/slab_common.c b/mm/slab_common.c index 7d21d3fddbf..2d414508e9e 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -55,7 +55,6 @@ static int kmem_cache_sanity_check(struct mem_cgroup *memcg, const char *name, continue; } -#if !defined(CONFIG_SLUB) /* * For simplicity, we won't check this in the list of memcg * caches. 
We have control over memcg naming, and if there @@ -69,7 +68,6 @@ static int kmem_cache_sanity_check(struct mem_cgroup *memcg, const char *name, s = NULL; return -EINVAL; } -#endif } WARN_ON(strchr(name, ' ')); /* It confuses parsers */ diff --git a/mm/swap.c b/mm/swap.c index 4e35f3ff042..ea58dbde788 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -81,7 +81,7 @@ static void put_compound_page(struct page *page) { if (unlikely(PageTail(page))) { /* __split_huge_page_refcount can run under us */ - struct page *page_head = compound_head(page); + struct page *page_head = compound_trans_head(page); if (likely(page != page_head && get_page_unless_zero(page_head))) { @@ -219,7 +219,7 @@ bool __get_page_tail(struct page *page) */ unsigned long flags; bool got = false; - struct page *page_head = compound_head(page); + struct page *page_head = compound_trans_head(page); if (likely(page != page_head && get_page_unless_zero(page_head))) { /* Ref to put_compound_page() comment. */ diff --git a/mm/truncate.c b/mm/truncate.c index 2d6151fc8f0..c75b736e54b 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -20,7 +20,6 @@ #include <linux/buffer_head.h> /* grr. try_to_release_page, do_invalidatepage */ #include <linux/cleancache.h> -#include <linux/rmap.h> #include "internal.h" @@ -568,67 +567,16 @@ EXPORT_SYMBOL(truncate_pagecache); */ void truncate_setsize(struct inode *inode, loff_t newsize) { - loff_t oldsize = inode->i_size; + loff_t oldsize; + oldsize = inode->i_size; i_size_write(inode, newsize); - if (newsize > oldsize) - pagecache_isize_extended(inode, oldsize, newsize); + truncate_pagecache(inode, oldsize, newsize); } EXPORT_SYMBOL(truncate_setsize); /** - * pagecache_isize_extended - update pagecache after extension of i_size - * @inode: inode for which i_size was extended - * @from: original inode size - * @to: new inode size - * - * Handle extension of inode size either caused by extending truncate or by - * write starting after current i_size. We mark the page straddling current - * i_size RO so that page_mkwrite() is called on the nearest write access to - * the page. This way filesystem can be sure that page_mkwrite() is called on - * the page before user writes to the page via mmap after the i_size has been - * changed. - * - * The function must be called after i_size is updated so that page fault - * coming after we unlock the page will already see the new i_size. - * The function must be called while we still hold i_mutex - this not only - * makes sure i_size is stable but also that userspace cannot observe new - * i_size value before we are prepared to store mmap writes at new inode size. - */ -void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to) -{ - int bsize = 1 << inode->i_blkbits; - loff_t rounded_from; - struct page *page; - pgoff_t index; - - WARN_ON(to > inode->i_size); - - if (from >= to || bsize == PAGE_CACHE_SIZE) - return; - /* Page straddling @from will not have any hole block created? */ - rounded_from = round_up(from, bsize); - if (to <= rounded_from || !(rounded_from & (PAGE_CACHE_SIZE - 1))) - return; - - index = from >> PAGE_CACHE_SHIFT; - page = find_lock_page(inode->i_mapping, index); - /* Page not cached? Nothing to do */ - if (!page) - return; - /* - * See clear_page_dirty_for_io() for details why set_page_dirty() - * is needed. 
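pagecache_isize_extended(), removed just above, only has work to do when the filesystem block size is smaller than the page size and the page straddling the old EOF could gain newly allocated blocks; a sketch of that gating logic using the same rounding checks:

static bool isize_extend_needs_fixup(loff_t from, loff_t to,
                                     unsigned int bsize,
                                     unsigned int page_size)
{
        loff_t rounded_from;

        if (from >= to || bsize == page_size)
                return false;
        /* page straddling 'from' cannot gain hole blocks? then skip */
        rounded_from = round_up(from, bsize);
        if (to <= rounded_from || !(rounded_from & (page_size - 1)))
                return false;
        return true;    /* write-protect the straddling page for page_mkwrite */
}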
- */ - if (page_mkclean(page)) - set_page_dirty(page); - unlock_page(page); - page_cache_release(page); -} -EXPORT_SYMBOL(pagecache_isize_extended); - -/** * truncate_pagecache_range - unmap and remove pagecache that is hole-punched * @inode: inode * @lstart: offset of beginning of hole diff --git a/mm/util.c b/mm/util.c index 0b1725254ff..ab1424dbe2e 100644 --- a/mm/util.c +++ b/mm/util.c @@ -272,14 +272,17 @@ pid_t vm_is_stack(struct task_struct *task, if (in_group) { struct task_struct *t; - rcu_read_lock(); - for_each_thread(task, t) { + if (!pid_alive(task)) + goto done; + + t = task; + do { if (vm_is_stack_for_task(t, vma)) { ret = t->pid; goto done; } - } + } while_each_thread(task, t); done: rcu_read_unlock(); } diff --git a/mm/vmscan.c b/mm/vmscan.c index 18bf2f96eff..bcb26ea8b03 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2330,17 +2330,10 @@ static bool pfmemalloc_watermark_ok(pg_data_t *pgdat) for (i = 0; i <= ZONE_NORMAL; i++) { zone = &pgdat->node_zones[i]; - if (!populated_zone(zone)) - continue; - pfmemalloc_reserve += min_wmark_pages(zone); free_pages += zone_page_state(zone, NR_FREE_PAGES); } - /* If there are no reserves (unexpected config) then do not throttle */ - if (!pfmemalloc_reserve) - return true; - wmark_ok = free_pages > pfmemalloc_reserve / 2; /* kswapd must be awake if processes are being throttled */ @@ -2365,9 +2358,9 @@ static bool pfmemalloc_watermark_ok(pg_data_t *pgdat) static bool throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist, nodemask_t *nodemask) { - struct zoneref *z; struct zone *zone; - pg_data_t *pgdat = NULL; + int high_zoneidx = gfp_zone(gfp_mask); + pg_data_t *pgdat; /* * Kernel threads should not be throttled as they may be indirectly @@ -2386,34 +2379,10 @@ static bool throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist, if (fatal_signal_pending(current)) goto out; - /* - * Check if the pfmemalloc reserves are ok by finding the first node - * with a usable ZONE_NORMAL or lower zone. The expectation is that - * GFP_KERNEL will be required for allocating network buffers when - * swapping over the network so ZONE_HIGHMEM is unusable. - * - * Throttling is based on the first usable node and throttled processes - * wait on a queue until kswapd makes progress and wakes them. There - * is an affinity then between processes waking up and where reclaim - * progress has been made assuming the process wakes on the same node. - * More importantly, processes running on remote nodes will not compete - * for remote pfmemalloc reserves and processes on different nodes - * should make reasonable progress. 
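The throttling decision this vmscan.c hunk simplifies ultimately rests on pfmemalloc_watermark_ok(): a node is considered healthy when the free pages of its lower zones exceed half of their combined min watermarks. A minimal sketch of that final comparison (the reserve-less early exit is part of the code being removed):

static bool pfmemalloc_ok_sketch(unsigned long free_pages,
                                 unsigned long pfmemalloc_reserve)
{
        if (!pfmemalloc_reserve)
                return true;    /* no usable reserves: never throttle */
        return free_pages > pfmemalloc_reserve / 2;
}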
- */ - for_each_zone_zonelist_nodemask(zone, z, zonelist, - gfp_mask, nodemask) { - if (zone_idx(zone) > ZONE_NORMAL) - continue; - - /* Throttle based on the first usable node */ - pgdat = zone->zone_pgdat; - if (pfmemalloc_watermark_ok(pgdat)) - goto out; - break; - } - - /* If no zone was usable by the allocation flags then do not throttle */ - if (!pgdat) + /* Check if the pfmemalloc reserves are ok */ + first_zones_zonelist(zonelist, high_zoneidx, NULL, &zone); + pgdat = zone->zone_pgdat; + if (pfmemalloc_watermark_ok(pgdat)) goto out; /* Account for the throttling */ @@ -3149,10 +3118,7 @@ static int kswapd(void *p) } } - tsk->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD); current->reclaim_state = NULL; - lockdep_clear_current_reclaim_state(); - return 0; } diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c index 86abb2e59ae..9424f3718ea 100644 --- a/net/8021q/vlan.c +++ b/net/8021q/vlan.c @@ -305,11 +305,9 @@ static void vlan_sync_address(struct net_device *dev, static void vlan_transfer_features(struct net_device *dev, struct net_device *vlandev) { - struct vlan_dev_priv *vlan = vlan_dev_priv(vlandev); - vlandev->gso_max_size = dev->gso_max_size; - if (vlan_hw_offload_capable(dev->features, vlan->vlan_proto)) + if (dev->features & NETIF_F_HW_VLAN_CTAG_TX) vlandev->hard_header_len = dev->hard_header_len; else vlandev->hard_header_len = dev->hard_header_len + VLAN_HLEN; diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c index 42ef36a85e6..4a78c4de9f2 100644 --- a/net/8021q/vlan_core.c +++ b/net/8021q/vlan_core.c @@ -103,11 +103,8 @@ EXPORT_SYMBOL(vlan_dev_vlan_id); static struct sk_buff *vlan_reorder_header(struct sk_buff *skb) { - if (skb_cow(skb, skb_headroom(skb)) < 0) { - kfree_skb(skb); + if (skb_cow(skb, skb_headroom(skb)) < 0) return NULL; - } - memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 2 * ETH_ALEN); skb->mac_header += VLAN_HLEN; return skb; diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index 698e922f41e..4af64afc702 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -512,48 +512,10 @@ static void vlan_dev_change_rx_flags(struct net_device *dev, int change) } } -static int vlan_calculate_locking_subclass(struct net_device *real_dev) -{ - int subclass = 0; - - while (is_vlan_dev(real_dev)) { - subclass++; - real_dev = vlan_dev_priv(real_dev)->real_dev; - } - - return subclass; -} - -static void vlan_dev_mc_sync(struct net_device *to, struct net_device *from) -{ - int err = 0, subclass; - - subclass = vlan_calculate_locking_subclass(to); - - spin_lock_nested(&to->addr_list_lock, subclass); - err = __hw_addr_sync(&to->mc, &from->mc, to->addr_len); - if (!err) - __dev_set_rx_mode(to); - spin_unlock(&to->addr_list_lock); -} - -static void vlan_dev_uc_sync(struct net_device *to, struct net_device *from) -{ - int err = 0, subclass; - - subclass = vlan_calculate_locking_subclass(to); - - spin_lock_nested(&to->addr_list_lock, subclass); - err = __hw_addr_sync(&to->uc, &from->uc, to->addr_len); - if (!err) - __dev_set_rx_mode(to); - spin_unlock(&to->addr_list_lock); -} - static void vlan_dev_set_rx_mode(struct net_device *vlan_dev) { - vlan_dev_mc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev); - vlan_dev_uc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev); + dev_mc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev); + dev_uc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev); } /* @@ -595,9 +557,6 @@ static int vlan_passthru_hard_header(struct sk_buff *skb, struct net_device *dev struct vlan_dev_priv *vlan = vlan_dev_priv(dev); 
struct net_device *real_dev = vlan->real_dev; - if (saddr == NULL) - saddr = dev->dev_addr; - return dev_hard_header(skb, real_dev, type, daddr, saddr, len); } @@ -649,8 +608,7 @@ static int vlan_dev_init(struct net_device *dev) #endif dev->needed_headroom = real_dev->needed_headroom; - if (vlan_hw_offload_capable(real_dev->features, - vlan_dev_priv(dev)->vlan_proto)) { + if (real_dev->features & NETIF_F_HW_VLAN_CTAG_TX) { dev->header_ops = &vlan_passthru_header_ops; dev->hard_header_len = real_dev->hard_header_len; } else { @@ -662,7 +620,9 @@ static int vlan_dev_init(struct net_device *dev) SET_NETDEV_DEVTYPE(dev, &vlan_type); - subclass = vlan_calculate_locking_subclass(dev); + if (is_vlan_dev(real_dev)) + subclass = 1; + vlan_dev_set_lockdep_class(dev, subclass); vlan_dev_priv(dev)->vlan_pcpu_stats = alloc_percpu(struct vlan_pcpu_stats); diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c index 8799e171add..0018daccdea 100644 --- a/net/appletalk/ddp.c +++ b/net/appletalk/ddp.c @@ -1489,6 +1489,8 @@ static int atalk_rcv(struct sk_buff *skb, struct net_device *dev, goto drop; /* Queue packet (standard) */ + skb->sk = sock; + if (sock_queue_rcv_skb(sock, skb) < 0) goto drop; @@ -1642,6 +1644,7 @@ static int atalk_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr if (!skb) goto out; + skb->sk = sk; skb_reserve(skb, ddp_dl->header_length); skb_reserve(skb, dev->hard_header_len); skb->dev = dev; diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index bf3f15ca752..1c24dcf17dd 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -663,17 +663,14 @@ static int hci_conn_auth(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->flags)) { struct hci_cp_auth_requested cp; + /* encrypt must be pending if auth is also pending */ + set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); + cp.handle = cpu_to_le16(conn->handle); hci_send_cmd(conn->hdev, HCI_OP_AUTH_REQUESTED, sizeof(cp), &cp); - - /* If we're already encrypted set the REAUTH_PEND flag, - * otherwise set the ENCRYPT_PEND. - */ - if (conn->link_mode & HCI_LM_ENCRYPT) + if (conn->key_type != 0xff) set_bit(HCI_CONN_REAUTH_PEND, &conn->flags); - else - set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); } return 0; diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 1526fb232b3..49d5c941c9c 100755 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -3063,12 +3063,6 @@ static void hci_key_refresh_complete_evt(struct hci_dev *hdev, if (!conn) goto unlock; - /* For BR/EDR the necessary steps are taken through the - * auth_complete event. - */ - if (conn->type != LE_LINK) - goto unlock; - if (!ev->status) conn->sec_level = conn->pending_sec_level; @@ -3230,11 +3224,8 @@ static void hci_user_confirm_request_evt(struct hci_dev *hdev, /* If we're not the initiators request authorization to * proceed from user space (mgmt_user_confirm with - * confirm_hint set to 1). The exception is if neither - * side had MITM in which case we do auto-accept. - */ - if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags) && - (loc_mitm || rem_mitm)) { + * confirm_hint set to 1). */ + if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags)) { BT_DBG("Confirming auto-accept as acceptor"); confirm_hint = 1; goto confirm; @@ -3640,13 +3631,7 @@ static void hci_le_ltk_request_evt(struct hci_dev *hdev, struct sk_buff *skb) hci_send_cmd(hdev, HCI_OP_LE_LTK_REPLY, sizeof(cp), &cp); - /* Ref. Bluetooth Core SPEC pages 1975 and 2004. 
STK is a - * temporary key used to encrypt a connection following - * pairing. It is used during the Encrypted Session Setup to - * distribute the keys. Later, security can be re-established - * using a distributed LTK. - */ - if (ltk->type == HCI_SMP_STK_SLAVE) { + if (ltk->type & HCI_SMP_STK) { list_del(&ltk->list); kfree(ltk); } } diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c index 88882377340..81b2de6db5a 100644 --- a/net/bluetooth/l2cap_sock.c +++ b/net/bluetooth/l2cap_sock.c @@ -887,8 +887,7 @@ static int l2cap_sock_shutdown(struct socket *sock, int how) l2cap_chan_close(chan, 0); lock_sock(sk); - if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime && - !(current->flags & PF_EXITING)) + if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime) err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime); } @@ -950,16 +949,13 @@ static struct l2cap_chan *l2cap_sock_new_connection_cb(struct l2cap_chan *chan) /* Check for backlog size */ if (sk_acceptq_is_full(parent)) { BT_DBG("backlog full %d", parent->sk_ack_backlog); - release_sock(parent); return NULL; } sk = l2cap_sock_alloc(sock_net(parent), NULL, BTPROTO_L2CAP, GFP_ATOMIC); - if (!sk) { - release_sock(parent); + if (!sk) return NULL; - } bt_sock_reclassify_lock(sk, BTPROTO_L2CAP); diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index 3e574540b2c..3817728878d 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -2333,13 +2333,8 @@ static int user_pairing_resp(struct sock *sk, struct hci_dev *hdev, } if (addr->type == BDADDR_LE_PUBLIC || addr->type == BDADDR_LE_RANDOM) { - /* Continue with pairing via SMP. The hdev lock must be - * released as SMP may try to recquire it for crypto - * purposes. - */ - hci_dev_unlock(hdev); + /* Continue with pairing via SMP */ err = smp_user_confirm_reply(conn, mgmt_op, passkey); - hci_dev_lock(hdev); if (!err) err = cmd_complete(sk, hdev->id, mgmt_op, diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c index 3ca5e40fe39..0c77476d33d 100644 --- a/net/bluetooth/rfcomm/core.c +++ b/net/bluetooth/rfcomm/core.c @@ -1856,13 +1856,10 @@ static struct rfcomm_session *rfcomm_process_rx(struct rfcomm_session *s) /* Get data directly from socket receive queue without copying it. 
*/ while ((skb = skb_dequeue(&sk->sk_receive_queue))) { skb_orphan(skb); - if (!skb_linearize(skb)) { + if (!skb_linearize(skb)) s = rfcomm_recv_frame(s, skb); - if (!s) - break; - } else { + else kfree_skb(skb); - } } if (s && (sk->sk_state == BT_CLOSED)) diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c index 7ca014daa5a..c1c6028e389 100644 --- a/net/bluetooth/rfcomm/sock.c +++ b/net/bluetooth/rfcomm/sock.c @@ -887,8 +887,7 @@ static int rfcomm_sock_shutdown(struct socket *sock, int how) sk->sk_shutdown = SHUTDOWN_MASK; __rfcomm_sock_close(sk); - if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime && - !(current->flags & PF_EXITING)) + if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime) err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime); } release_sock(sk); diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c index de9c955b247..3178c7b4a17 100644 --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -875,8 +875,7 @@ static int sco_sock_shutdown(struct socket *sock, int how) sco_sock_clear_timer(sk); __sco_sock_close(sk); - if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime && - !(current->flags & PF_EXITING)) + if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime) err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime); } @@ -896,8 +895,7 @@ static int sco_sock_release(struct socket *sock) sco_sock_close(sk); - if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime && - !(current->flags & PF_EXITING)) { + if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime) { lock_sock(sk); err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime); release_sock(sk); diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index 0a3bc82782c..828e2bcc1f5 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -71,7 +71,7 @@ int br_handle_frame_finish(struct sk_buff *skb) goto drop; if (!br_allowed_ingress(p->br, nbp_get_vlan_info(p), skb, &vid)) - goto out; + goto drop; /* insert into forwarding database after filtering to avoid spoofing */ br = p->br; diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 81de0106528..2a180a38018 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -1176,12 +1176,6 @@ static int br_ip6_multicast_query(struct net_bridge *br, br_multicast_query_received(br, port, !ipv6_addr_any(&ip6h->saddr)); - /* RFC2710+RFC3810 (MLDv1+MLDv2) require link-local source addresses */ - if (!(ipv6_addr_type(&ip6h->saddr) & IPV6_ADDR_LINKLOCAL)) { - err = -EINVAL; - goto out; - } - if (skb->len == sizeof(*mld)) { if (!pskb_may_pull(skb, sizeof(*mld))) { err = -EINVAL; diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c index f16e9e48775..06873e80a43 100644 --- a/net/bridge/br_netlink.c +++ b/net/bridge/br_netlink.c @@ -438,20 +438,6 @@ static int br_validate(struct nlattr *tb[], struct nlattr *data[]) return 0; } -static int br_dev_newlink(struct net *src_net, struct net_device *dev, - struct nlattr *tb[], struct nlattr *data[]) -{ - struct net_bridge *br = netdev_priv(dev); - - if (tb[IFLA_ADDRESS]) { - spin_lock_bh(&br->lock); - br_stp_change_bridge_id(br, nla_data(tb[IFLA_ADDRESS])); - spin_unlock_bh(&br->lock); - } - - return register_netdevice(dev); -} - static size_t br_get_link_af_size(const struct net_device *dev) { struct net_port_vlans *pv; @@ -480,7 +466,6 @@ struct rtnl_link_ops br_link_ops __read_mostly = { .priv_size = sizeof(struct net_bridge), .setup = br_dev_setup, .validate = br_validate, - .newlink = br_dev_newlink, .dellink = br_dev_delete, }; diff --git a/net/bridge/br_vlan.c 
b/net/bridge/br_vlan.c index d8deb8bda73..9a9ffe7e401 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -202,7 +202,7 @@ bool br_allowed_ingress(struct net_bridge *br, struct net_port_vlans *v, * rejected. */ if (!v) - goto drop; + return false; if (br_vlan_get_tag(skb, vid)) { u16 pvid = br_get_pvid(v); @@ -212,7 +212,7 @@ bool br_allowed_ingress(struct net_bridge *br, struct net_port_vlans *v, * traffic belongs to. */ if (pvid == VLAN_N_VID) - goto drop; + return false; /* PVID is set on this port. Any untagged ingress * frame is considered to belong to this vlan. @@ -224,8 +224,7 @@ bool br_allowed_ingress(struct net_bridge *br, struct net_port_vlans *v, /* Frame had a valid vlan tag. See if vlan is allowed */ if (test_bit(*vid, v->vlan_bitmap)) return true; -drop: - kfree_skb(skb); + return false; } diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c index 6651a7797d4..3d110c4fc78 100644 --- a/net/bridge/netfilter/ebtables.c +++ b/net/bridge/netfilter/ebtables.c @@ -1044,9 +1044,10 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl, if (repl->num_counters && copy_to_user(repl->counters, counterstmp, repl->num_counters * sizeof(struct ebt_counter))) { - /* Silent error, can't fail, new table is already in place */ - net_warn_ratelimited("ebtables: counters copy to user failed while replacing table\n"); + ret = -EFAULT; } + else + ret = 0; /* decrease module count and free resources */ EBT_ENTRY_ITERATE(table->entries, table->entries_size, diff --git a/net/can/gw.c b/net/can/gw.c index de25455b4e3..3ee690e8c7d 100644 --- a/net/can/gw.c +++ b/net/can/gw.c @@ -784,7 +784,7 @@ static int cgw_create_job(struct sk_buff *skb, struct nlmsghdr *nlh) struct cgw_job *gwj; int err = 0; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; if (nlmsg_len(nlh) < sizeof(*r)) @@ -876,7 +876,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh) struct can_can_gw ccgw; int err = 0; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; if (nlmsg_len(nlh) < sizeof(*r)) diff --git a/net/ceph/auth_x.c b/net/ceph/auth_x.c index de6662b14e1..96238ba95f2 100644 --- a/net/ceph/auth_x.c +++ b/net/ceph/auth_x.c @@ -13,6 +13,8 @@ #include "auth_x.h" #include "auth_x_protocol.h" +#define TEMP_TICKET_BUF_LEN 256 + static void ceph_x_validate_tickets(struct ceph_auth_client *ac, int *pneed); static int ceph_x_is_authenticated(struct ceph_auth_client *ac) @@ -62,7 +64,7 @@ static int ceph_x_encrypt(struct ceph_crypto_key *secret, } static int ceph_x_decrypt(struct ceph_crypto_key *secret, - void **p, void *end, void **obuf, size_t olen) + void **p, void *end, void *obuf, size_t olen) { struct ceph_x_encrypt_header head; size_t head_len = sizeof(head); @@ -73,14 +75,8 @@ static int ceph_x_decrypt(struct ceph_crypto_key *secret, return -EINVAL; dout("ceph_x_decrypt len %d\n", len); - if (*obuf == NULL) { - *obuf = kmalloc(len, GFP_NOFS); - if (!*obuf) - return -ENOMEM; - olen = len; - } - - ret = ceph_decrypt2(secret, &head, &head_len, *obuf, &olen, *p, len); + ret = ceph_decrypt2(secret, &head, &head_len, obuf, &olen, + *p, len); if (ret) return ret; if (head.struct_v != 1 || le64_to_cpu(head.magic) != CEPHX_ENC_MAGIC) @@ -133,120 +129,139 @@ static void remove_ticket_handler(struct ceph_auth_client *ac, kfree(th); } -static int process_one_ticket(struct ceph_auth_client *ac, - struct ceph_crypto_key *secret, - void **p, void *end) +static int 
ceph_x_proc_ticket_reply(struct ceph_auth_client *ac, + struct ceph_crypto_key *secret, + void *buf, void *end) { struct ceph_x_info *xi = ac->private; - int type; - u8 tkt_struct_v, blob_struct_v; - struct ceph_x_ticket_handler *th; - void *dbuf = NULL; - void *dp, *dend; - int dlen; - char is_enc; - struct timespec validity; - struct ceph_crypto_key old_key; - void *ticket_buf = NULL; - void *tp, *tpend; - struct ceph_timespec new_validity; - struct ceph_crypto_key new_session_key; - struct ceph_buffer *new_ticket_blob; - unsigned long new_expires, new_renew_after; - u64 new_secret_id; + int num; + void *p = buf; int ret; + char *dbuf; + char *ticket_buf; + u8 reply_struct_v; - ceph_decode_need(p, end, sizeof(u32) + 1, bad); + dbuf = kmalloc(TEMP_TICKET_BUF_LEN, GFP_NOFS); + if (!dbuf) + return -ENOMEM; - type = ceph_decode_32(p); - dout(" ticket type %d %s\n", type, ceph_entity_type_name(type)); + ret = -ENOMEM; + ticket_buf = kmalloc(TEMP_TICKET_BUF_LEN, GFP_NOFS); + if (!ticket_buf) + goto out_dbuf; - tkt_struct_v = ceph_decode_8(p); - if (tkt_struct_v != 1) + ceph_decode_need(&p, end, 1 + sizeof(u32), bad); + reply_struct_v = ceph_decode_8(&p); + if (reply_struct_v != 1) goto bad; + num = ceph_decode_32(&p); + dout("%d tickets\n", num); + while (num--) { + int type; + u8 tkt_struct_v, blob_struct_v; + struct ceph_x_ticket_handler *th; + void *dp, *dend; + int dlen; + char is_enc; + struct timespec validity; + struct ceph_crypto_key old_key; + void *tp, *tpend; + struct ceph_timespec new_validity; + struct ceph_crypto_key new_session_key; + struct ceph_buffer *new_ticket_blob; + unsigned long new_expires, new_renew_after; + u64 new_secret_id; + + ceph_decode_need(&p, end, sizeof(u32) + 1, bad); + + type = ceph_decode_32(&p); + dout(" ticket type %d %s\n", type, ceph_entity_type_name(type)); + + tkt_struct_v = ceph_decode_8(&p); + if (tkt_struct_v != 1) + goto bad; + + th = get_ticket_handler(ac, type); + if (IS_ERR(th)) { + ret = PTR_ERR(th); + goto out; + } - th = get_ticket_handler(ac, type); - if (IS_ERR(th)) { - ret = PTR_ERR(th); - goto out; - } + /* blob for me */ + dlen = ceph_x_decrypt(secret, &p, end, dbuf, + TEMP_TICKET_BUF_LEN); + if (dlen <= 0) { + ret = dlen; + goto out; + } + dout(" decrypted %d bytes\n", dlen); + dend = dbuf + dlen; + dp = dbuf; - /* blob for me */ - dlen = ceph_x_decrypt(secret, p, end, &dbuf, 0); - if (dlen <= 0) { - ret = dlen; - goto out; - } - dout(" decrypted %d bytes\n", dlen); - dp = dbuf; - dend = dp + dlen; + tkt_struct_v = ceph_decode_8(&dp); + if (tkt_struct_v != 1) + goto bad; - tkt_struct_v = ceph_decode_8(&dp); - if (tkt_struct_v != 1) - goto bad; + memcpy(&old_key, &th->session_key, sizeof(old_key)); + ret = ceph_crypto_key_decode(&new_session_key, &dp, dend); + if (ret) + goto out; - memcpy(&old_key, &th->session_key, sizeof(old_key)); - ret = ceph_crypto_key_decode(&new_session_key, &dp, dend); - if (ret) - goto out; + ceph_decode_copy(&dp, &new_validity, sizeof(new_validity)); + ceph_decode_timespec(&validity, &new_validity); + new_expires = get_seconds() + validity.tv_sec; + new_renew_after = new_expires - (validity.tv_sec / 4); + dout(" expires=%lu renew_after=%lu\n", new_expires, + new_renew_after); - ceph_decode_copy(&dp, &new_validity, sizeof(new_validity)); - ceph_decode_timespec(&validity, &new_validity); - new_expires = get_seconds() + validity.tv_sec; - new_renew_after = new_expires - (validity.tv_sec / 4); - dout(" expires=%lu renew_after=%lu\n", new_expires, - new_renew_after); - - /* ticket blob for service */ - 
ceph_decode_8_safe(p, end, is_enc, bad); - if (is_enc) { - /* encrypted */ - dout(" encrypted ticket\n"); - dlen = ceph_x_decrypt(&old_key, p, end, &ticket_buf, 0); - if (dlen < 0) { - ret = dlen; - goto out; - } + /* ticket blob for service */ + ceph_decode_8_safe(&p, end, is_enc, bad); tp = ticket_buf; - dlen = ceph_decode_32(&tp); - } else { - /* unencrypted */ - ceph_decode_32_safe(p, end, dlen, bad); - ticket_buf = kmalloc(dlen, GFP_NOFS); - if (!ticket_buf) { - ret = -ENOMEM; - goto out; + if (is_enc) { + /* encrypted */ + dout(" encrypted ticket\n"); + dlen = ceph_x_decrypt(&old_key, &p, end, ticket_buf, + TEMP_TICKET_BUF_LEN); + if (dlen < 0) { + ret = dlen; + goto out; + } + dlen = ceph_decode_32(&tp); + } else { + /* unencrypted */ + ceph_decode_32_safe(&p, end, dlen, bad); + ceph_decode_need(&p, end, dlen, bad); + ceph_decode_copy(&p, ticket_buf, dlen); } - tp = ticket_buf; - ceph_decode_need(p, end, dlen, bad); - ceph_decode_copy(p, ticket_buf, dlen); - } - tpend = tp + dlen; - dout(" ticket blob is %d bytes\n", dlen); - ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad); - blob_struct_v = ceph_decode_8(&tp); - new_secret_id = ceph_decode_64(&tp); - ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend); - if (ret) - goto out; + tpend = tp + dlen; + dout(" ticket blob is %d bytes\n", dlen); + ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad); + blob_struct_v = ceph_decode_8(&tp); + new_secret_id = ceph_decode_64(&tp); + ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend); + if (ret) + goto out; - /* all is well, update our ticket */ - ceph_crypto_key_destroy(&th->session_key); - if (th->ticket_blob) - ceph_buffer_put(th->ticket_blob); - th->session_key = new_session_key; - th->ticket_blob = new_ticket_blob; - th->validity = new_validity; - th->secret_id = new_secret_id; - th->expires = new_expires; - th->renew_after = new_renew_after; - dout(" got ticket service %d (%s) secret_id %lld len %d\n", - type, ceph_entity_type_name(type), th->secret_id, - (int)th->ticket_blob->vec.iov_len); - xi->have_keys |= th->service; + /* all is well, update our ticket */ + ceph_crypto_key_destroy(&th->session_key); + if (th->ticket_blob) + ceph_buffer_put(th->ticket_blob); + th->session_key = new_session_key; + th->ticket_blob = new_ticket_blob; + th->validity = new_validity; + th->secret_id = new_secret_id; + th->expires = new_expires; + th->renew_after = new_renew_after; + dout(" got ticket service %d (%s) secret_id %lld len %d\n", + type, ceph_entity_type_name(type), th->secret_id, + (int)th->ticket_blob->vec.iov_len); + xi->have_keys |= th->service; + } + ret = 0; out: kfree(ticket_buf); +out_dbuf: kfree(dbuf); return ret; @@ -255,34 +270,6 @@ bad: goto out; } -static int ceph_x_proc_ticket_reply(struct ceph_auth_client *ac, - struct ceph_crypto_key *secret, - void *buf, void *end) -{ - void *p = buf; - u8 reply_struct_v; - u32 num; - int ret; - - ceph_decode_8_safe(&p, end, reply_struct_v, bad); - if (reply_struct_v != 1) - return -EINVAL; - - ceph_decode_32_safe(&p, end, num, bad); - dout("%d tickets\n", num); - - while (num--) { - ret = process_one_ticket(ac, secret, &p, end); - if (ret) - return ret; - } - - return 0; - -bad: - return -EINVAL; -} - static int ceph_x_build_authorizer(struct ceph_auth_client *ac, struct ceph_x_ticket_handler *th, struct ceph_x_authorizer *au) @@ -596,14 +583,13 @@ static int ceph_x_verify_authorizer_reply(struct ceph_auth_client *ac, struct ceph_x_ticket_handler *th; int ret = 0; struct ceph_x_authorize_reply reply; - void *preply = &reply; void *p = 
au->reply_buf; void *end = p + sizeof(au->reply_buf); th = get_ticket_handler(ac, au->service); if (IS_ERR(th)) return PTR_ERR(th); - ret = ceph_x_decrypt(&th->session_key, &p, end, &preply, sizeof(reply)); + ret = ceph_x_decrypt(&th->session_key, &p, end, &reply, sizeof(reply)); if (ret < 0) return ret; if (ret != sizeof(reply)) diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c index e3bea2e0821..eb0a46a49bd 100644 --- a/net/ceph/messenger.c +++ b/net/ceph/messenger.c @@ -290,8 +290,7 @@ int ceph_msgr_init(void) if (ceph_msgr_slab_init()) return -ENOMEM; - ceph_msgr_wq = alloc_workqueue("ceph-msgr", - WQ_NON_REENTRANT | WQ_MEM_RECLAIM, 0); + ceph_msgr_wq = alloc_workqueue("ceph-msgr", WQ_NON_REENTRANT, 0); if (ceph_msgr_wq) return 0; @@ -557,7 +556,7 @@ static int ceph_tcp_sendmsg(struct socket *sock, struct kvec *iov, return r; } -static int __ceph_tcp_sendpage(struct socket *sock, struct page *page, +static int ceph_tcp_sendpage(struct socket *sock, struct page *page, int offset, size_t size, bool more) { int flags = MSG_DONTWAIT | MSG_NOSIGNAL | (more ? MSG_MORE : MSG_EOR); @@ -570,24 +569,6 @@ static int __ceph_tcp_sendpage(struct socket *sock, struct page *page, return ret; } -static int ceph_tcp_sendpage(struct socket *sock, struct page *page, - int offset, size_t size, bool more) -{ - int ret; - struct kvec iov; - - /* sendpage cannot properly handle pages with page_count == 0, - * we need to fallback to sendmsg if that's the case */ - if (page_count(page) >= 1) - return __ceph_tcp_sendpage(sock, page, offset, size, more); - - iov.iov_base = kmap(page) + offset; - iov.iov_len = size; - ret = ceph_tcp_sendmsg(sock, &iov, 1, size, more); - kunmap(page); - - return ret; -} /* * Shutdown/close the socket for the given connection. @@ -905,7 +886,7 @@ static void ceph_msg_data_pages_cursor_init(struct ceph_msg_data_cursor *cursor, BUG_ON(page_count > (int)USHRT_MAX); cursor->page_count = (unsigned short)page_count; BUG_ON(length > SIZE_MAX - cursor->page_offset); - cursor->last_piece = cursor->page_offset + cursor->resid <= PAGE_SIZE; + cursor->last_piece = (size_t)cursor->page_offset + length <= PAGE_SIZE; } static struct page * @@ -3145,7 +3126,7 @@ struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags, INIT_LIST_HEAD(&m->data); /* front */ - m->front_alloc_len = front_len; + m->front_max = front_len; if (front_len) { if (front_len > PAGE_CACHE_SIZE) { m->front.iov_base = __vmalloc(front_len, flags, @@ -3320,8 +3301,8 @@ EXPORT_SYMBOL(ceph_msg_last_put); void ceph_msg_dump(struct ceph_msg *msg) { - pr_debug("msg_dump %p (front_alloc_len %d length %zd)\n", msg, - msg->front_alloc_len, msg->data_length); + pr_debug("msg_dump %p (front_max %d length %zd)\n", msg, + msg->front_max, msg->data_length); print_hex_dump(KERN_DEBUG, "header: ", DUMP_PREFIX_OFFSET, 16, 1, &msg->hdr, sizeof(msg->hdr), true); diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c index dbcbf5a4707..1fe25cd29d0 100644 --- a/net/ceph/mon_client.c +++ b/net/ceph/mon_client.c @@ -152,7 +152,7 @@ static int __open_session(struct ceph_mon_client *monc) /* initiatiate authentication handshake */ ret = ceph_auth_build_hello(monc->auth, monc->m_auth->front.iov_base, - monc->m_auth->front_alloc_len); + monc->m_auth->front_max); __send_prepared_auth_request(monc, ret); } else { dout("open_session mon%d already open\n", monc->cur_mon); @@ -196,7 +196,7 @@ static void __send_subscribe(struct ceph_mon_client *monc) int num; p = msg->front.iov_base; - end = p + msg->front_alloc_len; + end = p + 
msg->front_max; num = 1 + !!monc->want_next_osdmap + !!monc->want_mdsmap; ceph_encode_32(&p, num); @@ -897,7 +897,7 @@ static void handle_auth_reply(struct ceph_mon_client *monc, ret = ceph_handle_auth_reply(monc->auth, msg->front.iov_base, msg->front.iov_len, monc->m_auth->front.iov_base, - monc->m_auth->front_alloc_len); + monc->m_auth->front_max); if (ret < 0) { monc->client->auth_err = ret; wake_up_all(&monc->client->auth_wq); @@ -939,7 +939,7 @@ static int __validate_auth(struct ceph_mon_client *monc) return 0; ret = ceph_build_auth(monc->auth, monc->m_auth->front.iov_base, - monc->m_auth->front_alloc_len); + monc->m_auth->front_max); if (ret <= 0) return ret; /* either an error, or no need to authenticate */ __send_prepared_auth_request(monc, ret); @@ -1041,15 +1041,7 @@ static struct ceph_msg *mon_alloc_msg(struct ceph_connection *con, if (!m) { pr_info("alloc_msg unknown type %d\n", type); *skip = 1; - } else if (front_len > m->front_alloc_len) { - pr_warning("mon_alloc_msg front %d > prealloc %d (%u#%llu)\n", - front_len, m->front_alloc_len, - (unsigned int)con->peer_name.type, - le64_to_cpu(con->peer_name.num)); - ceph_msg_put(m); - m = ceph_msg_new(type, front_len, GFP_NOFS, false); } - return m; } diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c index 3663a305daf..bc0016e3e5a 100644 --- a/net/ceph/osd_client.c +++ b/net/ceph/osd_client.c @@ -1225,22 +1225,6 @@ void ceph_osdc_set_request_linger(struct ceph_osd_client *osdc, EXPORT_SYMBOL(ceph_osdc_set_request_linger); /* - * Returns whether a request should be blocked from being sent - * based on the current osdmap and osd_client settings. - * - * Caller should hold map_sem for read. - */ -static bool __req_should_be_paused(struct ceph_osd_client *osdc, - struct ceph_osd_request *req) -{ - bool pauserd = ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_PAUSERD); - bool pausewr = ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_PAUSEWR) || - ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_FULL); - return (req->r_flags & CEPH_OSD_FLAG_READ && pauserd) || - (req->r_flags & CEPH_OSD_FLAG_WRITE && pausewr); -} - -/* * Pick an osd (the first 'up' osd in the pg), allocate the osd struct * (as needed), and set the request r_osd appropriately. If there is * no up osd, set r_osd to NULL. Move the request to the appropriate list @@ -1257,7 +1241,6 @@ static int __map_request(struct ceph_osd_client *osdc, int acting[CEPH_PG_MAX_SIZE]; int o = -1, num = 0; int err; - bool was_paused; dout("map_request %p tid %lld\n", req, req->r_tid); err = ceph_calc_ceph_pg(&pgid, req->r_oid, osdc->osdmap, @@ -1274,18 +1257,12 @@ static int __map_request(struct ceph_osd_client *osdc, num = err; } - was_paused = req->r_paused; - req->r_paused = __req_should_be_paused(osdc, req); - if (was_paused && !req->r_paused) - force_resend = 1; - if ((!force_resend && req->r_osd && req->r_osd->o_osd == o && req->r_sent >= req->r_osd->o_incarnation && req->r_num_pg_osds == num && memcmp(req->r_pg_osds, acting, sizeof(acting[0])*num) == 0) || - (req->r_osd == NULL && o == -1) || - req->r_paused) + (req->r_osd == NULL && o == -1)) return 0; /* no change */ dout("map_request tid %llu pgid %lld.%x osd%d (was osd%d)\n", @@ -1629,17 +1606,14 @@ static void reset_changed_osds(struct ceph_osd_client *osdc) * * Caller should hold map_sem for read. 
*/ -static void kick_requests(struct ceph_osd_client *osdc, bool force_resend, - bool force_resend_writes) +static void kick_requests(struct ceph_osd_client *osdc, int force_resend) { struct ceph_osd_request *req, *nreq; struct rb_node *p; int needmap = 0; int err; - bool force_resend_req; - dout("kick_requests %s %s\n", force_resend ? " (force resend)" : "", - force_resend_writes ? " (force resend writes)" : ""); + dout("kick_requests %s\n", force_resend ? " (force resend)" : ""); mutex_lock(&osdc->request_mutex); for (p = rb_first(&osdc->requests); p; ) { req = rb_entry(p, struct ceph_osd_request, r_node); @@ -1664,10 +1638,7 @@ static void kick_requests(struct ceph_osd_client *osdc, bool force_resend, continue; } - force_resend_req = force_resend || - (force_resend_writes && - req->r_flags & CEPH_OSD_FLAG_WRITE); - err = __map_request(osdc, req, force_resend_req); + err = __map_request(osdc, req, force_resend); if (err < 0) continue; /* error */ if (req->r_osd == NULL) { @@ -1687,8 +1658,7 @@ static void kick_requests(struct ceph_osd_client *osdc, bool force_resend, r_linger_item) { dout("linger req=%p req->r_osd=%p\n", req, req->r_osd); - err = __map_request(osdc, req, - force_resend || force_resend_writes); + err = __map_request(osdc, req, force_resend); dout("__map_request returned %d\n", err); if (err == 0) continue; /* no change and no osd was specified */ @@ -1730,7 +1700,6 @@ void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg) struct ceph_osdmap *newmap = NULL, *oldmap; int err; struct ceph_fsid fsid; - bool was_full; dout("handle_map have %u\n", osdc->osdmap ? osdc->osdmap->epoch : 0); p = msg->front.iov_base; @@ -1744,8 +1713,6 @@ void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg) down_write(&osdc->map_sem); - was_full = ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_FULL); - /* incremental maps */ ceph_decode_32_safe(&p, end, nr_maps, bad); dout(" %d inc maps\n", nr_maps); @@ -1770,10 +1737,7 @@ void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg) ceph_osdmap_destroy(osdc->osdmap); osdc->osdmap = newmap; } - was_full = was_full || - ceph_osdmap_flag(osdc->osdmap, - CEPH_OSDMAP_FULL); - kick_requests(osdc, 0, was_full); + kick_requests(osdc, 0); } else { dout("ignoring incremental map %u len %d\n", epoch, maplen); @@ -1816,10 +1780,7 @@ void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg) skipped_map = 1; ceph_osdmap_destroy(oldmap); } - was_full = was_full || - ceph_osdmap_flag(osdc->osdmap, - CEPH_OSDMAP_FULL); - kick_requests(osdc, skipped_map, was_full); + kick_requests(osdc, skipped_map); } p += maplen; nr_maps--; @@ -1836,9 +1797,7 @@ done: * we find out when we are no longer full and stop returning * ENOSPC. 
*/ - if (ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_FULL) || - ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_PAUSERD) || - ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_PAUSEWR)) + if (ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_FULL)) ceph_monc_request_next_osdmap(&osdc->client->monc); mutex_lock(&osdc->request_mutex); diff --git a/net/compat.c b/net/compat.c index cbc1a2a2658..f50161fb812 100644 --- a/net/compat.c +++ b/net/compat.c @@ -85,7 +85,7 @@ int verify_compat_iovec(struct msghdr *kern_msg, struct iovec *kern_iov, { int tot_len; - if (kern_msg->msg_name && kern_msg->msg_namelen) { + if (kern_msg->msg_namelen) { if (mode == VERIFY_READ) { int err = move_addr_to_kernel(kern_msg->msg_name, kern_msg->msg_namelen, @@ -93,11 +93,10 @@ int verify_compat_iovec(struct msghdr *kern_msg, struct iovec *kern_iov, if (err < 0) return err; } - kern_msg->msg_name = kern_address; - } else { + if (kern_msg->msg_name) + kern_msg->msg_name = kern_address; + } else kern_msg->msg_name = NULL; - kern_msg->msg_namelen = 0; - } tot_len = iov_from_user_compat_to_kern(kern_iov, (struct compat_iovec __user *)kern_msg->msg_iov, diff --git a/net/core/dev.c b/net/core/dev.c index cca7ae0ba91..a0e55ffc03c 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -3898,7 +3898,6 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb) skb->vlan_tci = 0; skb->dev = napi->dev; skb->skb_iif = 0; - skb->truesize = SKB_TRUESIZE(skb_end_offset(skb)); napi->skb = skb; } @@ -4635,7 +4634,6 @@ void __dev_set_rx_mode(struct net_device *dev) if (ops->ndo_set_rx_mode) ops->ndo_set_rx_mode(dev); } -EXPORT_SYMBOL(__dev_set_rx_mode); void dev_set_rx_mode(struct net_device *dev) { @@ -5827,9 +5825,6 @@ EXPORT_SYMBOL(unregister_netdevice_queue); /** * unregister_netdevice_many - unregister many devices * @head: list of devices - * - * Note: As most callers use a stack allocated list_head, - * we force a list_del() to make sure stack wont be corrupted later. 
*/ void unregister_netdevice_many(struct list_head *head) { @@ -5839,7 +5834,6 @@ void unregister_netdevice_many(struct list_head *head) rollback_registered_many(head); list_for_each_entry(dev, head, unreg_list) net_set_todo(dev); - list_del(head); } } EXPORT_SYMBOL(unregister_netdevice_many); @@ -6256,6 +6250,7 @@ static void __net_exit default_device_exit_batch(struct list_head *net_list) } } unregister_netdevice_many(&dev_kill_list); + list_del(&dev_kill_list); rtnl_unlock(); } diff --git a/net/core/dst.c b/net/core/dst.c index c0e021871df..df9cc810ec8 100644 --- a/net/core/dst.c +++ b/net/core/dst.c @@ -267,15 +267,6 @@ again: } EXPORT_SYMBOL(dst_destroy); -static void dst_destroy_rcu(struct rcu_head *head) -{ - struct dst_entry *dst = container_of(head, struct dst_entry, rcu_head); - - dst = dst_destroy(dst); - if (dst) - __dst_free(dst); -} - void dst_release(struct dst_entry *dst) { if (dst) { @@ -283,8 +274,11 @@ void dst_release(struct dst_entry *dst) newrefcnt = atomic_dec_return(&dst->__refcnt); WARN_ON(newrefcnt < 0); - if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt) - call_rcu(&dst->rcu_head, dst_destroy_rcu); + if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt) { + dst = dst_destroy(dst); + if (dst) + __dst_free(dst); + } } } EXPORT_SYMBOL(dst_release); diff --git a/net/core/filter.c b/net/core/filter.c index c6c18d8a2d8..52f01229ee0 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -355,8 +355,6 @@ load_b: if (skb_is_nonlinear(skb)) return 0; - if (skb->len < sizeof(struct nlattr)) - return 0; if (A > skb->len - sizeof(struct nlattr)) return 0; @@ -373,13 +371,11 @@ load_b: if (skb_is_nonlinear(skb)) return 0; - if (skb->len < sizeof(struct nlattr)) - return 0; if (A > skb->len - sizeof(struct nlattr)) return 0; nla = (struct nlattr *)&skb->data[A]; - if (nla->nla_len > skb->len - A) + if (nla->nla_len > A - skb->len) return 0; nla = nla_find_nested(nla, X); diff --git a/net/core/iovec.c b/net/core/iovec.c index 1117a26a854..9a31515fb8e 100644 --- a/net/core/iovec.c +++ b/net/core/iovec.c @@ -39,7 +39,7 @@ int verify_iovec(struct msghdr *m, struct iovec *iov, struct sockaddr_storage *a { int size, ct, err; - if (m->msg_name && m->msg_namelen) { + if (m->msg_namelen) { if (mode == VERIFY_READ) { void __user *namep; namep = (void __user __force *) m->msg_name; @@ -48,10 +48,10 @@ int verify_iovec(struct msghdr *m, struct iovec *iov, struct sockaddr_storage *a if (err < 0) return err; } - m->msg_name = address; + if (m->msg_name) + m->msg_name = address; } else { m->msg_name = NULL; - m->msg_namelen = 0; } size = m->msg_iovlen * sizeof(struct iovec); @@ -107,10 +107,6 @@ EXPORT_SYMBOL(memcpy_toiovecend); int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, int offset, int len) { - /* No data? Done! 
*/ - if (len == 0) - return 0; - /* Skip over the finished iovecs */ while (offset >= iov->iov_len) { offset -= iov->iov_len; diff --git a/net/core/neighbour.c b/net/core/neighbour.c index b49e8bafab1..49aeab86f31 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -764,6 +764,9 @@ static void neigh_periodic_work(struct work_struct *work) nht = rcu_dereference_protected(tbl->nht, lockdep_is_held(&tbl->lock)); + if (atomic_read(&tbl->entries) < tbl->gc_thresh1) + goto out; + /* * periodically recompute ReachableTime from random function */ @@ -776,9 +779,6 @@ static void neigh_periodic_work(struct work_struct *work) neigh_rand_reach_time(p->base_reachable_time); } - if (atomic_read(&tbl->entries) < tbl->gc_thresh1) - goto out; - for (i = 0 ; i < (1 << nht->hash_shift); i++) { np = &nht->hash_buckets[i]; diff --git a/net/core/netpoll.c b/net/core/netpoll.c index e861438d545..433a1051d32 100644 --- a/net/core/netpoll.c +++ b/net/core/netpoll.c @@ -745,7 +745,7 @@ static bool pkt_is_ns(struct sk_buff *skb) struct nd_msg *msg; struct ipv6hdr *hdr; - if (skb->protocol != htons(ETH_P_IPV6)) + if (skb->protocol != htons(ETH_P_ARP)) return false; if (!pskb_may_pull(skb, sizeof(struct ipv6hdr) + sizeof(struct nd_msg))) return false; diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index ae43dd807bb..fd01eca52a1 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -714,8 +714,7 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev, return 0; } -static size_t rtnl_port_size(const struct net_device *dev, - u32 ext_filter_mask) +static size_t rtnl_port_size(const struct net_device *dev) { size_t port_size = nla_total_size(4) /* PORT_VF */ + nla_total_size(PORT_PROFILE_MAX) /* PORT_PROFILE */ @@ -731,8 +730,7 @@ static size_t rtnl_port_size(const struct net_device *dev, size_t port_self_size = nla_total_size(sizeof(struct nlattr)) + port_size; - if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent || - !(ext_filter_mask & RTEXT_FILTER_VF)) + if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent) return 0; if (dev_num_vf(dev->dev.parent)) return port_self_size + vf_ports_size + @@ -767,7 +765,7 @@ static noinline size_t if_nlmsg_size(const struct net_device *dev, + nla_total_size(ext_filter_mask & RTEXT_FILTER_VF ? 
4 : 0) /* IFLA_NUM_VF */ + rtnl_vfinfo_size(dev, ext_filter_mask) /* IFLA_VFINFO_LIST */ - + rtnl_port_size(dev, ext_filter_mask) /* IFLA_VF_PORTS + IFLA_PORT_SELF */ + + rtnl_port_size(dev) /* IFLA_VF_PORTS + IFLA_PORT_SELF */ + rtnl_link_get_size(dev) /* IFLA_LINKINFO */ + rtnl_link_get_af_size(dev); /* IFLA_AF_SPEC */ } @@ -828,13 +826,11 @@ static int rtnl_port_self_fill(struct sk_buff *skb, struct net_device *dev) return 0; } -static int rtnl_port_fill(struct sk_buff *skb, struct net_device *dev, - u32 ext_filter_mask) +static int rtnl_port_fill(struct sk_buff *skb, struct net_device *dev) { int err; - if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent || - !(ext_filter_mask & RTEXT_FILTER_VF)) + if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent) return 0; err = rtnl_port_self_fill(skb, dev); @@ -989,7 +985,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev, nla_nest_end(skb, vfinfo); } - if (rtnl_port_fill(skb, dev, ext_filter_mask)) + if (rtnl_port_fill(skb, dev)) goto nla_put_failure; if (dev->rtnl_link_ops) { @@ -1043,8 +1039,6 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb) struct hlist_head *head; struct nlattr *tb[IFLA_MAX+1]; u32 ext_filter_mask = 0; - int err; - int hdrlen; s_h = cb->args[0]; s_idx = cb->args[1]; @@ -1052,17 +1046,8 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb) rcu_read_lock(); cb->seq = net->dev_base_seq; - /* A hack to preserve kernel<->userspace interface. - * The correct header is ifinfomsg. It is consistent with rtnl_getlink. - * However, before Linux v3.9 the code here assumed rtgenmsg and that's - * what iproute2 < v3.9.0 used. - * We can detect the old iproute2. Even including the IFLA_EXT_MASK - * attribute, its netlink message is shorter than struct ifinfomsg. - */ - hdrlen = nlmsg_len(cb->nlh) < sizeof(struct ifinfomsg) ? 
- sizeof(struct rtgenmsg) : sizeof(struct ifinfomsg); - - if (nlmsg_parse(cb->nlh, hdrlen, tb, IFLA_MAX, ifla_policy) >= 0) { + if (nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb, IFLA_MAX, + ifla_policy) >= 0) { if (tb[IFLA_EXT_MASK]) ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); @@ -1074,17 +1059,11 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb) hlist_for_each_entry_rcu(dev, head, index_hlist) { if (idx < s_idx) goto cont; - err = rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK, - NETLINK_CB(cb->skb).portid, - cb->nlh->nlmsg_seq, 0, - NLM_F_MULTI, - ext_filter_mask); - /* If we ran out of room on the first message, - * we're in trouble - */ - WARN_ON((err == -EMSGSIZE) && (skb->len == 0)); - - if (err <= 0) + if (rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK, + NETLINK_CB(cb->skb).portid, + cb->nlh->nlmsg_seq, 0, + NLM_F_MULTI, + ext_filter_mask) <= 0) goto out; nl_dump_check_consistent(cb, nlmsg_hdr(skb)); @@ -1304,8 +1283,7 @@ static int do_set_master(struct net_device *dev, int ifindex) return 0; } -static int do_setlink(const struct sk_buff *skb, - struct net_device *dev, struct ifinfomsg *ifm, +static int do_setlink(struct net_device *dev, struct ifinfomsg *ifm, struct nlattr **tb, char *ifname, int modified) { const struct net_device_ops *ops = dev->netdev_ops; @@ -1317,7 +1295,7 @@ static int do_setlink(const struct sk_buff *skb, err = PTR_ERR(net); goto errout; } - if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN)) { + if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) { err = -EPERM; goto errout; } @@ -1571,7 +1549,7 @@ static int rtnl_setlink(struct sk_buff *skb, struct nlmsghdr *nlh) if (err < 0) goto errout; - err = do_setlink(skb, dev, ifm, tb, ifname, 0); + err = do_setlink(dev, ifm, tb, ifname, 0); errout: return err; } @@ -1611,6 +1589,7 @@ static int rtnl_dellink(struct sk_buff *skb, struct nlmsghdr *nlh) ops->dellink(dev, &list_kill); unregister_netdevice_many(&list_kill); + list_del(&list_kill); return 0; } @@ -1688,8 +1667,7 @@ err: } EXPORT_SYMBOL(rtnl_create_link); -static int rtnl_group_changelink(const struct sk_buff *skb, - struct net *net, int group, +static int rtnl_group_changelink(struct net *net, int group, struct ifinfomsg *ifm, struct nlattr **tb) { @@ -1698,7 +1676,7 @@ static int rtnl_group_changelink(const struct sk_buff *skb, for_each_netdev(net, dev) { if (dev->group == group) { - err = do_setlink(skb, dev, ifm, tb, NULL, 0); + err = do_setlink(dev, ifm, tb, NULL, 0); if (err < 0) return err; } @@ -1800,12 +1778,12 @@ replay: modified = 1; } - return do_setlink(skb, dev, ifm, tb, ifname, modified); + return do_setlink(dev, ifm, tb, ifname, modified); } if (!(nlh->nlmsg_flags & NLM_F_CREATE)) { if (ifm->ifi_index == 0 && tb[IFLA_GROUP]) - return rtnl_group_changelink(skb, net, + return rtnl_group_changelink(net, nla_get_u32(tb[IFLA_GROUP]), ifm, tb); return -ENODEV; @@ -1917,13 +1895,9 @@ static u16 rtnl_calcit(struct sk_buff *skb, struct nlmsghdr *nlh) struct nlattr *tb[IFLA_MAX+1]; u32 ext_filter_mask = 0; u16 min_ifinfo_dump_size = 0; - int hdrlen; - - /* Same kernel<->userspace interface hack as in rtnl_dump_ifinfo. */ - hdrlen = nlmsg_len(nlh) < sizeof(struct ifinfomsg) ? 
- sizeof(struct rtgenmsg) : sizeof(struct ifinfomsg); - if (nlmsg_parse(nlh, hdrlen, tb, IFLA_MAX, ifla_policy) >= 0) { + if (nlmsg_parse(nlh, sizeof(struct ifinfomsg), tb, IFLA_MAX, + ifla_policy) >= 0) { if (tb[IFLA_EXT_MASK]) ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); } @@ -1999,13 +1973,12 @@ EXPORT_SYMBOL(rtmsg_ifinfo); static int nlmsg_populate_fdb_fill(struct sk_buff *skb, struct net_device *dev, u8 *addr, u32 pid, u32 seq, - int type, unsigned int flags, - int nlflags) + int type, unsigned int flags) { struct nlmsghdr *nlh; struct ndmsg *ndm; - nlh = nlmsg_put(skb, pid, seq, type, sizeof(*ndm), nlflags); + nlh = nlmsg_put(skb, pid, seq, type, sizeof(*ndm), NLM_F_MULTI); if (!nlh) return -EMSGSIZE; @@ -2043,7 +2016,7 @@ static void rtnl_fdb_notify(struct net_device *dev, u8 *addr, int type) if (!skb) goto errout; - err = nlmsg_populate_fdb_fill(skb, dev, addr, 0, 0, type, NTF_SELF, 0); + err = nlmsg_populate_fdb_fill(skb, dev, addr, 0, 0, type, NTF_SELF); if (err < 0) { kfree_skb(skb); goto errout; @@ -2194,7 +2167,7 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh) int err = -EINVAL; __u8 *addr; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; err = nlmsg_parse(nlh, sizeof(*ndm), tb, NDA_MAX, NULL); @@ -2276,8 +2249,7 @@ static int nlmsg_populate_fdb(struct sk_buff *skb, err = nlmsg_populate_fdb_fill(skb, dev, ha->addr, portid, seq, - RTM_NEWNEIGH, NTF_SELF, - NLM_F_MULTI); + RTM_NEWNEIGH, NTF_SELF); if (err < 0) return err; skip: @@ -2650,7 +2622,7 @@ static int rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) sz_idx = type>>2; kind = type&3; - if (kind != 2 && !netlink_net_capable(skb, CAP_NET_ADMIN)) + if (kind != 2 && !ns_capable(net->user_ns, CAP_NET_ADMIN)) return -EPERM; if (kind == 2 && nlh->nlmsg_flags&NLM_F_DUMP) { diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c index d0afc322b96..8d9d05edd2e 100644 --- a/net/core/secure_seq.c +++ b/net/core/secure_seq.c @@ -95,6 +95,31 @@ EXPORT_SYMBOL(secure_ipv6_port_ephemeral); #endif #ifdef CONFIG_INET +__u32 secure_ip_id(__be32 daddr) +{ + u32 hash[MD5_DIGEST_WORDS]; + + net_secret_init(); + hash[0] = (__force __u32) daddr; + hash[1] = net_secret[13]; + hash[2] = net_secret[14]; + hash[3] = net_secret[15]; + + md5_transform(hash, net_secret); + + return hash[0]; +} + +__u32 secure_ipv6_id(const __be32 daddr[4]) +{ + __u32 hash[4]; + + net_secret_init(); + memcpy(hash, daddr, 16); + md5_transform(hash, net_secret); + + return hash[0]; +} __u32 secure_tcp_sequence_number(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport) diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 6148716884a..79143b7af7e 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -2810,6 +2810,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) tail = nskb; __copy_skb_header(nskb, skb); + nskb->mac_len = skb->mac_len; /* nskb and skb might have different headroom */ if (nskb->ip_summed == CHECKSUM_PARTIAL) @@ -2819,7 +2820,6 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) skb_set_network_header(nskb, skb->mac_len); nskb->transport_header = (nskb->network_header + skb_network_header_len(skb)); - skb_reset_mac_len(nskb); skb_copy_from_linear_data_offset(skb, -tnl_hlen, nskb->data - tnl_hlen, @@ -2844,8 +2844,6 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) skb_shinfo(nskb)->tx_flags = skb_shinfo(skb)->tx_flags & SKBTX_SHARED_FRAG; while (pos < offset + len && i < nfrags) 
{ - if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC))) - goto err; *frag = skb_shinfo(skb)->frags[i]; __skb_frag_ref(frag); size = skb_frag_size(frag); @@ -3489,14 +3487,12 @@ EXPORT_SYMBOL(skb_try_coalesce); unsigned int skb_gso_transport_seglen(const struct sk_buff *skb) { const struct skb_shared_info *shinfo = skb_shinfo(skb); + unsigned int hdr_len; if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))) - return tcp_hdrlen(skb) + shinfo->gso_size; - - /* UFO sets gso_size to the size of the fragmentation - * payload, i.e. the size of the L4 (UDP) header is already - * accounted for. - */ - return shinfo->gso_size; + hdr_len = tcp_hdrlen(skb); + else + hdr_len = sizeof(struct udphdr); + return hdr_len + shinfo->gso_size; } EXPORT_SYMBOL_GPL(skb_gso_transport_seglen); diff --git a/net/core/sock.c b/net/core/sock.c index f5325353b90..9c81a348d50 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -146,55 +146,6 @@ static DEFINE_MUTEX(proto_list_mutex); static LIST_HEAD(proto_list); -/** - * sk_ns_capable - General socket capability test - * @sk: Socket to use a capability on or through - * @user_ns: The user namespace of the capability to use - * @cap: The capability to use - * - * Test to see if the opener of the socket had when the socket was - * created and the current process has the capability @cap in the user - * namespace @user_ns. - */ -bool sk_ns_capable(const struct sock *sk, - struct user_namespace *user_ns, int cap) -{ - return file_ns_capable(sk->sk_socket->file, user_ns, cap) && - ns_capable(user_ns, cap); -} -EXPORT_SYMBOL(sk_ns_capable); - -/** - * sk_capable - Socket global capability test - * @sk: Socket to use a capability on or through - * @cap: The global capbility to use - * - * Test to see if the opener of the socket had when the socket was - * created and the current process has the capability @cap in all user - * namespaces. - */ -bool sk_capable(const struct sock *sk, int cap) -{ - return sk_ns_capable(sk, &init_user_ns, cap); -} -EXPORT_SYMBOL(sk_capable); - -/** - * sk_net_capable - Network namespace socket capability test - * @sk: Socket to use a capability on or through - * @cap: The capability to use - * - * Test to see if the opener of the socket had when the socke was created - * and the current process has the capability @cap over the network namespace - * the socket is a member of. - */ -bool sk_net_capable(const struct sock *sk, int cap) -{ - return sk_ns_capable(sk, sock_net(sk)->user_ns, cap); -} -EXPORT_SYMBOL(sk_net_capable); - - #ifdef CONFIG_MEMCG_KMEM int mem_cgroup_sockets_init(struct mem_cgroup *memcg, struct cgroup_subsys *ss) { @@ -2362,13 +2313,10 @@ void release_sock(struct sock *sk) if (sk->sk_backlog.tail) __release_sock(sk); - /* Warning : release_cb() might need to release sk ownership, - * ie call sock_release_ownership(sk) before us. 
- */ if (sk->sk_prot->release_cb) sk->sk_prot->release_cb(sk); - sock_release_ownership(sk); + sk->sk_lock.owned = 0; if (waitqueue_active(&sk->sk_lock.wq)) wake_up(&sk->sk_lock.wq); spin_unlock_bh(&sk->sk_lock.slock); diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c index c38e7a2b5a8..a0e9cf6379d 100644 --- a/net/core/sock_diag.c +++ b/net/core/sock_diag.c @@ -49,7 +49,7 @@ int sock_diag_put_meminfo(struct sock *sk, struct sk_buff *skb, int attrtype) } EXPORT_SYMBOL_GPL(sock_diag_put_meminfo); -int sock_diag_put_filterinfo(bool may_report_filterinfo, struct sock *sk, +int sock_diag_put_filterinfo(struct user_namespace *user_ns, struct sock *sk, struct sk_buff *skb, int attrtype) { struct nlattr *attr; @@ -57,7 +57,7 @@ int sock_diag_put_filterinfo(bool may_report_filterinfo, struct sock *sk, unsigned int len; int err = 0; - if (!may_report_filterinfo) { + if (!ns_capable(user_ns, CAP_NET_ADMIN)) { nla_reserve(skb, attrtype, 0); return 0; } diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c index 1074ffb6d53..40d5829ed36 100644 --- a/net/dcb/dcbnl.c +++ b/net/dcb/dcbnl.c @@ -1670,7 +1670,7 @@ static int dcb_doit(struct sk_buff *skb, struct nlmsghdr *nlh) struct nlmsghdr *reply_nlh = NULL; const struct reply_func *fn; - if ((nlh->nlmsg_type == RTM_SETDCB) && !netlink_capable(skb, CAP_NET_ADMIN)) + if ((nlh->nlmsg_type == RTM_SETDCB) && !capable(CAP_NET_ADMIN)) return -EPERM; ret = nlmsg_parse(nlh, sizeof(*dcb), tb, DCB_ATTR_MAX, diff --git a/net/decnet/dn_dev.c b/net/decnet/dn_dev.c index b5e52100a89..7d9197063eb 100644 --- a/net/decnet/dn_dev.c +++ b/net/decnet/dn_dev.c @@ -573,7 +573,7 @@ static int dn_nl_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh) struct dn_ifaddr __rcu **ifap; int err = -EINVAL; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; if (!net_eq(net, &init_net)) @@ -617,7 +617,7 @@ static int dn_nl_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh) struct dn_ifaddr *ifa; int err; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; if (!net_eq(net, &init_net)) diff --git a/net/decnet/dn_fib.c b/net/decnet/dn_fib.c index d332aefb084..57dc159245e 100644 --- a/net/decnet/dn_fib.c +++ b/net/decnet/dn_fib.c @@ -505,7 +505,7 @@ static int dn_fib_rtm_delroute(struct sk_buff *skb, struct nlmsghdr *nlh) struct nlattr *attrs[RTA_MAX+1]; int err; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; if (!net_eq(net, &init_net)) @@ -530,7 +530,7 @@ static int dn_fib_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh) struct nlattr *attrs[RTA_MAX+1]; int err; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; if (!net_eq(net, &init_net)) diff --git a/net/decnet/netfilter/dn_rtmsg.c b/net/decnet/netfilter/dn_rtmsg.c index f3dc69a41d6..2a7efe38834 100644 --- a/net/decnet/netfilter/dn_rtmsg.c +++ b/net/decnet/netfilter/dn_rtmsg.c @@ -107,7 +107,7 @@ static inline void dnrmg_receive_user_skb(struct sk_buff *skb) if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len) return; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) RCV_SKB_FAIL(-EPERM); /* Eventually we might send routing messages too */ diff --git a/net/dns_resolver/dns_query.c b/net/dns_resolver/dns_query.c index 2022b46ab38..c32be292c7e 100644 --- a/net/dns_resolver/dns_query.c +++ b/net/dns_resolver/dns_query.c @@ -150,9 +150,7 @@ int dns_query(const char *type, const char *name, size_t namelen, if (!*_result) goto put; - 
memcpy(*_result, upayload->data, len); - (*_result)[len] = '\0'; - + memcpy(*_result, upayload->data, len + 1); if (_expiry) *_expiry = rkey->expiry; diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c index 5f3dc1df04b..19e36376d2a 100644 --- a/net/ipv4/datagram.c +++ b/net/ipv4/datagram.c @@ -86,26 +86,18 @@ out: } EXPORT_SYMBOL(ip4_datagram_connect); -/* Because UDP xmit path can manipulate sk_dst_cache without holding - * socket lock, we need to use sk_dst_set() here, - * even if we own the socket lock. - */ void ip4_datagram_release_cb(struct sock *sk) { const struct inet_sock *inet = inet_sk(sk); const struct ip_options_rcu *inet_opt; __be32 daddr = inet->inet_daddr; - struct dst_entry *dst; struct flowi4 fl4; struct rtable *rt; - rcu_read_lock(); - - dst = __sk_dst_get(sk); - if (!dst || !dst->obsolete || dst->ops->check(dst, 0)) { - rcu_read_unlock(); + if (! __sk_dst_get(sk) || __sk_dst_check(sk, 0)) return; - } + + rcu_read_lock(); inet_opt = rcu_dereference(inet->inet_opt); if (inet_opt && inet_opt->opt.srr) daddr = inet_opt->opt.faddr; @@ -113,10 +105,8 @@ void ip4_datagram_release_cb(struct sock *sk) inet->inet_saddr, inet->inet_dport, inet->inet_sport, sk->sk_protocol, RT_CONN_FLAGS(sk), sk->sk_bound_dev_if); - - dst = !IS_ERR(rt) ? &rt->dst : NULL; - sk_dst_set(sk, dst); - + if (!IS_ERR(rt)) + __sk_dst_set(sk, &rt->dst); rcu_read_unlock(); } EXPORT_SYMBOL_GPL(ip4_datagram_release_cb); diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c index bc773a10dca..8f6cb7a87cd 100644 --- a/net/ipv4/fib_semantics.c +++ b/net/ipv4/fib_semantics.c @@ -533,7 +533,7 @@ int fib_nh_match(struct fib_config *cfg, struct fib_info *fi) return 1; attrlen = rtnh_attrlen(rtnh); - if (attrlen > 0) { + if (attrlen < 0) { struct nlattr *nla, *attrs = rtnh_attrs(rtnh); nla = nla_find(attrs, attrlen, RTA_GATEWAY); @@ -818,13 +818,13 @@ struct fib_info *fib_create_info(struct fib_config *cfg) fi = kzalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL); if (fi == NULL) goto failure; - fib_info_cnt++; if (cfg->fc_mx) { fi->fib_metrics = kzalloc(sizeof(u32) * RTAX_MAX, GFP_KERNEL); if (!fi->fib_metrics) goto failure; } else fi->fib_metrics = (u32 *) dst_default_metrics; + fib_info_cnt++; fi->fib_net = hold_net(net); fi->fib_protocol = cfg->fc_protocol; diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 5af8781b65e..cc38f44306e 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -704,6 +704,8 @@ static void icmp_unreach(struct sk_buff *skb) &iph->daddr); } else { info = ntohs(icmph->un.frag.mtu); + if (!info) + goto out; } break; case ICMP_SR_FAILED: diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index 155adf8729c..089b4af4fec 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -343,7 +343,7 @@ static struct sk_buff *igmpv3_newpack(struct net_device *dev, int size) pip->saddr = fl4.saddr; pip->protocol = IPPROTO_IGMP; pip->tot_len = 0; /* filled in later */ - ip_select_ident(skb, NULL); + ip_select_ident(skb, &rt->dst, NULL); ((u8 *)&pip[1])[0] = IPOPT_RA; ((u8 *)&pip[1])[1] = 4; ((u8 *)&pip[1])[2] = 0; @@ -687,7 +687,7 @@ static int igmp_send_report(struct in_device *in_dev, struct ip_mc_list *pmc, iph->daddr = dst; iph->saddr = fl4.saddr; iph->protocol = IPPROTO_IGMP; - ip_select_ident(skb, NULL); + ip_select_ident(skb, &rt->dst, NULL); ((u8 *)&iph[1])[0] = IPOPT_RA; ((u8 *)&iph[1])[1] = 4; ((u8 *)&iph[1])[2] = 0; @@ -1874,10 +1874,6 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr) rtnl_lock(); in_dev = ip_mc_find_dev(net, imr); - if (!in_dev) { - ret = -ENODEV; - 
goto out; - } ifindex = imr->imr_ifindex; for (imlp = &inet->mc_list; (iml = rtnl_dereference(*imlp)) != NULL; @@ -1895,14 +1891,16 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr) *imlp = iml->next_rcu; - ip_mc_dec_group(in_dev, group); + if (in_dev) + ip_mc_dec_group(in_dev, group); rtnl_unlock(); /* decrease mem now to avoid the memleak warning */ atomic_sub(sizeof(*iml), &sk->sk_omem_alloc); kfree_rcu(iml, rcu); return 0; } -out: + if (!in_dev) + ret = -ENODEV; rtnl_unlock(); return ret; } diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c index af02a39175e..7e06641e36a 100644 --- a/net/ipv4/inet_fragment.c +++ b/net/ipv4/inet_fragment.c @@ -211,7 +211,7 @@ int inet_frag_evictor(struct netns_frags *nf, struct inet_frags *f, bool force) } work = frag_mem_limit(nf) - nf->low_thresh; - while (work > 0 || force) { + while (work > 0) { spin_lock(&nf->lru_lock); if (list_empty(&nf->lru_list)) { @@ -283,10 +283,9 @@ static struct inet_frag_queue *inet_frag_intern(struct netns_frags *nf, atomic_inc(&qp->refcnt); hlist_add_head(&qp->list, &hb->chain); - inet_frag_lru_add(nf, qp); spin_unlock(&hb->chain_lock); read_unlock(&f->lock); - + inet_frag_lru_add(nf, qp); return qp; } diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c index 67140efc15f..33d5537881e 100644 --- a/net/ipv4/inetpeer.c +++ b/net/ipv4/inetpeer.c @@ -26,7 +26,20 @@ * Theory of operations. * We keep one entry for each peer IP address. The nodes contains long-living * information about the peer which doesn't depend on routes. + * At this moment this information consists only of ID field for the next + * outgoing IP packet. This field is incremented with each packet as encoded + * in inet_getid() function (include/net/inetpeer.h). + * At the moment of writing this notes identifier of IP packets is generated + * to be unpredictable using this code only for packets subjected + * (actually or potentially) to defragmentation. I.e. DF packets less than + * PMTU in size when local fragmentation is disabled use a constant ID and do + * not use this code (see ip_select_ident() in include/net/ip.h). * + * Route cache entries hold references to our nodes. + * New cache entries get references via lookup by destination IP address in + * the avl tree. The reference is grabbed only when it's needed i.e. only + * when we try to output IP packet which needs an unpredictable ID (see + * __ip_select_ident() in net/ipv4/route.c). * Nodes are removed only when reference counter goes to 0. * When it's happened the node may be removed when a sufficient amount of * time has been passed since its last use. The less-recently-used entry can @@ -49,6 +62,7 @@ * refcnt: atomically against modifications on other CPU; * usually under some other lock to prevent node disappearing * daddr: unchangeable + * ip_id_count: atomic value (no lock needed) */ static struct kmem_cache *peer_cachep __read_mostly; @@ -490,6 +504,10 @@ relookup: p->daddr = *daddr; atomic_set(&p->refcnt, 1); atomic_set(&p->rid, 0); + atomic_set(&p->ip_id_count, + (daddr->family == AF_INET) ? 
+ secure_ip_id(daddr->addr.a4) : + secure_ipv6_id(daddr->addr.a6)); p->metrics[RTAX_LOCK-1] = INETPEER_METRICS_NEW; p->rate_tokens = 0; /* 60*HZ is arbitrary, but chosen enough high so that the first diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c index bd1c5baf69b..98d7e53d2af 100644 --- a/net/ipv4/ip_forward.c +++ b/net/ipv4/ip_forward.c @@ -42,12 +42,12 @@ static bool ip_may_fragment(const struct sk_buff *skb) { return unlikely((ip_hdr(skb)->frag_off & htons(IP_DF)) == 0) || - skb->local_df; + !skb->local_df; } static bool ip_exceeds_mtu(const struct sk_buff *skb, unsigned int mtu) { - if (skb->len <= mtu) + if (skb->len <= mtu || skb->local_df) return false; if (skb_is_gso(skb) && skb_gso_network_seglen(skb) <= mtu) diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c index fae5a845953..828b2e8631e 100644 --- a/net/ipv4/ip_gre.c +++ b/net/ipv4/ip_gre.c @@ -652,7 +652,6 @@ static const struct net_device_ops ipgre_netdev_ops = { static void ipgre_tunnel_setup(struct net_device *dev) { dev->netdev_ops = &ipgre_netdev_ops; - dev->type = ARPHRD_IPGRE; ip_tunnel_setup(dev, ipgre_net_id); } @@ -691,6 +690,7 @@ static int ipgre_tunnel_init(struct net_device *dev) memcpy(dev->dev_addr, &iph->saddr, 4); memcpy(dev->broadcast, &iph->daddr, 4); + dev->type = ARPHRD_IPGRE; dev->flags = IFF_NOARP; dev->priv_flags &= ~IFF_XMIT_DST_RELEASE; dev->addr_len = 4; diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c index 089ed81d187..ec7264514a8 100644 --- a/net/ipv4/ip_options.c +++ b/net/ipv4/ip_options.c @@ -288,10 +288,6 @@ int ip_options_compile(struct net *net, optptr++; continue; } - if (unlikely(l < 2)) { - pp_ptr = optptr; - goto error; - } optlen = optptr[1]; if (optlen<2 || optlen>l) { pp_ptr = optptr; diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 5e2e2ccc24e..7e94d6da35f 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -148,7 +148,7 @@ int ip_build_and_send_pkt(struct sk_buff *skb, struct sock *sk, iph->daddr = (opt && opt->opt.srr ? 
opt->opt.faddr : daddr); iph->saddr = saddr; iph->protocol = sk->sk_protocol; - ip_select_ident(skb, sk); + ip_select_ident(skb, &rt->dst, sk); if (opt && opt->opt.optlen) { iph->ihl += opt->opt.optlen>>2; @@ -394,7 +394,8 @@ packet_routed: ip_options_build(skb, &inet_opt->opt, inet->inet_daddr, rt, 0); } - ip_select_ident_segs(skb, sk, skb_shinfo(skb)->gso_segs ?: 1); + ip_select_ident_more(skb, &rt->dst, sk, + (skb_shinfo(skb)->gso_segs ?: 1) - 1); skb->priority = sk->sk_priority; skb->mark = sk->sk_mark; @@ -1331,7 +1332,7 @@ struct sk_buff *__ip_make_skb(struct sock *sk, iph->ttl = ttl; iph->protocol = sk->sk_protocol; ip_copy_addrs(iph, fl4); - ip_select_ident(skb, sk); + ip_select_ident(skb, &rt->dst, sk); if (opt) { iph->ihl += opt->optlen>>2; @@ -1481,7 +1482,6 @@ void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr, struct sk_buff *nskb; struct sock *sk; struct inet_sock *inet; - int err; if (ip_options_echo(&replyopts.opt.opt, skb)) return; @@ -1519,13 +1519,8 @@ void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr, sock_net_set(sk, net); __skb_queue_head_init(&sk->sk_write_queue); sk->sk_sndbuf = sysctl_wmem_default; - err = ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base, - len, 0, &ipc, &rt, MSG_DONTWAIT); - if (unlikely(err)) { - ip_flush_pending_frames(sk); - goto out; - } - + ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base, len, 0, + &ipc, &rt, MSG_DONTWAIT); nskb = skb_peek(&sk->sk_write_queue); if (nskb) { if (arg->csumoffset >= 0) @@ -1537,7 +1532,7 @@ void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr, skb_set_queue_mapping(nskb, skb_get_queue_mapping(skb)); ip_push_pending_frames(sk, &fl4); } -out: + put_cpu_var(unicast_sock); ip_rt_put(rt); diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c index 84aa69caee5..fa6573264c8 100644 --- a/net/ipv4/ip_tunnel.c +++ b/net/ipv4/ip_tunnel.c @@ -166,7 +166,6 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, hlist_for_each_entry_rcu(t, head, hash_node) { if (remote != t->parms.iph.daddr || - t->parms.iph.saddr != 0 || !(t->dev->flags & IFF_UP)) continue; @@ -183,11 +182,10 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, head = &itn->tunnels[hash]; hlist_for_each_entry_rcu(t, head, hash_node) { - if ((local != t->parms.iph.saddr || t->parms.iph.daddr != 0) && - (local != t->parms.iph.daddr || !ipv4_is_multicast(local))) - continue; - - if (!(t->dev->flags & IFF_UP)) + if ((local != t->parms.iph.saddr && + (local != t->parms.iph.daddr || + !ipv4_is_multicast(local))) || + !(t->dev->flags & IFF_UP)) continue; if (!ip_tunnel_key_match(&t->parms, flags, key)) @@ -204,8 +202,6 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, hlist_for_each_entry_rcu(t, head, hash_node) { if (t->parms.i_key != key || - t->parms.iph.saddr != 0 || - t->parms.iph.daddr != 0 || !(t->dev->flags & IFF_UP)) continue; @@ -691,7 +687,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, iph->daddr = fl4.daddr; iph->saddr = fl4.saddr; iph->ttl = ttl; - __ip_select_ident(iph, skb_shinfo(skb)->gso_segs ?: 1); + __ip_select_ident(iph, &rt->dst, (skb_shinfo(skb)->gso_segs ?: 1) - 1); iptunnel_xmit(skb, dev); return; diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c index 4ec34275160..feb19db6235 100644 --- a/net/ipv4/ip_vti.c +++ b/net/ipv4/ip_vti.c @@ -579,9 +579,9 @@ static void vti_dev_free(struct net_device *dev) static void vti_tunnel_setup(struct net_device *dev) { dev->netdev_ops = 
&vti_netdev_ops; - dev->type = ARPHRD_TUNNEL; dev->destructor = vti_dev_free; + dev->type = ARPHRD_TUNNEL; dev->hard_header_len = LL_MAX_HEADER + sizeof(struct iphdr); dev->mtu = ETH_DATA_LEN; dev->flags = IFF_NOARP; diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c index 897b784e9c0..f5cc7b33151 100644 --- a/net/ipv4/ipip.c +++ b/net/ipv4/ipip.c @@ -149,13 +149,13 @@ static int ipip_err(struct sk_buff *skb, u32 info) if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) { ipv4_update_pmtu(skb, dev_net(skb->dev), info, - t->parms.link, 0, IPPROTO_IPIP, 0); + t->dev->ifindex, 0, IPPROTO_IPIP, 0); err = 0; goto out; } if (type == ICMP_REDIRECT) { - ipv4_redirect(skb, dev_net(skb->dev), t->parms.link, 0, + ipv4_redirect(skb, dev_net(skb->dev), t->dev->ifindex, 0, IPPROTO_IPIP, 0); err = 0; goto out; @@ -483,5 +483,4 @@ static void __exit ipip_fini(void) module_init(ipip_init); module_exit(ipip_fini); MODULE_LICENSE("GPL"); -MODULE_ALIAS_RTNL_LINK("ipip"); MODULE_ALIAS_NETDEV("tunl0"); diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c index 56d079b63ad..7dbad683584 100644 --- a/net/ipv4/ipmr.c +++ b/net/ipv4/ipmr.c @@ -1661,7 +1661,7 @@ static void ip_encap(struct sk_buff *skb, __be32 saddr, __be32 daddr) iph->protocol = IPPROTO_IPIP; iph->ihl = 5; iph->tot_len = htons(skb->len); - ip_select_ident(skb, NULL); + ip_select_ident(skb, skb_dst(skb), NULL); ip_send_check(iph); memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); @@ -2255,14 +2255,13 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb, } static int ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb, - u32 portid, u32 seq, struct mfc_cache *c, int cmd, - int flags) + u32 portid, u32 seq, struct mfc_cache *c, int cmd) { struct nlmsghdr *nlh; struct rtmsg *rtm; int err; - nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rtm), flags); + nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rtm), NLM_F_MULTI); if (nlh == NULL) return -EMSGSIZE; @@ -2330,7 +2329,7 @@ static void mroute_netlink_event(struct mr_table *mrt, struct mfc_cache *mfc, if (skb == NULL) goto errout; - err = ipmr_fill_mroute(mrt, skb, 0, 0, mfc, cmd, 0); + err = ipmr_fill_mroute(mrt, skb, 0, 0, mfc, cmd); if (err < 0) goto errout; @@ -2369,8 +2368,7 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb) if (ipmr_fill_mroute(mrt, skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - mfc, RTM_NEWROUTE, - NLM_F_MULTI) < 0) + mfc, RTM_NEWROUTE) < 0) goto done; next_entry: e++; @@ -2384,8 +2382,7 @@ next_entry: if (ipmr_fill_mroute(mrt, skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - mfc, RTM_NEWROUTE, - NLM_F_MULTI) < 0) { + mfc, RTM_NEWROUTE) < 0) { spin_unlock_bh(&mfc_unres_lock); goto done; } diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c index c8abe31961e..85a4f21aac1 100644 --- a/net/ipv4/netfilter/arp_tables.c +++ b/net/ipv4/netfilter/arp_tables.c @@ -1039,10 +1039,8 @@ static int __do_replace(struct net *net, const char *name, xt_free_table_info(oldinfo); if (copy_to_user(counters_ptr, counters, - sizeof(struct xt_counters) * num_counters) != 0) { - /* Silent error, can't fail, new table is already in place */ - net_warn_ratelimited("arptables: counters copy to user failed while replacing table\n"); - } + sizeof(struct xt_counters) * num_counters) != 0) + ret = -EFAULT; vfree(counters); xt_table_unlock(t); return ret; diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c index 651c10774d5..d23118d95ff 100644 --- a/net/ipv4/netfilter/ip_tables.c +++ 
b/net/ipv4/netfilter/ip_tables.c @@ -1226,10 +1226,8 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks, xt_free_table_info(oldinfo); if (copy_to_user(counters_ptr, counters, - sizeof(struct xt_counters) * num_counters) != 0) { - /* Silent error, can't fail, new table is already in place */ - net_warn_ratelimited("iptables: counters copy to user failed while replacing table\n"); - } + sizeof(struct xt_counters) * num_counters) != 0) + ret = -EFAULT; vfree(counters); xt_table_unlock(t); return ret; diff --git a/net/ipv4/netfilter/ipt_ULOG.c b/net/ipv4/netfilter/ipt_ULOG.c index f8629c04f35..32b0e978c8e 100644 --- a/net/ipv4/netfilter/ipt_ULOG.c +++ b/net/ipv4/netfilter/ipt_ULOG.c @@ -220,7 +220,6 @@ static void ipt_ulog_packet(struct net *net, ub->qlen++; pm = nlmsg_data(nlh); - memset(pm, 0, sizeof(*pm)); /* We might not have a timestamp, get one */ if (skb->tstamp.tv64 == 0) @@ -239,6 +238,8 @@ static void ipt_ulog_packet(struct net *net, } else if (loginfo->prefix[0] != '\0') strncpy(pm->prefix, loginfo->prefix, sizeof(pm->prefix)); + else + *(pm->prefix) = '\0'; if (in && in->hard_header_len > 0 && skb->mac_header != skb->network_header && @@ -250,9 +251,13 @@ static void ipt_ulog_packet(struct net *net, if (in) strncpy(pm->indev_name, in->name, sizeof(pm->indev_name)); + else + pm->indev_name[0] = '\0'; if (out) strncpy(pm->outdev_name, out->name, sizeof(pm->outdev_name)); + else + pm->outdev_name[0] = '\0'; /* copy_len <= skb->len, so can't fail. */ if (skb_copy_bits(skb, 0, pm->payload, copy_len) < 0) diff --git a/net/ipv4/netfilter/nf_defrag_ipv4.c b/net/ipv4/netfilter/nf_defrag_ipv4.c index 4cfb3bd1677..742815518b0 100644 --- a/net/ipv4/netfilter/nf_defrag_ipv4.c +++ b/net/ipv4/netfilter/nf_defrag_ipv4.c @@ -22,6 +22,7 @@ #endif #include <net/netfilter/nf_conntrack_zones.h> +/* Returns new sk_buff, or NULL */ static int nf_ct_ipv4_gather_frags(struct sk_buff *skb, u_int32_t user) { int err; @@ -32,10 +33,8 @@ static int nf_ct_ipv4_gather_frags(struct sk_buff *skb, u_int32_t user) err = ip_defrag(skb, user); local_bh_enable(); - if (!err) { + if (!err) ip_send_check(ip_hdr(skb)); - skb->local_df = 1; - } return err; } diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c index 7122472d24a..5bf79864ca6 100644 --- a/net/ipv4/ping.c +++ b/net/ipv4/ping.c @@ -249,33 +249,26 @@ int ping_init_sock(struct sock *sk) { struct net *net = sock_net(sk); kgid_t group = current_egid(); - struct group_info *group_info; - int i, j, count; + struct group_info *group_info = get_current_groups(); + int i, j, count = group_info->ngroups; kgid_t low, high; - int ret = 0; inet_get_ping_group_range_net(net, &low, &high); if (gid_lte(low, group) && gid_lte(group, high)) return 0; - group_info = get_current_groups(); - count = group_info->ngroups; for (i = 0; i < group_info->nblocks; i++) { int cp_count = min_t(int, NGROUPS_PER_BLOCK, count); for (j = 0; j < cp_count; j++) { kgid_t gid = group_info->blocks[i][j]; if (gid_lte(low, gid) && gid_lte(gid, high)) - goto out_release_group; + return 0; } count -= cp_count; } - ret = -EACCES; - -out_release_group: - put_group_info(group_info); - return ret; + return -EACCES; } EXPORT_SYMBOL_GPL(ping_init_sock); diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c index d2b6e7ae170..448e5a77fa8 100644 --- a/net/ipv4/raw.c +++ b/net/ipv4/raw.c @@ -387,7 +387,7 @@ static int raw_send_hdrinc(struct sock *sk, struct flowi4 *fl4, iph->check = 0; iph->tot_len = htons(length); if (!iph->id) - ip_select_ident(skb, NULL); + ip_select_ident(skb, &rt->dst, NULL); 
iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl); } diff --git a/net/ipv4/route.c b/net/ipv4/route.c index d4d162eac4d..1a362f375e6 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -89,7 +89,6 @@ #include <linux/rcupdate.h> #include <linux/times.h> #include <linux/slab.h> -#include <linux/jhash.h> #include <net/dst.h> #include <net/net_namespace.h> #include <net/protocol.h> @@ -465,53 +464,39 @@ static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst, return neigh_create(&arp_tbl, pkey, dev); } -#define IP_IDENTS_SZ 2048u -struct ip_ident_bucket { - atomic_t id; - u32 stamp32; -}; - -static struct ip_ident_bucket *ip_idents __read_mostly; - -/* In order to protect privacy, we add a perturbation to identifiers - * if one generator is seldom used. This makes hard for an attacker - * to infer how many packets were sent between two points in time. +/* + * Peer allocation may fail only in serious out-of-memory conditions. However + * we still can generate some output. + * Random ID selection looks a bit dangerous because we have no chances to + * select ID being unique in a reasonable period of time. + * But broken packet identifier may be better than no packet at all. */ -u32 ip_idents_reserve(u32 hash, int segs) +static void ip_select_fb_ident(struct iphdr *iph) { - struct ip_ident_bucket *bucket = ip_idents + hash % IP_IDENTS_SZ; - u32 old = ACCESS_ONCE(bucket->stamp32); - u32 now = (u32)jiffies; - u32 delta = 0; - - if (old != now && cmpxchg(&bucket->stamp32, old, now) == old) { - u64 x = prandom_u32(); - - x *= (now - old); - delta = (u32)(x >> 32); - } + static DEFINE_SPINLOCK(ip_fb_id_lock); + static u32 ip_fallback_id; + u32 salt; - return atomic_add_return(segs + delta, &bucket->id) - segs; + spin_lock_bh(&ip_fb_id_lock); + salt = secure_ip_id((__force __be32)ip_fallback_id ^ iph->daddr); + iph->id = htons(salt & 0xFFFF); + ip_fallback_id = salt; + spin_unlock_bh(&ip_fb_id_lock); } -EXPORT_SYMBOL(ip_idents_reserve); -void __ip_select_ident(struct iphdr *iph, int segs) +void __ip_select_ident(struct iphdr *iph, struct dst_entry *dst, int more) { - static u32 ip_idents_hashrnd __read_mostly; - static bool hashrnd_initialized = false; - u32 hash, id; + struct net *net = dev_net(dst->dev); + struct inet_peer *peer; - if (unlikely(!hashrnd_initialized)) { - hashrnd_initialized = true; - get_random_bytes(&ip_idents_hashrnd, sizeof(ip_idents_hashrnd)); + peer = inet_getpeer_v4(net->ipv4.peers, iph->daddr, 1); + if (peer) { + iph->id = htons(inet_getid(peer, more)); + inet_putpeer(peer); + return; } - hash = jhash_3words((__force u32)iph->daddr, - (__force u32)iph->saddr, - iph->protocol, - ip_idents_hashrnd); - id = ip_idents_reserve(hash, segs); - iph->id = htons(id); + ip_select_fb_ident(iph); } EXPORT_SYMBOL(__ip_select_ident); @@ -1000,21 +985,20 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) const struct iphdr *iph = (const struct iphdr *) skb->data; struct flowi4 fl4; struct rtable *rt; - struct dst_entry *odst = NULL; + struct dst_entry *dst; bool new = false; bh_lock_sock(sk); - odst = sk_dst_get(sk); + rt = (struct rtable *) __sk_dst_get(sk); - if (sock_owned_by_user(sk) || !odst) { + if (sock_owned_by_user(sk) || !rt) { __ipv4_sk_update_pmtu(skb, sk, mtu); goto out; } __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0); - rt = (struct rtable *)odst; - if (odst->obsolete && odst->ops->check(odst, 0) == NULL) { + if (!__sk_dst_check(sk, 0)) { rt = ip_route_output_flow(sock_net(sk), &fl4, sk); if (IS_ERR(rt)) goto out; @@ -1024,7 
+1008,8 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) __ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu); - if (!dst_check(&rt->dst, 0)) { + dst = dst_check(&rt->dst, 0); + if (!dst) { if (new) dst_release(&rt->dst); @@ -1036,11 +1021,10 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) } if (new) - sk_dst_set(sk, &rt->dst); + __sk_dst_set(sk, &rt->dst); out: bh_unlock_sock(sk); - dst_release(odst); } EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu); @@ -1494,7 +1478,7 @@ static int __mkroute_input(struct sk_buff *skb, struct in_device *out_dev; unsigned int flags = 0; bool do_cache; - u32 itag = 0; + u32 itag; /* get a working reference to the output device */ out_dev = __in_dev_get_rcu(FIB_RES_DEV(*res)); @@ -2322,7 +2306,7 @@ static int rt_fill_info(struct net *net, __be32 dst, __be32 src, } } else #endif - if (nla_put_u32(skb, RTA_IIF, skb->dev->ifindex)) + if (nla_put_u32(skb, RTA_IIF, rt->rt_iif)) goto nla_put_failure; } @@ -2671,12 +2655,6 @@ int __init ip_rt_init(void) { int rc = 0; - ip_idents = kmalloc(IP_IDENTS_SZ * sizeof(*ip_idents), GFP_KERNEL); - if (!ip_idents) - panic("IP: failed to allocate ip_idents\n"); - - prandom_bytes(ip_idents, IP_IDENTS_SZ * sizeof(*ip_idents)); - #ifdef CONFIG_IP_ROUTE_CLASSID ip_rt_acct = __alloc_percpu(256 * sizeof(struct ip_rt_acct), __alignof__(struct ip_rt_acct)); if (!ip_rt_acct) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 02e743eeaa2..78411dad59e 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1005,8 +1005,7 @@ void tcp_free_fastopen_req(struct tcp_sock *tp) } } -static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, - int *copied, size_t size) +static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, int *size) { struct tcp_sock *tp = tcp_sk(sk); int err, flags; @@ -1021,12 +1020,11 @@ static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, if (unlikely(tp->fastopen_req == NULL)) return -ENOBUFS; tp->fastopen_req->data = msg; - tp->fastopen_req->size = size; flags = (msg->msg_flags & MSG_DONTWAIT) ? 
O_NONBLOCK : 0; err = __inet_stream_connect(sk->sk_socket, msg->msg_name, msg->msg_namelen, flags); - *copied = tp->fastopen_req->copied; + *size = tp->fastopen_req->copied; tcp_free_fastopen_req(tp); return err; } @@ -1046,7 +1044,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, flags = msg->msg_flags; if (flags & MSG_FASTOPEN) { - err = tcp_sendmsg_fastopen(sk, msg, &copied_syn, size); + err = tcp_sendmsg_fastopen(sk, msg, &copied_syn); if (err == -EINPROGRESS && copied_syn > 0) goto out; else if (err) @@ -1069,7 +1067,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, if (unlikely(tp->repair)) { if (tp->repair_queue == TCP_RECV_QUEUE) { copied = tcp_send_rcvq(sk, msg, size); - goto out_nopush; + goto out; } err = -EINVAL; @@ -1242,7 +1240,6 @@ wait_for_memory: out: if (copied) tcp_push(sk, flags, mss_now, tp->nonagle); -out_nopush: release_sock(sk); if (copied + copied_syn) diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c index 894b7cea5d7..b6ae92a51f5 100644 --- a/net/ipv4/tcp_cubic.c +++ b/net/ipv4/tcp_cubic.c @@ -408,7 +408,7 @@ static void bictcp_acked(struct sock *sk, u32 cnt, s32 rtt_us) ratio -= ca->delayed_ack >> ACK_RATIO_SHIFT; ratio += cnt; - ca->delayed_ack = clamp(ratio, 1U, ACK_RATIO_LIMIT); + ca->delayed_ack = min(ratio, ACK_RATIO_LIMIT); } /* Some calls are for duplicates without timetamps */ diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 7aa7faa7c3d..aa5f3bfebab 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -1076,7 +1076,7 @@ static bool tcp_check_dsack(struct sock *sk, const struct sk_buff *ack_skb, } /* D-SACK for already forgotten data... Do dumb counting. */ - if (dup_sack && tp->undo_marker && tp->undo_retrans > 0 && + if (dup_sack && tp->undo_marker && tp->undo_retrans && !after(end_seq_0, prior_snd_una) && after(end_seq_0, tp->undo_marker)) tp->undo_retrans--; @@ -1131,7 +1131,7 @@ static int tcp_match_skb_to_sack(struct sock *sk, struct sk_buff *skb, unsigned int new_len = (pkt_len / mss) * mss; if (!in_sack && new_len < pkt_len) { new_len += mss; - if (new_len >= skb->len) + if (new_len > skb->len) return 0; } pkt_len = new_len; @@ -1155,7 +1155,7 @@ static u8 tcp_sacktag_one(struct sock *sk, /* Account D-SACK for retransmitted packet. */ if (dup_sack && (sacked & TCPCB_RETRANS)) { - if (tp->undo_marker && tp->undo_retrans > 0 && + if (tp->undo_marker && tp->undo_retrans && after(end_seq, tp->undo_marker)) tp->undo_retrans--; if (sacked & TCPCB_SACKED_ACKED) @@ -1851,7 +1851,7 @@ static void tcp_clear_retrans_partial(struct tcp_sock *tp) tp->lost_out = 0; tp->undo_marker = 0; - tp->undo_retrans = -1; + tp->undo_retrans = 0; } void tcp_clear_retrans(struct tcp_sock *tp) @@ -2701,7 +2701,7 @@ static void tcp_enter_recovery(struct sock *sk, bool ece_ack) tp->prior_ssthresh = 0; tp->undo_marker = tp->snd_una; - tp->undo_retrans = tp->retrans_out ? : -1; + tp->undo_retrans = tp->retrans_out; if (inet_csk(sk)->icsk_ca_state < TCP_CA_CWR) { if (!ece_ack) @@ -2721,12 +2721,13 @@ static void tcp_process_loss(struct sock *sk, int flag, bool is_dupack) bool recovered = !before(tp->snd_una, tp->high_seq); if (tp->frto) { /* F-RTO RFC5682 sec 3.1 (sack enhanced version). */ - /* Step 3.b. A timeout is spurious if not all data are - * lost, i.e., never-retransmitted data are (s)acked. - */ - if (tcp_try_undo_loss(sk, flag & FLAG_ORIG_SACK_ACKED)) + if (flag & FLAG_ORIG_SACK_ACKED) { + /* Step 3.b. 
A timeout is spurious if not all data are + * lost, i.e., never-retransmitted data are (s)acked. + */ + tcp_try_undo_loss(sk, true); return; - + } if (after(tp->snd_nxt, tp->high_seq) && (flag & FLAG_DATA_SACKED || is_dupack)) { tp->frto = 0; /* Loss was real: 2nd part of step 3.a */ diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 89fa8077e51..395d69dfaa6 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -308,7 +308,7 @@ EXPORT_SYMBOL(tcp_v4_connect); * It can be called through tcp_release_cb() if socket was owned by user * at the time tcp_v4_err() was called to handle ICMP message. */ -void tcp_v4_mtu_reduced(struct sock *sk) +static void tcp_v4_mtu_reduced(struct sock *sk) { struct dst_entry *dst; struct inet_sock *inet = inet_sk(sk); @@ -338,7 +338,6 @@ void tcp_v4_mtu_reduced(struct sock *sk) tcp_simple_retransmit(sk); } /* else let the usual retransmit timer handle it */ } -EXPORT_SYMBOL(tcp_v4_mtu_reduced); static void do_redirect(struct sk_buff *skb, struct sock *sk) { @@ -2184,7 +2183,6 @@ const struct inet_connection_sock_af_ops ipv4_specific = { .compat_setsockopt = compat_ip_setsockopt, .compat_getsockopt = compat_ip_getsockopt, #endif - .mtu_reduced = tcp_v4_mtu_reduced, }; EXPORT_SYMBOL(ipv4_specific); @@ -2920,6 +2918,7 @@ struct proto tcp_prot = { .sendpage = tcp_sendpage, .backlog_rcv = tcp_v4_do_rcv, .release_cb = tcp_release_cb, + .mtu_reduced = tcp_v4_mtu_reduced, .hash = inet_hash, .unhash = inet_unhash, .get_port = inet_csk_get_port, diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 1fd846463d3..5dc91c31be2 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -754,17 +754,6 @@ void tcp_release_cb(struct sock *sk) if (flags & (1UL << TCP_TSQ_DEFERRED)) tcp_tsq_handler(sk); - /* Here begins the tricky part : - * We are called from release_sock() with : - * 1) BH disabled - * 2) sk_lock.slock spinlock held - * 3) socket owned by us (sk->sk_lock.owned == 1) - * - * But following code is meant to be called from BH handlers, - * so we should keep BH disabled, but early release socket ownership - */ - sock_release_ownership(sk); - if (flags & (1UL << TCP_WRITE_TIMER_DEFERRED)) { tcp_write_timer_handler(sk); __sock_put(sk); @@ -774,7 +763,7 @@ void tcp_release_cb(struct sock *sk) __sock_put(sk); } if (flags & (1UL << TCP_MTU_REDUCED_DEFERRED)) { - inet_csk(sk)->icsk_af_ops->mtu_reduced(sk); + sk->sk_prot->mtu_reduced(sk); __sock_put(sk); } } @@ -2035,7 +2024,9 @@ void tcp_send_loss_probe(struct sock *sk) if (WARN_ON(!skb || !tcp_skb_pcount(skb))) goto rearm_timer; - err = __tcp_retransmit_skb(sk, skb); + /* Probe with zero data doesn't trigger fast recovery. */ + if (skb->len > 0) + err = __tcp_retransmit_skb(sk, skb); /* Record snd_nxt for loss detection. */ if (likely(!err)) @@ -2425,15 +2416,13 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb) if (!tp->retrans_stamp) tp->retrans_stamp = TCP_SKB_CB(skb)->when; + tp->undo_retrans += tcp_skb_pcount(skb); + /* snd_nxt is stored to detect loss of retransmitted segment, * see tcp_input.c tcp_sacktag_write_queue(). 
*/ TCP_SKB_CB(skb)->ack_seq = tp->snd_nxt; } - - if (tp->undo_retrans < 0) - tp->undo_retrans = 0; - tp->undo_retrans += tcp_skb_pcount(skb); return err; } @@ -2902,12 +2891,7 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) space = __tcp_mtu_to_mss(sk, inet_csk(sk)->icsk_pmtu_cookie) - MAX_TCP_OPTION_SPACE; - space = min_t(size_t, space, fo->size); - - /* limit to order-0 allocations */ - space = min_t(size_t, space, SKB_MAX_HEAD(MAX_TCP_HEADER)); - - syn_data = skb_copy_expand(syn, MAX_TCP_HEADER, space, + syn_data = skb_copy_expand(syn, skb_headroom(syn), space, sk->sk_allocation); if (syn_data == NULL) goto fallback; diff --git a/net/ipv4/tcp_vegas.c b/net/ipv4/tcp_vegas.c index c042e529a11..80fa2bfd7ed 100644 --- a/net/ipv4/tcp_vegas.c +++ b/net/ipv4/tcp_vegas.c @@ -218,8 +218,7 @@ static void tcp_vegas_cong_avoid(struct sock *sk, u32 ack, u32 in_flight) * This is: * (actual rate in segments) * baseRTT */ - target_cwnd = (u64)tp->snd_cwnd * vegas->baseRTT; - do_div(target_cwnd, rtt); + target_cwnd = tp->snd_cwnd * vegas->baseRTT / rtt; /* Calculate the difference between the window we had, * and the window we would like to have. This quantity diff --git a/net/ipv4/tcp_veno.c b/net/ipv4/tcp_veno.c index b4d1858be55..ac43cd747bc 100644 --- a/net/ipv4/tcp_veno.c +++ b/net/ipv4/tcp_veno.c @@ -144,7 +144,7 @@ static void tcp_veno_cong_avoid(struct sock *sk, u32 ack, u32 in_flight) rtt = veno->minrtt; - target_cwnd = (u64)tp->snd_cwnd * veno->basertt; + target_cwnd = (tp->snd_cwnd * veno->basertt); target_cwnd <<= V_PARAM_SHIFT; do_div(target_cwnd, rtt); diff --git a/net/ipv4/xfrm4_mode_tunnel.c b/net/ipv4/xfrm4_mode_tunnel.c index e3f64831bc3..b5663c37f08 100644 --- a/net/ipv4/xfrm4_mode_tunnel.c +++ b/net/ipv4/xfrm4_mode_tunnel.c @@ -117,12 +117,12 @@ static int xfrm4_mode_tunnel_output(struct xfrm_state *x, struct sk_buff *skb) top_iph->frag_off = (flags & XFRM_STATE_NOPMTUDISC) ? 0 : (XFRM_MODE_SKB_CB(skb)->frag_off & htons(IP_DF)); + ip_select_ident(skb, dst->child, NULL); top_iph->ttl = ip4_dst_hoplimit(dst->child); top_iph->saddr = x->props.saddr.a4; top_iph->daddr = x->id.daddr.a4; - ip_select_ident(skb, NULL); return 0; } diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index c8a2371d041..d4cad1d5e9f 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -1113,11 +1113,8 @@ retry: * Lifetime is greater than REGEN_ADVANCE time units. In particular, * an implementation must not create a temporary address with a zero * Preferred Lifetime. - * Use age calculation as in addrconf_verify to avoid unnecessary - * temporary addresses being generated. */ - age = (now - tmp_tstamp + ADDRCONF_TIMER_FUZZ_MINUS) / HZ; - if (tmp_prefered_lft <= regen_advance + age) { + if (tmp_prefered_lft <= regen_advance) { in6_ifa_put(ifp); in6_dev_put(idev); ret = -1; @@ -2719,18 +2716,8 @@ static void init_loopback(struct net_device *dev) if (sp_ifa->flags & (IFA_F_DADFAILED | IFA_F_TENTATIVE)) continue; - if (sp_ifa->rt) { - /* This dst has been added to garbage list when - * lo device down, release this obsolete dst and - * reallocate a new router for ifa. 
- */ - if (sp_ifa->rt->dst.obsolete > 0) { - ip6_rt_put(sp_ifa->rt); - sp_ifa->rt = NULL; - } else { - continue; - } - } + if (sp_ifa->rt) + continue; sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, 0); diff --git a/net/ipv6/exthdrs_core.c b/net/ipv6/exthdrs_core.c index 11de7379fb9..23eed2365fe 100644 --- a/net/ipv6/exthdrs_core.c +++ b/net/ipv6/exthdrs_core.c @@ -212,7 +212,7 @@ int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset, found = (nexthdr == target); if ((!ipv6_ext_hdr(nexthdr)) || nexthdr == NEXTHDR_NONE) { - if (target < 0 || found) + if (target < 0) break; return -ENOENT; } diff --git a/net/ipv6/exthdrs_offload.c b/net/ipv6/exthdrs_offload.c index 447a7fbd1bb..cf77f3abfd0 100644 --- a/net/ipv6/exthdrs_offload.c +++ b/net/ipv6/exthdrs_offload.c @@ -25,11 +25,11 @@ int __init ipv6_exthdrs_offload_init(void) int ret; ret = inet6_add_offload(&rthdr_offload, IPPROTO_ROUTING); - if (ret) + if (!ret) goto out; ret = inet6_add_offload(&dstopt_offload, IPPROTO_DSTOPTS); - if (ret) + if (!ret) goto out_rt; out: diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c index 84bdcd06dd3..83a421c9746 100644 --- a/net/ipv6/icmp.c +++ b/net/ipv6/icmp.c @@ -519,7 +519,7 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info) np->tclass, NULL, &fl6, (struct rt6_info *)dst, MSG_DONTWAIT, np->dontfrag); if (err) { - ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTERRORS); + ICMP6_INC_STATS_BH(net, idev, ICMP6_MIB_OUTERRORS); ip6_flush_pending_frames(sk); } else { err = icmpv6_push_pending_frames(sk, &fl6, &tmp_hdr, diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c index 009c9620f44..9c06ecb6556 100644 --- a/net/ipv6/ip6_fib.c +++ b/net/ipv6/ip6_fib.c @@ -1418,7 +1418,7 @@ static int fib6_walk_continue(struct fib6_walker_t *w) if (w->skip) { w->skip--; - goto skip; + continue; } err = w->func(w); @@ -1428,7 +1428,6 @@ static int fib6_walk_continue(struct fib6_walker_t *w) w->count++; continue; } -skip: w->state = FWS_U; case FWS_U: if (fn == w->root) diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c index 250a73e77f5..1f9a1a5b61f 100644 --- a/net/ipv6/ip6_gre.c +++ b/net/ipv6/ip6_gre.c @@ -787,7 +787,7 @@ static inline int ip6gre_xmit_ipv4(struct sk_buff *skb, struct net_device *dev) encap_limit = t->parms.encap_limit; memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6)); - fl6.flowi6_proto = IPPROTO_GRE; + fl6.flowi6_proto = IPPROTO_IPIP; dsfield = ipv4_get_dsfield(iph); @@ -837,7 +837,7 @@ static inline int ip6gre_xmit_ipv6(struct sk_buff *skb, struct net_device *dev) encap_limit = t->parms.encap_limit; memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6)); - fl6.flowi6_proto = IPPROTO_GRE; + fl6.flowi6_proto = IPPROTO_IPV6; dsfield = ipv6_get_dsfield(ipv6h); if (t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS) @@ -1549,15 +1549,6 @@ static int ip6gre_changelink(struct net_device *dev, struct nlattr *tb[], return 0; } -static void ip6gre_dellink(struct net_device *dev, struct list_head *head) -{ - struct net *net = dev_net(dev); - struct ip6gre_net *ign = net_generic(net, ip6gre_net_id); - - if (dev != ign->fb_tunnel_dev) - unregister_netdevice_queue(dev, head); -} - static size_t ip6gre_get_size(const struct net_device *dev) { return @@ -1635,7 +1626,6 @@ static struct rtnl_link_ops ip6gre_link_ops __read_mostly = { .validate = ip6gre_tunnel_validate, .newlink = ip6gre_newlink, .changelink = ip6gre_changelink, - .dellink = ip6gre_dellink, .get_size = ip6gre_get_size, .fill_info = ip6gre_fill_info, }; diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index 071edcba415..98a262b759a 100644 --- 
a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -347,16 +347,12 @@ static inline int ip6_forward_finish(struct sk_buff *skb) static bool ip6_pkt_too_big(const struct sk_buff *skb, unsigned int mtu) { - if (skb->len <= mtu) + if (skb->len <= mtu || skb->local_df) return false; - /* ipv6 conntrack defrag sets max_frag_size + local_df */ if (IP6CB(skb)->frag_max_size && IP6CB(skb)->frag_max_size > mtu) return true; - if (skb->local_df) - return false; - if (skb_is_gso(skb) && skb_gso_network_seglen(skb) <= mtu) return false; @@ -540,23 +536,6 @@ static void ip6_copy_metadata(struct sk_buff *to, struct sk_buff *from) skb_copy_secmark(to, from); } -static void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt) -{ - static u32 ip6_idents_hashrnd __read_mostly; - static bool hashrnd_initialized = false; - u32 hash, id; - - if (unlikely(!hashrnd_initialized)) { - hashrnd_initialized = true; - get_random_bytes(&ip6_idents_hashrnd, sizeof(ip6_idents_hashrnd)); - } - hash = __ipv6_addr_jhash(&rt->rt6i_dst.addr, ip6_idents_hashrnd); - hash = __ipv6_addr_jhash(&rt->rt6i_src.addr, hash); - - id = ip_idents_reserve(hash, 1); - fhdr->identification = htonl(id); -} - int ip6_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *)) { struct sk_buff *frag; @@ -1129,19 +1108,21 @@ static void ip6_append_data_mtu(unsigned int *mtu, unsigned int fragheaderlen, struct sk_buff *skb, struct rt6_info *rt, - unsigned int orig_mtu) + bool pmtuprobe) { if (!(rt->dst.flags & DST_XFRM_TUNNEL)) { if (skb == NULL) { /* first fragment, reserve header_len */ - *mtu = orig_mtu - rt->dst.header_len; + *mtu = *mtu - rt->dst.header_len; } else { /* * this fragment is not first, the headers * space is regarded as data space. */ - *mtu = orig_mtu; + *mtu = min(*mtu, pmtuprobe ? 
+ rt->dst.dev->mtu : + dst_mtu(rt->dst.path)); } *maxfraglen = ((*mtu - fragheaderlen) & ~7) + fragheaderlen - sizeof(struct frag_hdr); @@ -1158,7 +1139,7 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to, struct ipv6_pinfo *np = inet6_sk(sk); struct inet_cork *cork; struct sk_buff *skb, *skb_prev = NULL; - unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu; + unsigned int maxfraglen, fragheaderlen, mtu; int exthdrlen; int dst_exthdrlen; int hh_len; @@ -1240,7 +1221,6 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to, dst_exthdrlen = 0; mtu = cork->fragsize; } - orig_mtu = mtu; hh_len = LL_RESERVED_SPACE(rt->dst.dev); @@ -1320,7 +1300,8 @@ alloc_new_skb: if (skb == NULL || skb_prev == NULL) ip6_append_data_mtu(&mtu, &maxfraglen, fragheaderlen, skb, rt, - orig_mtu); + np->pmtudisc == + IPV6_PMTUDISC_PROBE); skb_prev = skb; @@ -1575,8 +1556,8 @@ int ip6_push_pending_frames(struct sock *sk) if (proto == IPPROTO_ICMPV6) { struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb)); - ICMP6MSGOUT_INC_STATS(net, idev, icmp6_hdr(skb)->icmp6_type); - ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS); + ICMP6MSGOUT_INC_STATS_BH(net, idev, icmp6_hdr(skb)->icmp6_type); + ICMP6_INC_STATS_BH(net, idev, ICMP6_MIB_OUTMSGS); } err = ip6_local_out(skb); diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c index a0ecdf596f2..f21cf476b00 100644 --- a/net/ipv6/ip6_tunnel.c +++ b/net/ipv6/ip6_tunnel.c @@ -61,7 +61,6 @@ MODULE_AUTHOR("Ville Nuorvala"); MODULE_DESCRIPTION("IPv6 tunneling device"); MODULE_LICENSE("GPL"); -MODULE_ALIAS_RTNL_LINK("ip6tnl"); MODULE_ALIAS_NETDEV("ip6tnl0"); #ifdef IP6_TNL_DEBUG @@ -1532,7 +1531,7 @@ static int ip6_tnl_validate(struct nlattr *tb[], struct nlattr *data[]) { u8 proto; - if (!data || !data[IFLA_IPTUN_PROTO]) + if (!data) return 0; proto = nla_get_u8(data[IFLA_IPTUN_PROTO]); diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c index 2c84072b1da..9f44ebc1775 100644 --- a/net/ipv6/ip6mr.c +++ b/net/ipv6/ip6mr.c @@ -2351,14 +2351,13 @@ int ip6mr_get_route(struct net *net, } static int ip6mr_fill_mroute(struct mr6_table *mrt, struct sk_buff *skb, - u32 portid, u32 seq, struct mfc6_cache *c, int cmd, - int flags) + u32 portid, u32 seq, struct mfc6_cache *c, int cmd) { struct nlmsghdr *nlh; struct rtmsg *rtm; int err; - nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rtm), flags); + nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rtm), NLM_F_MULTI); if (nlh == NULL) return -EMSGSIZE; @@ -2426,7 +2425,7 @@ static void mr6_netlink_event(struct mr6_table *mrt, struct mfc6_cache *mfc, if (skb == NULL) goto errout; - err = ip6mr_fill_mroute(mrt, skb, 0, 0, mfc, cmd, 0); + err = ip6mr_fill_mroute(mrt, skb, 0, 0, mfc, cmd); if (err < 0) goto errout; @@ -2465,8 +2464,7 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb) if (ip6mr_fill_mroute(mrt, skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - mfc, RTM_NEWROUTE, - NLM_F_MULTI) < 0) + mfc, RTM_NEWROUTE) < 0) goto done; next_entry: e++; @@ -2480,8 +2478,7 @@ next_entry: if (ip6mr_fill_mroute(mrt, skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - mfc, RTM_NEWROUTE, - NLM_F_MULTI) < 0) { + mfc, RTM_NEWROUTE) < 0) { spin_unlock_bh(&mfc_unres_lock); goto done; } diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 734aec059ff..952eaed3880 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -1439,12 +1439,11 @@ static void mld_sendpack(struct sk_buff *skb) dst_output); out: if (!err) { - ICMP6MSGOUT_INC_STATS(net, idev, ICMPV6_MLD2_REPORT); - 
ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS); - IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUTMCAST, payload_len); - } else { - IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS); - } + ICMP6MSGOUT_INC_STATS_BH(net, idev, ICMPV6_MLD2_REPORT); + ICMP6_INC_STATS_BH(net, idev, ICMP6_MIB_OUTMSGS); + IP6_UPD_PO_STATS_BH(net, idev, IPSTATS_MIB_OUTMCAST, payload_len); + } else + IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_OUTDISCARDS); rcu_read_unlock(); return; diff --git a/net/ipv6/netfilter.c b/net/ipv6/netfilter.c index d38e6a8d8b9..95f3f1da0d7 100644 --- a/net/ipv6/netfilter.c +++ b/net/ipv6/netfilter.c @@ -30,15 +30,13 @@ int ip6_route_me_harder(struct sk_buff *skb) .daddr = iph->daddr, .saddr = iph->saddr, }; - int err; dst = ip6_route_output(net, skb->sk, &fl6); - err = dst->error; - if (err) { + if (dst->error) { IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES); LIMIT_NETDEBUG(KERN_DEBUG "ip6_route_me_harder: No more route.\n"); dst_release(dst); - return err; + return dst->error; } /* Drop old route. */ diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c index 89a4e4ddd8b..44400c216dc 100644 --- a/net/ipv6/netfilter/ip6_tables.c +++ b/net/ipv6/netfilter/ip6_tables.c @@ -1236,10 +1236,8 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks, xt_free_table_info(oldinfo); if (copy_to_user(counters_ptr, counters, - sizeof(struct xt_counters) * num_counters) != 0) { - /* Silent error, can't fail, new table is already in place */ - net_warn_ratelimited("ip6tables: counters copy to user failed while replacing table\n"); - } + sizeof(struct xt_counters) * num_counters) != 0) + ret = -EFAULT; vfree(counters); xt_table_unlock(t); return ret; diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c index a5d465105b6..c2e73e647e4 100644 --- a/net/ipv6/output_core.c +++ b/net/ipv6/output_core.c @@ -6,6 +6,34 @@ #include <net/ipv6.h> #include <net/ip6_fib.h> +void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt) +{ + static atomic_t ipv6_fragmentation_id; + int old, new; + +#if IS_ENABLED(CONFIG_IPV6) + if (rt && !(rt->dst.flags & DST_NOPEER)) { + struct inet_peer *peer; + struct net *net; + + net = dev_net(rt->dst.dev); + peer = inet_getpeer_v6(net->ipv6.peers, &rt->rt6i_dst.addr, 1); + if (peer) { + fhdr->identification = htonl(inet_getid(peer, 0)); + inet_putpeer(peer); + return; + } + } +#endif + do { + old = atomic_read(&ipv6_fragmentation_id); + new = old + 1; + if (!new) + new = 1; + } while (atomic_cmpxchg(&ipv6_fragmentation_id, old, new) != old); + fhdr->identification = htonl(new); +} +EXPORT_SYMBOL(ipv6_select_ident); int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr) { diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 920bc87f0b4..3e3fc446d4d 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -1232,7 +1232,7 @@ static unsigned int ip6_mtu(const struct dst_entry *dst) unsigned int mtu = dst_metric_raw(dst, RTAX_MTU); if (mtu) - goto out; + return mtu; mtu = IPV6_MIN_MTU; @@ -1242,8 +1242,7 @@ static unsigned int ip6_mtu(const struct dst_entry *dst) mtu = idev->cnf.mtu6; rcu_read_unlock(); -out: - return min_t(unsigned int, mtu, IP6_MAX_MTU); + return mtu; } static struct dst_entry *icmp6_dst_gc_list; @@ -1425,7 +1424,7 @@ int ip6_route_add(struct fib6_config *cfg) if (!table) goto out; - rt = ip6_dst_alloc(net, NULL, (cfg->fc_flags & RTF_ADDRCONF) ? 
0 : DST_NOCOUNT, table); + rt = ip6_dst_alloc(net, NULL, DST_NOCOUNT, table); if (!rt) { err = -ENOMEM; diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c index 4ddf67c6355..620d326e8fd 100644 --- a/net/ipv6/sit.c +++ b/net/ipv6/sit.c @@ -101,19 +101,19 @@ static struct ip_tunnel *ipip6_tunnel_lookup(struct net *net, for_each_ip_tunnel_rcu(t, sitn->tunnels_r_l[h0 ^ h1]) { if (local == t->parms.iph.saddr && remote == t->parms.iph.daddr && - (!dev || !t->parms.link || dev->ifindex == t->parms.link) && + (!dev || !t->parms.link || dev->iflink == t->parms.link) && (t->dev->flags & IFF_UP)) return t; } for_each_ip_tunnel_rcu(t, sitn->tunnels_r[h0]) { if (remote == t->parms.iph.daddr && - (!dev || !t->parms.link || dev->ifindex == t->parms.link) && + (!dev || !t->parms.link || dev->iflink == t->parms.link) && (t->dev->flags & IFF_UP)) return t; } for_each_ip_tunnel_rcu(t, sitn->tunnels_l[h1]) { if (local == t->parms.iph.saddr && - (!dev || !t->parms.link || dev->ifindex == t->parms.link) && + (!dev || !t->parms.link || dev->iflink == t->parms.link) && (t->dev->flags & IFF_UP)) return t; } @@ -530,12 +530,12 @@ static int ipip6_err(struct sk_buff *skb, u32 info) if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) { ipv4_update_pmtu(skb, dev_net(skb->dev), info, - t->parms.link, 0, IPPROTO_IPV6, 0); + t->dev->ifindex, 0, IPPROTO_IPV6, 0); err = 0; goto out; } if (type == ICMP_REDIRECT) { - ipv4_redirect(skb, dev_net(skb->dev), t->parms.link, 0, + ipv4_redirect(skb, dev_net(skb->dev), t->dev->ifindex, 0, IPPROTO_IPV6, 0); err = 0; goto out; @@ -919,7 +919,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb, iph->ttl = iph6->hop_limit; skb->ip_summed = CHECKSUM_NONE; - ip_select_ident(skb, NULL); + ip_select_ident(skb, skb_dst(skb), NULL); iptunnel_xmit(skb, dev); return NETDEV_TX_OK; @@ -1654,5 +1654,4 @@ xfrm_tunnel_failed: module_init(sit_init); module_exit(sit_cleanup); MODULE_LICENSE("GPL"); -MODULE_ALIAS_RTNL_LINK("sit"); MODULE_ALIAS_NETDEV("sit0"); diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 759dc48aec9..7f26437ae1f 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -1655,7 +1655,6 @@ static const struct inet_connection_sock_af_ops ipv6_specific = { .compat_setsockopt = compat_ipv6_setsockopt, .compat_getsockopt = compat_ipv6_getsockopt, #endif - .mtu_reduced = tcp_v6_mtu_reduced, }; #ifdef CONFIG_TCP_MD5SIG @@ -1687,7 +1686,6 @@ static const struct inet_connection_sock_af_ops ipv6_mapped = { .compat_setsockopt = compat_ipv6_setsockopt, .compat_getsockopt = compat_ipv6_getsockopt, #endif - .mtu_reduced = tcp_v4_mtu_reduced, }; #ifdef CONFIG_TCP_MD5SIG @@ -1930,6 +1928,7 @@ struct proto tcpv6_prot = { .sendpage = tcp_sendpage, .backlog_rcv = tcp_v6_do_rcv, .release_cb = tcp_release_cb, + .mtu_reduced = tcp_v6_mtu_reduced, .hash = tcp_v6_hash, .unhash = inet_unhash, .get_port = inet_csk_get_port, diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c index 2f65b022627..3696aa28784 100644 --- a/net/ipv6/udp_offload.c +++ b/net/ipv6/udp_offload.c @@ -108,7 +108,7 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb, fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen); fptr->nexthdr = nexthdr; fptr->reserved = 0; - fptr->identification = skb_shinfo(skb)->ip6_frag_id; + ipv6_select_ident(fptr, (struct rt6_info *)skb_dst(skb)); /* Fragment the skb. 
ipv6 header and the remaining fields of the * fragment header are updated in ipv6_gso_segment() diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c index 215e9b008db..276aa86f366 100644 --- a/net/iucv/af_iucv.c +++ b/net/iucv/af_iucv.c @@ -1829,7 +1829,7 @@ static void iucv_callback_txdone(struct iucv_path *path, spin_lock_irqsave(&list->lock, flags); while (list_skb != (struct sk_buff *)list) { - if (msg->tag == IUCV_SKB_CB(list_skb)->tag) { + if (msg->tag != IUCV_SKB_CB(list_skb)->tag) { this = list_skb; break; } diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c index c3ae2411650..44441c0c503 100644 --- a/net/l2tp/l2tp_ppp.c +++ b/net/l2tp/l2tp_ppp.c @@ -754,10 +754,9 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr, session->deref = pppol2tp_session_sock_put; /* If PMTU discovery was enabled, use the MTU that was discovered */ - dst = sk_dst_get(tunnel->sock); + dst = sk_dst_get(sk); if (dst != NULL) { - u32 pmtu = dst_mtu(dst); - + u32 pmtu = dst_mtu(__sk_dst_get(sk)); if (pmtu != 0) session->mtu = session->mru = pmtu - PPPOL2TP_HEADER_OVERHEAD; @@ -1366,7 +1365,7 @@ static int pppol2tp_setsockopt(struct socket *sock, int level, int optname, int err; if (level != SOL_PPPOL2TP) - return -EINVAL; + return udp_prot.setsockopt(sk, level, optname, optval, optlen); if (optlen < sizeof(int)) return -EINVAL; @@ -1492,7 +1491,7 @@ static int pppol2tp_getsockopt(struct socket *sock, int level, int optname, struct pppol2tp_session *ps; if (level != SOL_PPPOL2TP) - return -EINVAL; + return udp_prot.getsockopt(sk, level, optname, optval, optlen); if (get_user(len, optlen)) return -EFAULT; diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c index 2d5b4f65c51..14abcf44f97 100644 --- a/net/mac80211/debugfs_netdev.c +++ b/net/mac80211/debugfs_netdev.c @@ -34,7 +34,8 @@ static ssize_t ieee80211_if_read( ssize_t ret = -EINVAL; read_lock(&dev_base_lock); - ret = (*format)(sdata, buf, sizeof(buf)); + if (sdata->dev->reg_state == NETREG_REGISTERED) + ret = (*format)(sdata, buf, sizeof(buf)); read_unlock(&dev_base_lock); if (ret >= 0) @@ -61,7 +62,8 @@ static ssize_t ieee80211_if_write( ret = -ENODEV; rtnl_lock(); - ret = (*write)(sdata, buf, count); + if (sdata->dev->reg_state == NETREG_REGISTERED) + ret = (*write)(sdata, buf, count); rtnl_unlock(); return ret; diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index bd56be0580a..8b7a1538310 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -311,7 +311,6 @@ struct ieee80211_roc_work { bool started, abort, hw_begun, notified; bool to_be_freed; - bool on_channel; unsigned long hw_start_time; @@ -1274,7 +1273,6 @@ void ieee80211_sta_reset_conn_monitor(struct ieee80211_sub_if_data *sdata); void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata); void ieee80211_mgd_conn_tx_status(struct ieee80211_sub_if_data *sdata, __le16 fc, bool acked); -void ieee80211_mgd_quiesce(struct ieee80211_sub_if_data *sdata); void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata); /* IBSS code */ diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c index 2c64ab27b51..514e90f470b 100644 --- a/net/mac80211/iface.c +++ b/net/mac80211/iface.c @@ -1746,6 +1746,7 @@ void ieee80211_remove_interfaces(struct ieee80211_local *local) } mutex_unlock(&local->iflist_mtx); unregister_netdevice_many(&unreg_list); + list_del(&unreg_list); list_for_each_entry_safe(sdata, tmp, &wdev_list, list) { list_del(&sdata->list); diff --git a/net/mac80211/main.c b/net/mac80211/main.c index 
6658c580935..8a7bfc47d57 100644 --- a/net/mac80211/main.c +++ b/net/mac80211/main.c @@ -157,8 +157,6 @@ static u32 ieee80211_hw_conf_chan(struct ieee80211_local *local) list_for_each_entry_rcu(sdata, &local->interfaces, list) { if (!rcu_access_pointer(sdata->vif.chanctx_conf)) continue; - if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) - continue; power = min(power, sdata->vif.bss_conf.txpower); } rcu_read_unlock(); diff --git a/net/mac80211/mesh_ps.c b/net/mac80211/mesh_ps.c index ddda201832b..3b7bfc01ee3 100644 --- a/net/mac80211/mesh_ps.c +++ b/net/mac80211/mesh_ps.c @@ -36,7 +36,6 @@ static struct sk_buff *mps_qos_null_get(struct sta_info *sta) sdata->vif.addr); nullfunc->frame_control = fc; nullfunc->duration_id = 0; - nullfunc->seq_ctrl = 0; /* no address resolution for this frame -> set addr 1 immediately */ memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN); memset(skb_put(skb, 2), 0, 2); /* append QoS control field */ diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index e606e4a113e..5b4328dcbe4 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -310,7 +310,6 @@ ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata, switch (vht_oper->chan_width) { case IEEE80211_VHT_CHANWIDTH_USE_HT: vht_chandef.width = chandef->width; - vht_chandef.center_freq1 = chandef->center_freq1; break; case IEEE80211_VHT_CHANWIDTH_80MHZ: vht_chandef.width = NL80211_CHAN_WIDTH_80; @@ -360,28 +359,6 @@ ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata, ret = 0; out: - /* - * When tracking the current AP, don't do any further checks if the - * new chandef is identical to the one we're currently using for the - * connection. This keeps us from playing ping-pong with regulatory, - * without it the following can happen (for example): - * - connect to an AP with 80 MHz, world regdom allows 80 MHz - * - AP advertises regdom US - * - CRDA loads regdom US with 80 MHz prohibited (old database) - * - the code below detects an unsupported channel, downgrades, and - * we disconnect from the AP in the caller - * - disconnect causes CRDA to reload world regdomain and the game - * starts anew. - * (see https://bugzilla.kernel.org/show_bug.cgi?id=70881) - * - * It seems possible that there are still scenarios with CSA or real - * bandwidth changes where a this could happen, but those cases are - * less common and wouldn't completely prevent using the AP. - */ - if (tracking && - cfg80211_chandef_identical(chandef, &sdata->vif.bss_conf.chandef)) - return ret; - /* don't print the message below for VHT mismatch if VHT is disabled */ if (ret & IEEE80211_STA_DISABLE_VHT) vht_chandef = *chandef; @@ -3754,32 +3731,6 @@ static void ieee80211_restart_sta_timer(struct ieee80211_sub_if_data *sdata) } #ifdef CONFIG_PM -void ieee80211_mgd_quiesce(struct ieee80211_sub_if_data *sdata) -{ - struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; - u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN]; - - mutex_lock(&ifmgd->mtx); - - if (ifmgd->auth_data) { - /* - * If we are trying to authenticate while suspending, cfg80211 - * won't know and won't actually abort those attempts, thus we - * need to do that ourselves. 
- */ - ieee80211_send_deauth_disassoc(sdata, - ifmgd->auth_data->bss->bssid, - IEEE80211_STYPE_DEAUTH, - WLAN_REASON_DEAUTH_LEAVING, - false, frame_buf); - ieee80211_destroy_auth_data(sdata, false); - cfg80211_send_deauth(sdata->dev, frame_buf, - IEEE80211_DEAUTH_FRAME_LEN); - } - - mutex_unlock(&ifmgd->mtx); -} - void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata) { struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; @@ -4395,7 +4346,8 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, rcu_read_unlock(); if (bss->wmm_used && bss->uapsd_supported && - (sdata->local->hw.flags & IEEE80211_HW_SUPPORTS_UAPSD)) { + (sdata->local->hw.flags & IEEE80211_HW_SUPPORTS_UAPSD) && + sdata->wmm_acm != 0xff) { assoc_data->uapsd = true; ifmgd->flags |= IEEE80211_STA_UAPSD_ENABLED; } else { diff --git a/net/mac80211/offchannel.c b/net/mac80211/offchannel.c index 0427a58b439..acd1f71adc0 100644 --- a/net/mac80211/offchannel.c +++ b/net/mac80211/offchannel.c @@ -333,7 +333,7 @@ void ieee80211_sw_roc_work(struct work_struct *work) container_of(work, struct ieee80211_roc_work, work.work); struct ieee80211_sub_if_data *sdata = roc->sdata; struct ieee80211_local *local = sdata->local; - bool started, on_channel; + bool started; mutex_lock(&local->mtx); @@ -354,24 +354,13 @@ void ieee80211_sw_roc_work(struct work_struct *work) if (!roc->started) { struct ieee80211_roc_work *dep; - WARN_ON(local->use_chanctx); - - /* If actually operating on the desired channel (with at least - * 20 MHz channel width) don't stop all the operations but still - * treat it as though the ROC operation started properly, so - * other ROC operations won't interfere with this one. - */ - roc->on_channel = roc->chan == local->_oper_chandef.chan; - /* start this ROC */ - ieee80211_recalc_idle(local); - if (!roc->on_channel) { - ieee80211_offchannel_stop_vifs(local); + /* switch channel etc */ + ieee80211_recalc_idle(local); - local->tmp_channel = roc->chan; - ieee80211_hw_config(local, 0); - } + local->tmp_channel = roc->chan; + ieee80211_hw_config(local, 0); /* tell userspace or send frame */ ieee80211_handle_roc_started(roc); @@ -390,10 +379,9 @@ void ieee80211_sw_roc_work(struct work_struct *work) finish: list_del(&roc->list); started = roc->started; - on_channel = roc->on_channel; ieee80211_roc_notify_destroy(roc, !roc->abort); - if (started && !on_channel) { + if (started) { ieee80211_flush_queues(local, NULL); local->tmp_channel = NULL; diff --git a/net/mac80211/pm.c b/net/mac80211/pm.c index efb510e6f20..34012620434 100644 --- a/net/mac80211/pm.c +++ b/net/mac80211/pm.c @@ -101,18 +101,10 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan) /* remove all interfaces that were created in the driver */ list_for_each_entry(sdata, &local->interfaces, list) { - if (!ieee80211_sdata_running(sdata)) + if (!ieee80211_sdata_running(sdata) || + sdata->vif.type == NL80211_IFTYPE_AP_VLAN || + sdata->vif.type == NL80211_IFTYPE_MONITOR) continue; - switch (sdata->vif.type) { - case NL80211_IFTYPE_AP_VLAN: - case NL80211_IFTYPE_MONITOR: - continue; - case NL80211_IFTYPE_STATION: - ieee80211_mgd_quiesce(sdata); - break; - default: - break; - } drv_remove_interface(local, sdata); } diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c index d68d6cfac3b..a02bef35b13 100644 --- a/net/mac80211/rate.c +++ b/net/mac80211/rate.c @@ -448,7 +448,7 @@ static void rate_fixup_ratelist(struct ieee80211_vif *vif, */ if (!(rates[0].flags & IEEE80211_TX_RC_MCS)) { u32 basic_rates = vif->bss_conf.basic_rates; - 
s8 baserate = basic_rates ? ffs(basic_rates) - 1 : 0; + s8 baserate = basic_rates ? ffs(basic_rates - 1) : 0; rate = &sband->bitrates[rates[0].idx]; diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c index 557a5760f9f..11216bc13b2 100644 --- a/net/mac80211/sta_info.c +++ b/net/mac80211/sta_info.c @@ -270,7 +270,6 @@ void sta_info_free(struct ieee80211_local *local, struct sta_info *sta) sta_dbg(sta->sdata, "Destroyed STA %pM\n", sta->sta.addr); - kfree(rcu_dereference_raw(sta->sta.rates)); kfree(sta); } @@ -340,7 +339,6 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata, return NULL; spin_lock_init(&sta->lock); - spin_lock_init(&sta->ps_lock); INIT_WORK(&sta->drv_unblock_wk, sta_unblock); INIT_WORK(&sta->ampdu_mlme.work, ieee80211_ba_session_work); mutex_init(&sta->ampdu_mlme.mtx); @@ -1047,8 +1045,6 @@ void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta) skb_queue_head_init(&pending); - /* sync with ieee80211_tx_h_unicast_ps_buf */ - spin_lock(&sta->ps_lock); /* Send all buffered frames to the station */ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { int count = skb_queue_len(&pending), tmp; @@ -1068,7 +1064,6 @@ void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta) } ieee80211_add_pending_skbs_fn(local, &pending, clear_sta_ps_flags, sta); - spin_unlock(&sta->ps_lock); local->total_ps_buffered -= buffered; @@ -1115,7 +1110,6 @@ static void ieee80211_send_null_response(struct ieee80211_sub_if_data *sdata, memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN); memcpy(nullfunc->addr2, sdata->vif.addr, ETH_ALEN); memcpy(nullfunc->addr3, sdata->vif.addr, ETH_ALEN); - nullfunc->seq_ctrl = 0; skb->priority = tid; skb_set_queue_mapping(skb, ieee802_1d_to_ac[tid]); diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h index 3184b2b2853..adc30045f99 100644 --- a/net/mac80211/sta_info.h +++ b/net/mac80211/sta_info.h @@ -244,7 +244,6 @@ struct sta_ampdu_mlme { * @drv_unblock_wk: used for driver PS unblocking * @listen_interval: listen interval of this station, when we're acting as AP * @_flags: STA flags, see &enum ieee80211_sta_info_flags, do not use directly - * @ps_lock: used for powersave (when mac80211 is the AP) related locking * @ps_tx_buf: buffers (per AC) of frames to transmit to this station * when it leaves power saving state or polls * @tx_filtered: buffers (per AC) of frames we already tried to @@ -325,8 +324,10 @@ struct sta_info { /* use the accessors defined below */ unsigned long _flags; - /* STA powersave lock and frame queues */ - spinlock_t ps_lock; + /* + * STA powersave frame queues, no more than the internal + * locking required. 
+ */ struct sk_buff_head ps_tx_buf[IEEE80211_NUM_ACS]; struct sk_buff_head tx_filtered[IEEE80211_NUM_ACS]; unsigned long driver_buffered_tids; diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c index 10eea232602..fe9d6e7b904 100644 --- a/net/mac80211/tx.c +++ b/net/mac80211/tx.c @@ -398,9 +398,6 @@ ieee80211_tx_h_multicast_ps_buf(struct ieee80211_tx_data *tx) if (ieee80211_has_order(hdr->frame_control)) return TX_CONTINUE; - if (ieee80211_is_probe_req(hdr->frame_control)) - return TX_CONTINUE; - /* no stations in PS mode */ if (!atomic_read(&ps->num_sta_ps)) return TX_CONTINUE; @@ -450,7 +447,6 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx) { struct sta_info *sta = tx->sta; struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb); - struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)tx->skb->data; struct ieee80211_local *local = tx->local; if (unlikely(!sta)) @@ -461,33 +457,10 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx) !(info->flags & IEEE80211_TX_CTL_NO_PS_BUFFER))) { int ac = skb_get_queue_mapping(tx->skb); - /* only deauth, disassoc and action are bufferable MMPDUs */ - if (ieee80211_is_mgmt(hdr->frame_control) && - !ieee80211_is_deauth(hdr->frame_control) && - !ieee80211_is_disassoc(hdr->frame_control) && - !ieee80211_is_action(hdr->frame_control)) { - info->flags |= IEEE80211_TX_CTL_NO_PS_BUFFER; - return TX_CONTINUE; - } - ps_dbg(sta->sdata, "STA %pM aid %d: PS buffer for AC %d\n", sta->sta.addr, sta->sta.aid, ac); if (tx->local->total_ps_buffered >= TOTAL_MAX_TX_BUFFER) purge_old_ps_buffers(tx->local); - - /* sync with ieee80211_sta_ps_deliver_wakeup */ - spin_lock(&sta->ps_lock); - /* - * STA woke up the meantime and all the frames on ps_tx_buf have - * been queued to pending queue. No reordering can happen, go - * ahead and Tx the packet. 
- */ - if (!test_sta_flag(sta, WLAN_STA_PS_STA) && - !test_sta_flag(sta, WLAN_STA_PS_DRIVER)) { - spin_unlock(&sta->ps_lock); - return TX_CONTINUE; - } - if (skb_queue_len(&sta->ps_tx_buf[ac]) >= STA_MAX_TX_BUFFER) { struct sk_buff *old = skb_dequeue(&sta->ps_tx_buf[ac]); ps_dbg(tx->sdata, @@ -501,7 +474,6 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx) info->control.vif = &tx->sdata->vif; info->flags |= IEEE80211_TX_INTFL_NEED_TXPROCESSING; skb_queue_tail(&sta->ps_tx_buf[ac], tx->skb); - spin_unlock(&sta->ps_lock); if (!timer_pending(&local->sta_cleanup)) mod_timer(&local->sta_cleanup, @@ -527,8 +499,22 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx) static ieee80211_tx_result debug_noinline ieee80211_tx_h_ps_buf(struct ieee80211_tx_data *tx) { + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb); + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)tx->skb->data; + if (unlikely(tx->flags & IEEE80211_TX_PS_BUFFERED)) return TX_CONTINUE; + + /* only deauth, disassoc and action are bufferable MMPDUs */ + if (ieee80211_is_mgmt(hdr->frame_control) && + !ieee80211_is_deauth(hdr->frame_control) && + !ieee80211_is_disassoc(hdr->frame_control) && + !ieee80211_is_action(hdr->frame_control)) { + if (tx->flags & IEEE80211_TX_UNICAST) + info->flags |= IEEE80211_TX_CTL_NO_PS_BUFFER; + return TX_CONTINUE; + } + if (tx->flags & IEEE80211_TX_UNICAST) return ieee80211_tx_h_unicast_ps_buf(tx); else @@ -2710,7 +2696,7 @@ ieee80211_get_buffered_bc(struct ieee80211_hw *hw, cpu_to_le16(IEEE80211_FCTL_MOREDATA); } - if (sdata->vif.type == NL80211_IFTYPE_AP) + if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) sdata = IEEE80211_DEV_TO_SUB_IF(skb->dev); if (!ieee80211_tx_prepare(sdata, &tx, skb)) break; diff --git a/net/mac80211/wme.c b/net/mac80211/wme.c index a282fddf8b0..afba19cb6f8 100644 --- a/net/mac80211/wme.c +++ b/net/mac80211/wme.c @@ -153,11 +153,6 @@ u16 ieee80211_select_queue(struct ieee80211_sub_if_data *sdata, return IEEE80211_AC_BE; } - if (skb->protocol == sdata->control_port_protocol) { - skb->priority = 7; - return ieee80211_downgrade_queue(sdata, skb); - } - /* use the data classifier to determine what 802.1d tag the * data frame has */ skb->priority = cfg80211_classify8021d(skb); diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c index 90e756cf6e5..a083bda322b 100644 --- a/net/netfilter/ipvs/ip_vs_conn.c +++ b/net/netfilter/ipvs/ip_vs_conn.c @@ -797,6 +797,7 @@ static void ip_vs_conn_expire(unsigned long data) ip_vs_control_del(cp); if (cp->flags & IP_VS_CONN_F_NFCT) { + ip_vs_conn_drop_conntrack(cp); /* Do not access conntracks during subsys cleanup * because nf_conntrack_find_get can not be used after * conntrack cleanup for the net. 
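For orientation: the ip_vs_conn_expire() hunk just above reinstates an unconditional ip_vs_conn_drop_conntrack() call for connections carrying IP_VS_CONN_F_NFCT, so the conntrack reference is released on every expiry rather than being skipped during netns cleanup. A minimal sketch of that flag-guarded release pattern, using hypothetical names and simplified types rather than the real IPVS structures, might look like this:

#include <stdio.h>

/* Hypothetical, simplified stand-ins -- not the real IPVS structures. */
#define CONN_F_NFCT 0x0800          /* connection carries a conntrack reference */

struct conn {
        unsigned int flags;
        void *nfct;                 /* opaque reference into the tracker */
};

/* Assumed helper: release the tracker reference held by the connection. */
static void conn_drop_conntrack(struct conn *cp)
{
        cp->nfct = NULL;            /* the real helper would put an nf_conn ref */
}

/* Expiry path: flag-guarded cleanup before the entry is torn down. */
static void conn_expire(struct conn *cp)
{
        if (cp->flags & CONN_F_NFCT)
                conn_drop_conntrack(cp);
        /* timer re-arm or final free would follow in the real code */
}

int main(void)
{
        struct conn c = { .flags = CONN_F_NFCT, .nfct = &c };
        conn_expire(&c);
        printf("nfct after expire: %p\n", c.nfct);
        return 0;
}

The sketch only shows the control flow being restored; the actual kernel code must additionally avoid touching conntrack state once the per-netns conntrack tables are being torn down, which is what the surrounding comment warns about.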
diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c index 26b9a986a87..21a3a475d7c 100644 --- a/net/netfilter/ipvs/ip_vs_core.c +++ b/net/netfilter/ipvs/ip_vs_core.c @@ -1384,19 +1384,15 @@ ip_vs_in_icmp(struct sk_buff *skb, int *related, unsigned int hooknum) if (ipip) { __be32 info = ic->un.gateway; - __u8 type = ic->type; - __u8 code = ic->code; /* Update the MTU */ if (ic->type == ICMP_DEST_UNREACH && ic->code == ICMP_FRAG_NEEDED) { struct ip_vs_dest *dest = cp->dest; u32 mtu = ntohs(ic->un.frag.mtu); - __be16 frag_off = cih->frag_off; /* Strip outer IP and ICMP, go to IPIP header */ - if (pskb_pull(skb, ihl + sizeof(_icmph)) == NULL) - goto ignore_ipip; + __skb_pull(skb, ihl + sizeof(_icmph)); offset2 -= ihl + sizeof(_icmph); skb_reset_network_header(skb); IP_VS_DBG(12, "ICMP for IPIP %pI4->%pI4: mtu=%u\n", @@ -1404,7 +1400,7 @@ ip_vs_in_icmp(struct sk_buff *skb, int *related, unsigned int hooknum) ipv4_update_pmtu(skb, dev_net(skb->dev), mtu, 0, 0, 0, 0); /* Client uses PMTUD? */ - if (!(frag_off & htons(IP_DF))) + if (!(cih->frag_off & htons(IP_DF))) goto ignore_ipip; /* Prefer the resulting PMTU */ if (dest) { @@ -1423,13 +1419,12 @@ ip_vs_in_icmp(struct sk_buff *skb, int *related, unsigned int hooknum) /* Strip outer IP, ICMP and IPIP, go to IP header of * original request. */ - if (pskb_pull(skb, offset2) == NULL) - goto ignore_ipip; + __skb_pull(skb, offset2); skb_reset_network_header(skb); IP_VS_DBG(12, "Sending ICMP for %pI4->%pI4: t=%u, c=%u, i=%u\n", &ip_hdr(skb)->saddr, &ip_hdr(skb)->daddr, - type, code, ntohl(info)); - icmp_send(skb, type, code, info); + ic->type, ic->code, ntohl(info)); + icmp_send(skb, ic->type, ic->code, info); /* ICMP can be shorter but anyways, account it */ ip_vs_out_stats(cp, skb); @@ -1898,7 +1893,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = { { .hook = ip_vs_local_reply6, .owner = THIS_MODULE, - .pf = NFPROTO_IPV6, + .pf = NFPROTO_IPV4, .hooknum = NF_INET_LOCAL_OUT, .priority = NF_IP6_PRI_NAT_DST + 1, }, diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c index 1692e753475..c47444e4cf8 100644 --- a/net/netfilter/ipvs/ip_vs_xmit.c +++ b/net/netfilter/ipvs/ip_vs_xmit.c @@ -883,7 +883,7 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, struct ip_vs_conn *cp, iph->daddr = cp->daddr.ip; iph->saddr = saddr; iph->ttl = old_iph->ttl; - ip_select_ident(skb, NULL); + ip_select_ident(skb, &rt->dst, NULL); /* Another hack: avoid icmp_send in ip_fragment */ skb->local_df = 1; @@ -967,8 +967,8 @@ ip_vs_tunnel_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp, iph->nexthdr = IPPROTO_IPV6; iph->payload_len = old_iph->payload_len; be16_add_cpu(&iph->payload_len, sizeof(*old_iph)); + iph->priority = old_iph->priority; memset(&iph->flow_lbl, 0, sizeof(iph->flow_lbl)); - ipv6_change_dsfield(iph, 0, ipv6_get_dsfield(old_iph)); iph->daddr = cp->daddr.in6; iph->saddr = saddr; iph->hop_limit = old_iph->hop_limit; diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c index 59359bec328..a99b6c3427b 100644 --- a/net/netfilter/nf_conntrack_proto_dccp.c +++ b/net/netfilter/nf_conntrack_proto_dccp.c @@ -428,7 +428,7 @@ static bool dccp_new(struct nf_conn *ct, const struct sk_buff *skb, const char *msg; u_int8_t state; - dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh); + dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &dh); BUG_ON(dh == NULL); state = dccp_state_table[CT_DCCP_ROLE_CLIENT][dh->dccph_type][CT_DCCP_NONE]; @@ -486,7 +486,7 @@ static int 
dccp_packet(struct nf_conn *ct, const struct sk_buff *skb, u_int8_t type, old_state, new_state; enum ct_dccp_roles role; - dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh); + dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &dh); BUG_ON(dh == NULL); type = dh->dccph_type; @@ -577,7 +577,7 @@ static int dccp_error(struct net *net, struct nf_conn *tmpl, unsigned int cscov; const char *msg; - dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh); + dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &dh); if (dh == NULL) { msg = "nf_ct_dccp: short packet "; goto out_invalid; diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c index 7dcc376eea5..4d4d8f1d01f 100644 --- a/net/netfilter/nf_conntrack_proto_tcp.c +++ b/net/netfilter/nf_conntrack_proto_tcp.c @@ -1043,12 +1043,6 @@ static int tcp_packet(struct nf_conn *ct, nf_ct_kill_acct(ct, ctinfo, skb); return NF_ACCEPT; } - /* ESTABLISHED without SEEN_REPLY, i.e. mid-connection - * pickup with loose=1. Avoid large ESTABLISHED timeout. - */ - if (new_state == TCP_CONNTRACK_ESTABLISHED && - timeout > timeouts[TCP_CONNTRACK_UNACK]) - timeout = timeouts[TCP_CONNTRACK_UNACK]; } else if (!test_bit(IPS_ASSURED_BIT, &ct->status) && (old_state == TCP_CONNTRACK_SYN_RECV || old_state == TCP_CONNTRACK_ESTABLISHED) diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c index 0a03662bfbe..572d87dc116 100644 --- a/net/netfilter/nfnetlink.c +++ b/net/netfilter/nfnetlink.c @@ -147,7 +147,7 @@ static int nfnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) const struct nfnetlink_subsystem *ss; int type, err; - if (!netlink_net_capable(skb, CAP_NET_ADMIN)) + if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) return -EPERM; /* All the messages must at least contain nfgenmsg */ diff --git a/net/netfilter/nfnetlink_queue_core.c b/net/netfilter/nfnetlink_queue_core.c index 2b8199f6878..5352b2d2d5b 100644 --- a/net/netfilter/nfnetlink_queue_core.c +++ b/net/netfilter/nfnetlink_queue_core.c @@ -227,23 +227,22 @@ nfqnl_flush(struct nfqnl_instance *queue, nfqnl_cmpfn cmpfn, unsigned long data) spin_unlock_bh(&queue->lock); } -static int +static void nfqnl_zcopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) { int i, j = 0; int plen = 0; /* length of skb->head fragment */ - int ret; struct page *page; unsigned int offset; /* dont bother with small payloads */ - if (len <= skb_tailroom(to)) - return skb_copy_bits(from, 0, skb_put(to, len), len); + if (len <= skb_tailroom(to)) { + skb_copy_bits(from, 0, skb_put(to, len), len); + return; + } if (hlen) { - ret = skb_copy_bits(from, 0, skb_put(to, hlen), hlen); - if (unlikely(ret)) - return ret; + skb_copy_bits(from, 0, skb_put(to, hlen), hlen); len -= hlen; } else { plen = min_t(int, skb_headlen(from), len); @@ -261,11 +260,6 @@ nfqnl_zcopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) to->len += len + plen; to->data_len += len + plen; - if (unlikely(skb_orphan_frags(from, GFP_ATOMIC))) { - skb_tx_error(from); - return -ENOMEM; - } - for (i = 0; i < skb_shinfo(from)->nr_frags; i++) { if (!len) break; @@ -276,8 +270,6 @@ nfqnl_zcopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) j++; } skb_shinfo(to)->nr_frags = j; - - return 0; } static int nfqnl_put_packet_info(struct sk_buff *nlskb, struct sk_buff *packet) @@ -363,16 +355,13 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue, skb = nfnetlink_alloc_skb(&init_net, size, queue->peer_portid, GFP_ATOMIC); - if (!skb) { - skb_tx_error(entskb); + 
if (!skb) return NULL; - } nlh = nlmsg_put(skb, 0, 0, NFNL_SUBSYS_QUEUE << 8 | NFQNL_MSG_PACKET, sizeof(struct nfgenmsg), 0); if (!nlh) { - skb_tx_error(entskb); kfree_skb(skb); return NULL; } @@ -492,15 +481,13 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue, nla->nla_type = NFQA_PAYLOAD; nla->nla_len = nla_attr_size(data_len); - if (nfqnl_zcopy(skb, entskb, data_len, hlen)) - goto nla_put_failure; + nfqnl_zcopy(skb, entskb, data_len, hlen); } nlh->nlmsg_len = skb->len; return skb; nla_put_failure: - skb_tx_error(entskb); kfree_skb(skb); net_err_ratelimited("nf_queue: error creating packet message\n"); return NULL; diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index ce98e8a0fee..b55ab2d9a94 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -500,7 +500,7 @@ static unsigned int netlink_poll(struct file *file, struct socket *sock, while (nlk->cb != NULL && netlink_dump_space(nlk)) { err = netlink_dump(sk); if (err < 0) { - sk->sk_err = -err; + sk->sk_err = err; sk->sk_error_report(sk); break; } @@ -571,7 +571,7 @@ static int netlink_mmap_sendmsg(struct sock *sk, struct msghdr *msg, * after validation, the socket and the ring may only be used by a * single process, otherwise we fall back to copying. */ - if (atomic_long_read(&sk->sk_socket->file->f_count) > 1 || + if (atomic_long_read(&sk->sk_socket->file->f_count) > 2 || atomic_read(&nlk->mapped) > 1) excl = false; @@ -1219,74 +1219,7 @@ retry: return err; } -/** - * __netlink_ns_capable - General netlink message capability test - * @nsp: NETLINK_CB of the socket buffer holding a netlink command from userspace. - * @user_ns: The user namespace of the capability to use - * @cap: The capability to use - * - * Test to see if the opener of the socket we received the message - * from had when the netlink socket was created and the sender of the - * message has has the capability @cap in the user namespace @user_ns. - */ -bool __netlink_ns_capable(const struct netlink_skb_parms *nsp, - struct user_namespace *user_ns, int cap) -{ - return ((nsp->flags & NETLINK_SKB_DST) || - file_ns_capable(nsp->sk->sk_socket->file, user_ns, cap)) && - ns_capable(user_ns, cap); -} -EXPORT_SYMBOL(__netlink_ns_capable); - -/** - * netlink_ns_capable - General netlink message capability test - * @skb: socket buffer holding a netlink command from userspace - * @user_ns: The user namespace of the capability to use - * @cap: The capability to use - * - * Test to see if the opener of the socket we received the message - * from had when the netlink socket was created and the sender of the - * message has has the capability @cap in the user namespace @user_ns. - */ -bool netlink_ns_capable(const struct sk_buff *skb, - struct user_namespace *user_ns, int cap) -{ - return __netlink_ns_capable(&NETLINK_CB(skb), user_ns, cap); -} -EXPORT_SYMBOL(netlink_ns_capable); - -/** - * netlink_capable - Netlink global message capability test - * @skb: socket buffer holding a netlink command from userspace - * @cap: The capability to use - * - * Test to see if the opener of the socket we received the message - * from had when the netlink socket was created and the sender of the - * message has has the capability @cap in all user namespaces. 
- */ -bool netlink_capable(const struct sk_buff *skb, int cap) -{ - return netlink_ns_capable(skb, &init_user_ns, cap); -} -EXPORT_SYMBOL(netlink_capable); - -/** - * netlink_net_capable - Netlink network namespace message capability test - * @skb: socket buffer holding a netlink command from userspace - * @cap: The capability to use - * - * Test to see if the opener of the socket we received the message - * from had when the netlink socket was created and the sender of the - * message has has the capability @cap over the network namespace of - * the socket we received the message from. - */ -bool netlink_net_capable(const struct sk_buff *skb, int cap) -{ - return netlink_ns_capable(skb, sock_net(skb->sk)->user_ns, cap); -} -EXPORT_SYMBOL(netlink_net_capable); - -static inline int netlink_allowed(const struct socket *sock, unsigned int flag) +static inline int netlink_capable(const struct socket *sock, unsigned int flag) { return (nl_table[sock->sk->sk_protocol].flags & flag) || ns_capable(sock_net(sock->sk)->user_ns, CAP_NET_ADMIN); @@ -1354,7 +1287,7 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr, /* Only superuser is allowed to listen multicasts */ if (nladdr->nl_groups) { - if (!netlink_allowed(sock, NL_CFG_F_NONROOT_RECV)) + if (!netlink_capable(sock, NL_CFG_F_NONROOT_RECV)) return -EPERM; err = netlink_realloc_groups(sk); if (err) @@ -1416,7 +1349,7 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr, return -EINVAL; /* Only superuser is allowed to send multicasts */ - if (nladdr->nl_groups && !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND)) + if (nladdr->nl_groups && !netlink_capable(sock, NL_CFG_F_NONROOT_SEND)) return -EPERM; if (!nlk->portid) @@ -1988,7 +1921,7 @@ static int netlink_setsockopt(struct socket *sock, int level, int optname, break; case NETLINK_ADD_MEMBERSHIP: case NETLINK_DROP_MEMBERSHIP: { - if (!netlink_allowed(sock, NL_CFG_F_NONROOT_RECV)) + if (!netlink_capable(sock, NL_CFG_F_NONROOT_RECV)) return -EPERM; err = netlink_realloc_groups(sk); if (err) @@ -2120,7 +2053,6 @@ static int netlink_sendmsg(struct kiocb *kiocb, struct socket *sock, struct sk_buff *skb; int err; struct scm_cookie scm; - u32 netlink_skb_flags = 0; if (msg->msg_flags&MSG_OOB) return -EOPNOTSUPP; @@ -2140,9 +2072,8 @@ static int netlink_sendmsg(struct kiocb *kiocb, struct socket *sock, dst_group = ffs(addr->nl_groups); err = -EPERM; if ((dst_group || dst_portid) && - !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND)) + !netlink_capable(sock, NL_CFG_F_NONROOT_SEND)) goto out; - netlink_skb_flags |= NETLINK_SKB_DST; } else { dst_portid = nlk->dst_portid; dst_group = nlk->dst_group; @@ -2172,7 +2103,6 @@ static int netlink_sendmsg(struct kiocb *kiocb, struct socket *sock, NETLINK_CB(skb).portid = nlk->portid; NETLINK_CB(skb).dst_group = dst_group; NETLINK_CB(skb).creds = siocb->scm->creds; - NETLINK_CB(skb).flags = netlink_skb_flags; err = -EFAULT; if (memcpy_fromiovec(skb_put(skb, len), msg->msg_iov, len)) { @@ -2272,7 +2202,7 @@ static int netlink_recvmsg(struct kiocb *kiocb, struct socket *sock, if (nlk->cb && atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) { ret = netlink_dump(sk); if (ret) { - sk->sk_err = -ret; + sk->sk_err = ret; sk->sk_error_report(sk); } } diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c index ade434b8abd..393f17eea1a 100644 --- a/net/netlink/genetlink.c +++ b/net/netlink/genetlink.c @@ -592,7 +592,7 @@ static int genl_family_rcv_msg(struct genl_family *family, return -EOPNOTSUPP; if ((ops->flags & GENL_ADMIN_PERM) 
&& - !netlink_capable(skb, CAP_NET_ADMIN)) + !capable(CAP_NET_ADMIN)) return -EPERM; if (nlh->nlmsg_flags & NLM_F_DUMP) { diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c index c4779ca5903..894b6cbdd92 100644 --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -40,9 +40,6 @@ static int do_execute_actions(struct datapath *dp, struct sk_buff *skb, static int make_writable(struct sk_buff *skb, int write_len) { - if (!pskb_may_pull(skb, write_len)) - return -ENOMEM; - if (!skb_cloned(skb) || skb_clone_writable(skb, write_len)) return 0; @@ -71,8 +68,6 @@ static int __pop_vlan_tci(struct sk_buff *skb, __be16 *current_tci) vlan_set_encap_proto(skb, vhdr); skb->mac_header += VLAN_HLEN; - if (skb_network_offset(skb) < ETH_HLEN) - skb_set_network_header(skb, ETH_HLEN); skb_reset_mac_len(skb); return 0; diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index 81b4b816f13..e8b5a0dfca2 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -565,7 +565,6 @@ static void init_prb_bdqc(struct packet_sock *po, p1->tov_in_jiffies = msecs_to_jiffies(p1->retire_blk_tov); p1->blk_sizeof_priv = req_u->req3.tp_sizeof_priv; - p1->max_frame_len = p1->kblk_size - BLK_PLUS_PRIV(p1->blk_sizeof_priv); prb_init_ft_ops(p1, req_u); prb_setup_retire_blk_timer(po, tx_ring); prb_open_block(p1, pbd); @@ -1804,18 +1803,6 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, if ((int)snaplen < 0) snaplen = 0; } - } else if (unlikely(macoff + snaplen > - GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len)) { - u32 nval; - - nval = GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len - macoff; - pr_err_once("tpacket_rcv: packet too big, clamped from %u to %u. macoff=%u\n", - snaplen, nval, macoff); - snaplen = nval; - if (unlikely((int)snaplen < 0)) { - snaplen = 0; - macoff = GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len; - } } spin_lock(&sk->sk_receive_queue.lock); h.raw = packet_current_rx_frame(po, skb, @@ -3655,10 +3642,6 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u, goto out; if (unlikely(req->tp_block_size & (PAGE_SIZE - 1))) goto out; - if (po->tp_version >= TPACKET_V3 && - (int)(req->tp_block_size - - BLK_PLUS_PRIV(req_u->req3.tp_sizeof_priv)) <= 0) - goto out; if (unlikely(req->tp_frame_size < po->tp_hdrlen + po->tp_reserve)) goto out; diff --git a/net/packet/diag.c b/net/packet/diag.c index 674b0a65df6..a9584a2f6d6 100644 --- a/net/packet/diag.c +++ b/net/packet/diag.c @@ -127,7 +127,6 @@ static int pdiag_put_fanout(struct packet_sock *po, struct sk_buff *nlskb) static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, struct packet_diag_req *req, - bool may_report_filterinfo, struct user_namespace *user_ns, u32 portid, u32 seq, u32 flags, int sk_ino) { @@ -172,8 +171,7 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, goto out_nlmsg_trim; if ((req->pdiag_show & PACKET_SHOW_FILTER) && - sock_diag_put_filterinfo(may_report_filterinfo, sk, skb, - PACKET_DIAG_FILTER)) + sock_diag_put_filterinfo(user_ns, sk, skb, PACKET_DIAG_FILTER)) goto out_nlmsg_trim; return nlmsg_end(skb, nlh); @@ -189,11 +187,9 @@ static int packet_diag_dump(struct sk_buff *skb, struct netlink_callback *cb) struct packet_diag_req *req; struct net *net; struct sock *sk; - bool may_report_filterinfo; net = sock_net(skb->sk); req = nlmsg_data(cb->nlh); - may_report_filterinfo = netlink_net_capable(cb->skb, CAP_NET_ADMIN); mutex_lock(&net->packet.sklist_lock); sk_for_each(sk, &net->packet.sklist) { @@ -203,7 +199,6 @@ static int 
packet_diag_dump(struct sk_buff *skb, struct netlink_callback *cb) goto next; if (sk_diag_fill(sk, skb, req, - may_report_filterinfo, sk_user_ns(NETLINK_CB(cb->skb).sk), NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, NLM_F_MULTI, diff --git a/net/packet/internal.h b/net/packet/internal.h index ca086c0c2c0..1035fa2d909 100644 --- a/net/packet/internal.h +++ b/net/packet/internal.h @@ -29,7 +29,6 @@ struct tpacket_kbdq_core { char *pkblk_start; char *pkblk_end; int kblk_size; - unsigned int max_frame_len; unsigned int knum_blocks; uint64_t knxt_seq_num; char *prev; diff --git a/net/phonet/pn_netlink.c b/net/phonet/pn_netlink.c index b64151ade6b..dc15f430080 100644 --- a/net/phonet/pn_netlink.c +++ b/net/phonet/pn_netlink.c @@ -70,10 +70,10 @@ static int addr_doit(struct sk_buff *skb, struct nlmsghdr *nlh) int err; u8 pnaddr; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; - if (!netlink_capable(skb, CAP_SYS_ADMIN)) + if (!capable(CAP_SYS_ADMIN)) return -EPERM; ASSERT_RTNL(); @@ -233,10 +233,10 @@ static int route_doit(struct sk_buff *skb, struct nlmsghdr *nlh) int err; u8 dst; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; - if (!netlink_capable(skb, CAP_SYS_ADMIN)) + if (!capable(CAP_SYS_ADMIN)) return -EPERM; ASSERT_RTNL(); diff --git a/net/rds/iw.c b/net/rds/iw.c index 589935661d6..7826d46baa7 100644 --- a/net/rds/iw.c +++ b/net/rds/iw.c @@ -239,8 +239,7 @@ static int rds_iw_laddr_check(__be32 addr) ret = rdma_bind_addr(cm_id, (struct sockaddr *)&sin); /* due to this, we will claim to support IB devices unless we check node_type. */ - if (ret || !cm_id->device || - cm_id->device->node_type != RDMA_NODE_RNIC) + if (ret || cm_id->device->node_type != RDMA_NODE_RNIC) ret = -EADDRNOTAVAIL; rdsdebug("addr %pI4 ret %d node type %d\n", diff --git a/net/sched/act_api.c b/net/sched/act_api.c index 15d46b9166d..fd7072827a4 100644 --- a/net/sched/act_api.c +++ b/net/sched/act_api.c @@ -989,7 +989,7 @@ static int tc_ctl_action(struct sk_buff *skb, struct nlmsghdr *n) u32 portid = skb ? 
NETLINK_CB(skb).portid : 0; int ret = 0, ovr = 0; - if ((n->nlmsg_type != RTM_GETACTION) && !netlink_capable(skb, CAP_NET_ADMIN)) + if ((n->nlmsg_type != RTM_GETACTION) && !capable(CAP_NET_ADMIN)) return -EPERM; ret = nlmsg_parse(n, sizeof(struct tcamsg), tca, TCA_ACT_MAX, NULL); diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 2ea40d1877a..8e118af9097 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -138,7 +138,7 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n) int err; int tp_created = 0; - if ((n->nlmsg_type != RTM_GETTFILTER) && !netlink_capable(skb, CAP_NET_ADMIN)) + if ((n->nlmsg_type != RTM_GETTFILTER) && !capable(CAP_NET_ADMIN)) return -EPERM; replay: diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c index 2d2f07945c8..51b968d3feb 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c @@ -1024,7 +1024,7 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n) struct Qdisc *p = NULL; int err; - if ((n->nlmsg_type != RTM_GETQDISC) && !netlink_capable(skb, CAP_NET_ADMIN)) + if ((n->nlmsg_type != RTM_GETQDISC) && !capable(CAP_NET_ADMIN)) return -EPERM; err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL); @@ -1091,7 +1091,7 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n) struct Qdisc *q, *p; int err; - if (!netlink_capable(skb, CAP_NET_ADMIN)) + if (!capable(CAP_NET_ADMIN)) return -EPERM; replay: @@ -1431,7 +1431,7 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n) u32 qid; int err; - if ((n->nlmsg_type != RTM_GETTCLASS) && !netlink_capable(skb, CAP_NET_ADMIN)) + if ((n->nlmsg_type != RTM_GETTCLASS) && !capable(CAP_NET_ADMIN)) return -EPERM; err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL); diff --git a/net/sctp/associola.c b/net/sctp/associola.c index 62e86d98bc3..91cfd8f94a1 100644 --- a/net/sctp/associola.c +++ b/net/sctp/associola.c @@ -387,7 +387,7 @@ void sctp_association_free(struct sctp_association *asoc) /* Only real associations count against the endpoint, so * don't bother for if this is a temporary association. */ - if (!list_empty(&asoc->asocs)) { + if (!asoc->temp) { list_del(&asoc->asocs); /* Decrement the backlog value for a TCP-style listening @@ -1213,7 +1213,6 @@ void sctp_assoc_update(struct sctp_association *asoc, asoc->c = new->c; asoc->peer.rwnd = new->peer.rwnd; asoc->peer.sack_needed = new->peer.sack_needed; - asoc->peer.auth_capable = new->peer.auth_capable; asoc->peer.i = new->peer.i; sctp_tsnmap_init(&asoc->peer.tsn_map, SCTP_TSN_MAP_INITIAL, asoc->peer.i.initial_tsn, GFP_ATOMIC); diff --git a/net/sctp/auth.c b/net/sctp/auth.c index 7a19117254d..ba1dfc3f8de 100644 --- a/net/sctp/auth.c +++ b/net/sctp/auth.c @@ -393,13 +393,14 @@ nomem: */ int sctp_auth_asoc_init_active_key(struct sctp_association *asoc, gfp_t gfp) { + struct net *net = sock_net(asoc->base.sk); struct sctp_auth_bytes *secret; struct sctp_shared_key *ep_key; /* If we don't support AUTH, or peer is not capable * we don't need to do anything. 
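The hunks above swap the per-socket netlink_capable()/netlink_net_capable() helpers back to a plain capable() test in genetlink, phonet and the traffic-control (tc_ctl_*) handlers; read-only RTM_GET* requests stay unprivileged in either version. As an illustration only (not part of this commit), the userspace sketch below issues an unprivileged RTM_GETLINK dump over NETLINK_ROUTE; a modifying request such as RTM_NEWQDISC would instead run into the CAP_NET_ADMIN check shown above.

/* rtnl_dump.c - count network interfaces via an unprivileged RTM_GETLINK dump. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
	struct {
		struct nlmsghdr nlh;
		struct rtgenmsg gen;
	} req = {
		.nlh = {
			.nlmsg_len   = NLMSG_LENGTH(sizeof(struct rtgenmsg)),
			.nlmsg_type  = RTM_GETLINK,
			.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
			.nlmsg_seq   = 1,
		},
		.gen = { .rtgen_family = AF_UNSPEC },
	};
	char buf[16384];
	int fd, links = 0, done = 0;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0) {
		perror("socket");
		return 1;
	}
	/* A GET + dump request needs no CAP_NET_ADMIN; modifying requests do. */
	if (send(fd, &req, req.nlh.nlmsg_len, 0) < 0) {
		perror("send");
		return 1;
	}
	while (!done) {
		int len = recv(fd, buf, sizeof(buf), 0);
		struct nlmsghdr *nh;

		if (len < 0) {
			perror("recv");
			return 1;
		}
		for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
		     nh = NLMSG_NEXT(nh, len)) {
			if (nh->nlmsg_type == NLMSG_DONE) {
				done = 1;
				break;
			}
			if (nh->nlmsg_type == NLMSG_ERROR) {
				fprintf(stderr, "netlink error\n");
				return 1;
			}
			if (nh->nlmsg_type == RTM_NEWLINK)
				links++;
		}
	}
	printf("kernel reported %d network interfaces\n", links);
	close(fd);
	return 0;
}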
*/ - if (!asoc->ep->auth_enable || !asoc->peer.auth_capable) + if (!net->sctp.auth_enable || !asoc->peer.auth_capable) return 0; /* If the key_id is non-zero and we couldn't find an @@ -446,16 +447,16 @@ struct sctp_shared_key *sctp_auth_get_shkey( */ int sctp_auth_init_hmacs(struct sctp_endpoint *ep, gfp_t gfp) { + struct net *net = sock_net(ep->base.sk); struct crypto_hash *tfm = NULL; __u16 id; - /* If AUTH extension is disabled, we are done */ - if (!ep->auth_enable) { + /* if the transforms are already allocted, we are done */ + if (!net->sctp.auth_enable) { ep->auth_hmacs = NULL; return 0; } - /* If the transforms are already allocated, we are done */ if (ep->auth_hmacs) return 0; @@ -676,10 +677,12 @@ static int __sctp_auth_cid(sctp_cid_t chunk, struct sctp_chunks_param *param) /* Check if peer requested that this chunk is authenticated */ int sctp_auth_send_cid(sctp_cid_t chunk, const struct sctp_association *asoc) { + struct net *net; if (!asoc) return 0; - if (!asoc->ep->auth_enable || !asoc->peer.auth_capable) + net = sock_net(asoc->base.sk); + if (!net->sctp.auth_enable || !asoc->peer.auth_capable) return 0; return __sctp_auth_cid(chunk, asoc->peer.peer_chunks); @@ -688,10 +691,12 @@ int sctp_auth_send_cid(sctp_cid_t chunk, const struct sctp_association *asoc) /* Check if we requested that peer authenticate this chunk. */ int sctp_auth_recv_cid(sctp_cid_t chunk, const struct sctp_association *asoc) { + struct net *net; if (!asoc) return 0; - if (!asoc->ep->auth_enable) + net = sock_net(asoc->base.sk); + if (!net->sctp.auth_enable) return 0; return __sctp_auth_cid(chunk, diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c index e09f906514d..5fbd7bc6bb1 100644 --- a/net/sctp/endpointola.c +++ b/net/sctp/endpointola.c @@ -75,8 +75,7 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep, if (!ep->digest) return NULL; - ep->auth_enable = net->sctp.auth_enable; - if (ep->auth_enable) { + if (net->sctp.auth_enable) { /* Allocate space for HMACS and CHUNKS authentication * variables. There are arrays that we encode directly * into parameters to make the rest of the operations easier. 
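These SCTP hunks move the AUTH on/off decision from the per-endpoint ep->auth_enable flag back to the per-namespace net->sctp.auth_enable sysctl. The sketch below is only an illustration of how that switch is visible from userspace; it assumes SCTP support is loaded and that the lksctp-tools <netinet/sctp.h> header is available, and chunk type 4 (HEARTBEAT) is just an example value. With AUTH disabled, SCTP_AUTH_CHUNK fails with EACCES, matching the checks above.

/* sctp_auth_probe.c - report the SCTP AUTH sysctl and probe SCTP_AUTH_CHUNK.
 * Assumes IPPROTO_SCTP support and the lksctp-tools <netinet/sctp.h> header.
 */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

int main(void)
{
	/* With this revert the per-netns sysctl is the single switch again. */
	FILE *f = fopen("/proc/sys/net/sctp/auth_enable", "r");
	int enabled = -1;

	if (f) {
		if (fscanf(f, "%d", &enabled) == 1)
			printf("net.sctp.auth_enable = %d\n", enabled);
		fclose(f);
	} else {
		perror("auth_enable sysctl");
	}

	int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
	if (fd < 0) {
		perror("SCTP socket");
		return 1;
	}

	/* Ask for HEARTBEAT chunks (type 4) to be authenticated.  While
	 * auth_enable is 0 the kernel rejects this with EACCES.
	 */
	struct sctp_authchunk ac = { .sauth_chunk = 4 };
	if (setsockopt(fd, IPPROTO_SCTP, SCTP_AUTH_CHUNK, &ac, sizeof(ac)) < 0)
		printf("SCTP_AUTH_CHUNK: %s\n", strerror(errno));
	else
		printf("SCTP_AUTH_CHUNK accepted\n");

	close(fd);
	return 0;
}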
diff --git a/net/sctp/output.c b/net/sctp/output.c index b6f5fc3127b..0beb2f9c8a7 100644 --- a/net/sctp/output.c +++ b/net/sctp/output.c @@ -618,7 +618,7 @@ out: return err; no_route: kfree_skb(nskb); - IP_INC_STATS(sock_net(asoc->base.sk), IPSTATS_MIB_OUTNOROUTES); + IP_INC_STATS_BH(sock_net(asoc->base.sk), IPSTATS_MIB_OUTNOROUTES); /* FIXME: Returning the 'err' will effect all the associations * associated with a socket, although only one of the paths of the diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c index 5a3c1c0a84a..eaee00c6113 100644 --- a/net/sctp/protocol.c +++ b/net/sctp/protocol.c @@ -498,13 +498,8 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr, continue; if ((laddr->state == SCTP_ADDR_SRC) && (AF_INET == laddr->a.sa.sa_family)) { + fl4->saddr = laddr->a.v4.sin_addr.s_addr; fl4->fl4_sport = laddr->a.v4.sin_port; - flowi4_update_output(fl4, - asoc->base.sk->sk_bound_dev_if, - RT_CONN_FLAGS(asoc->base.sk), - daddr->v4.sin_addr.s_addr, - laddr->a.v4.sin_addr.s_addr); - rt = ip_route_output_key(sock_net(sk), fl4); if (!IS_ERR(rt)) { dst = &rt->dst; diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c index 87e244be899..cf579e71cff 100644 --- a/net/sctp/sm_make_chunk.c +++ b/net/sctp/sm_make_chunk.c @@ -199,7 +199,6 @@ struct sctp_chunk *sctp_make_init(const struct sctp_association *asoc, gfp_t gfp, int vparam_len) { struct net *net = sock_net(asoc->base.sk); - struct sctp_endpoint *ep = asoc->ep; sctp_inithdr_t init; union sctp_params addrs; size_t chunksize; @@ -259,7 +258,7 @@ struct sctp_chunk *sctp_make_init(const struct sctp_association *asoc, chunksize += vparam_len; /* Account for AUTH related parameters */ - if (ep->auth_enable) { + if (net->sctp.auth_enable) { /* Add random parameter length*/ chunksize += sizeof(asoc->c.auth_random); @@ -344,7 +343,7 @@ struct sctp_chunk *sctp_make_init(const struct sctp_association *asoc, } /* Add SCTP-AUTH chunks to the parameter list */ - if (ep->auth_enable) { + if (net->sctp.auth_enable) { sctp_addto_chunk(retval, sizeof(asoc->c.auth_random), asoc->c.auth_random); if (auth_hmacs) @@ -1404,8 +1403,8 @@ static void sctp_chunk_destroy(struct sctp_chunk *chunk) BUG_ON(!list_empty(&chunk->list)); list_del_init(&chunk->transmitted_list); - consume_skb(chunk->skb); - consume_skb(chunk->auth_chunk); + /* Free the chunk skb data and the SCTP_chunk stub itself. */ + dev_kfree_skb(chunk->skb); SCTP_DBG_OBJCNT_DEC(chunk); kmem_cache_free(sctp_chunk_cachep, chunk); @@ -1996,7 +1995,7 @@ static void sctp_process_ext_param(struct sctp_association *asoc, /* if the peer reports AUTH, assume that he * supports AUTH. 
*/ - if (asoc->ep->auth_enable) + if (net->sctp.auth_enable) asoc->peer.auth_capable = 1; break; case SCTP_CID_ASCONF: @@ -2088,7 +2087,6 @@ static sctp_ierror_t sctp_process_unk_param(const struct sctp_association *asoc, * SCTP_IERROR_NO_ERROR - continue with the chunk */ static sctp_ierror_t sctp_verify_param(struct net *net, - const struct sctp_endpoint *ep, const struct sctp_association *asoc, union sctp_params param, sctp_cid_t cid, @@ -2139,7 +2137,7 @@ static sctp_ierror_t sctp_verify_param(struct net *net, goto fallthrough; case SCTP_PARAM_RANDOM: - if (!ep->auth_enable) + if (!net->sctp.auth_enable) goto fallthrough; /* SCTP-AUTH: Secion 6.1 @@ -2156,7 +2154,7 @@ static sctp_ierror_t sctp_verify_param(struct net *net, break; case SCTP_PARAM_CHUNKS: - if (!ep->auth_enable) + if (!net->sctp.auth_enable) goto fallthrough; /* SCTP-AUTH: Section 3.2 @@ -2172,7 +2170,7 @@ static sctp_ierror_t sctp_verify_param(struct net *net, break; case SCTP_PARAM_HMAC_ALGO: - if (!ep->auth_enable) + if (!net->sctp.auth_enable) goto fallthrough; hmacs = (struct sctp_hmac_algo_param *)param.p; @@ -2206,9 +2204,10 @@ fallthrough: } /* Verify the INIT packet before we process it. */ -int sctp_verify_init(struct net *net, const struct sctp_endpoint *ep, - const struct sctp_association *asoc, sctp_cid_t cid, - sctp_init_chunk_t *peer_init, struct sctp_chunk *chunk, +int sctp_verify_init(struct net *net, const struct sctp_association *asoc, + sctp_cid_t cid, + sctp_init_chunk_t *peer_init, + struct sctp_chunk *chunk, struct sctp_chunk **errp) { union sctp_params param; @@ -2251,8 +2250,8 @@ int sctp_verify_init(struct net *net, const struct sctp_endpoint *ep, /* Verify all the variable length parameters */ sctp_walk_params(param, peer_init, init_hdr.params) { - result = sctp_verify_param(net, ep, asoc, param, cid, - chunk, errp); + + result = sctp_verify_param(net, asoc, param, cid, chunk, errp); switch (result) { case SCTP_IERROR_ABORT: case SCTP_IERROR_NOMEM: @@ -2484,7 +2483,6 @@ static int sctp_process_param(struct sctp_association *asoc, struct sctp_af *af; union sctp_addr_param *addr_param; struct sctp_transport *t; - struct sctp_endpoint *ep = asoc->ep; /* We maintain all INIT parameters in network byte order all the * time. This allows us to not worry about whether the parameters @@ -2625,7 +2623,7 @@ do_addr_param: goto fall_through; case SCTP_PARAM_RANDOM: - if (!ep->auth_enable) + if (!net->sctp.auth_enable) goto fall_through; /* Save peer's random parameter */ @@ -2638,7 +2636,7 @@ do_addr_param: break; case SCTP_PARAM_HMAC_ALGO: - if (!ep->auth_enable) + if (!net->sctp.auth_enable) goto fall_through; /* Save peer's HMAC list */ @@ -2654,7 +2652,7 @@ do_addr_param: break; case SCTP_PARAM_CHUNKS: - if (!ep->auth_enable) + if (!net->sctp.auth_enable) goto fall_through; asoc->peer.peer_chunks = kmemdup(param.p, diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c index edc204b05c8..de1a0138317 100644 --- a/net/sctp/sm_statefuns.c +++ b/net/sctp/sm_statefuns.c @@ -364,7 +364,7 @@ sctp_disposition_t sctp_sf_do_5_1B_init(struct net *net, /* Verify the INIT chunk before processing it. */ err_chunk = NULL; - if (!sctp_verify_init(net, ep, asoc, chunk->chunk_hdr->type, + if (!sctp_verify_init(net, asoc, chunk->chunk_hdr->type, (sctp_init_chunk_t *)chunk->chunk_hdr, chunk, &err_chunk)) { /* This chunk contains fatal error. It is to be discarded. @@ -531,7 +531,7 @@ sctp_disposition_t sctp_sf_do_5_1C_ack(struct net *net, /* Verify the INIT chunk before processing it. 
*/ err_chunk = NULL; - if (!sctp_verify_init(net, ep, asoc, chunk->chunk_hdr->type, + if (!sctp_verify_init(net, asoc, chunk->chunk_hdr->type, (sctp_init_chunk_t *)chunk->chunk_hdr, chunk, &err_chunk)) { @@ -765,12 +765,6 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(struct net *net, struct sctp_chunk auth; sctp_ierror_t ret; - /* Make sure that we and the peer are AUTH capable */ - if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) { - sctp_association_free(new_asoc); - return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); - } - /* set-up our fake chunk so that we can process it */ auth.skb = chunk->auth_chunk; auth.asoc = chunk->asoc; @@ -781,6 +775,10 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(struct net *net, auth.transport = chunk->transport; ret = sctp_sf_authenticate(net, ep, new_asoc, type, &auth); + + /* We can now safely free the auth_chunk clone */ + kfree_skb(chunk->auth_chunk); + if (ret != SCTP_IERROR_NO_ERROR) { sctp_association_free(new_asoc); return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); @@ -1437,7 +1435,7 @@ static sctp_disposition_t sctp_sf_do_unexpected_init( /* Verify the INIT chunk before processing it. */ err_chunk = NULL; - if (!sctp_verify_init(net, ep, asoc, chunk->chunk_hdr->type, + if (!sctp_verify_init(net, asoc, chunk->chunk_hdr->type, (sctp_init_chunk_t *)chunk->chunk_hdr, chunk, &err_chunk)) { /* This chunk contains fatal error. It is to be discarded. @@ -1782,22 +1780,9 @@ static sctp_disposition_t sctp_sf_do_dupcook_a(struct net *net, /* Update the content of current association. */ sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc)); sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev)); - if (sctp_state(asoc, SHUTDOWN_PENDING) && - (sctp_sstate(asoc->base.sk, CLOSING) || - sock_flag(asoc->base.sk, SOCK_DEAD))) { - /* if were currently in SHUTDOWN_PENDING, but the socket - * has been closed by user, don't transition to ESTABLISHED. - * Instead trigger SHUTDOWN bundled with COOKIE_ACK. 
- */ - sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl)); - return sctp_sf_do_9_2_start_shutdown(net, ep, asoc, - SCTP_ST_CHUNK(0), NULL, - commands); - } else { - sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE, - SCTP_STATE(SCTP_STATE_ESTABLISHED)); - sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl)); - } + sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE, + SCTP_STATE(SCTP_STATE_ESTABLISHED)); + sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl)); return SCTP_DISPOSITION_CONSUME; nomem_ev: diff --git a/net/sctp/socket.c b/net/sctp/socket.c index dfb9b133e66..8554e5eebae 100644 --- a/net/sctp/socket.c +++ b/net/sctp/socket.c @@ -3318,10 +3318,10 @@ static int sctp_setsockopt_auth_chunk(struct sock *sk, char __user *optval, unsigned int optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_authchunk val; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; if (optlen != sizeof(struct sctp_authchunk)) @@ -3338,7 +3338,7 @@ static int sctp_setsockopt_auth_chunk(struct sock *sk, } /* add this chunk id to the endpoint */ - return sctp_auth_ep_add_chunkid(ep, val.sauth_chunk); + return sctp_auth_ep_add_chunkid(sctp_sk(sk)->ep, val.sauth_chunk); } /* @@ -3351,12 +3351,12 @@ static int sctp_setsockopt_hmac_ident(struct sock *sk, char __user *optval, unsigned int optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_hmacalgo *hmacs; u32 idents; int err; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; if (optlen < sizeof(struct sctp_hmacalgo)) @@ -3373,7 +3373,7 @@ static int sctp_setsockopt_hmac_ident(struct sock *sk, goto out; } - err = sctp_auth_ep_set_hmacs(ep, hmacs); + err = sctp_auth_ep_set_hmacs(sctp_sk(sk)->ep, hmacs); out: kfree(hmacs); return err; @@ -3389,12 +3389,12 @@ static int sctp_setsockopt_auth_key(struct sock *sk, char __user *optval, unsigned int optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_authkey *authkey; struct sctp_association *asoc; int ret; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; if (optlen <= sizeof(struct sctp_authkey)) @@ -3415,7 +3415,7 @@ static int sctp_setsockopt_auth_key(struct sock *sk, goto out; } - ret = sctp_auth_set_key(ep, asoc, authkey); + ret = sctp_auth_set_key(sctp_sk(sk)->ep, asoc, authkey); out: kzfree(authkey); return ret; @@ -3431,11 +3431,11 @@ static int sctp_setsockopt_active_key(struct sock *sk, char __user *optval, unsigned int optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_authkeyid val; struct sctp_association *asoc; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; if (optlen != sizeof(struct sctp_authkeyid)) @@ -3447,7 +3447,8 @@ static int sctp_setsockopt_active_key(struct sock *sk, if (!asoc && val.scact_assoc_id && sctp_style(sk, UDP)) return -EINVAL; - return sctp_auth_set_active_key(ep, asoc, val.scact_keynumber); + return sctp_auth_set_active_key(sctp_sk(sk)->ep, asoc, + val.scact_keynumber); } /* @@ -3459,11 +3460,11 @@ static int sctp_setsockopt_del_key(struct sock *sk, char __user *optval, unsigned int optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_authkeyid val; struct sctp_association *asoc; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; if (optlen != sizeof(struct sctp_authkeyid)) @@ -3475,7 +3476,8 @@ static int 
sctp_setsockopt_del_key(struct sock *sk, if (!asoc && val.scact_assoc_id && sctp_style(sk, UDP)) return -EINVAL; - return sctp_auth_del_key_id(ep, asoc, val.scact_keynumber); + return sctp_auth_del_key_id(sctp_sk(sk)->ep, asoc, + val.scact_keynumber); } @@ -5366,16 +5368,16 @@ static int sctp_getsockopt_maxburst(struct sock *sk, int len, static int sctp_getsockopt_hmac_ident(struct sock *sk, int len, char __user *optval, int __user *optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_hmacalgo __user *p = (void __user *)optval; struct sctp_hmac_algo_param *hmacs; __u16 data_len = 0; u32 num_idents; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; - hmacs = ep->auth_hmacs_list; + hmacs = sctp_sk(sk)->ep->auth_hmacs_list; data_len = ntohs(hmacs->param_hdr.length) - sizeof(sctp_paramhdr_t); if (len < sizeof(struct sctp_hmacalgo) + data_len) @@ -5396,11 +5398,11 @@ static int sctp_getsockopt_hmac_ident(struct sock *sk, int len, static int sctp_getsockopt_active_key(struct sock *sk, int len, char __user *optval, int __user *optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_authkeyid val; struct sctp_association *asoc; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; if (len < sizeof(struct sctp_authkeyid)) @@ -5415,7 +5417,7 @@ static int sctp_getsockopt_active_key(struct sock *sk, int len, if (asoc) val.scact_keynumber = asoc->active_key_id; else - val.scact_keynumber = ep->active_key_id; + val.scact_keynumber = sctp_sk(sk)->ep->active_key_id; len = sizeof(struct sctp_authkeyid); if (put_user(len, optlen)) @@ -5429,7 +5431,7 @@ static int sctp_getsockopt_active_key(struct sock *sk, int len, static int sctp_getsockopt_peer_auth_chunks(struct sock *sk, int len, char __user *optval, int __user *optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_authchunks __user *p = (void __user *)optval; struct sctp_authchunks val; struct sctp_association *asoc; @@ -5437,7 +5439,7 @@ static int sctp_getsockopt_peer_auth_chunks(struct sock *sk, int len, u32 num_chunks = 0; char __user *to; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; if (len < sizeof(struct sctp_authchunks)) @@ -5473,7 +5475,7 @@ num: static int sctp_getsockopt_local_auth_chunks(struct sock *sk, int len, char __user *optval, int __user *optlen) { - struct sctp_endpoint *ep = sctp_sk(sk)->ep; + struct net *net = sock_net(sk); struct sctp_authchunks __user *p = (void __user *)optval; struct sctp_authchunks val; struct sctp_association *asoc; @@ -5481,7 +5483,7 @@ static int sctp_getsockopt_local_auth_chunks(struct sock *sk, int len, u32 num_chunks = 0; char __user *to; - if (!ep->auth_enable) + if (!net->sctp.auth_enable) return -EACCES; if (len < sizeof(struct sctp_authchunks)) @@ -5498,7 +5500,7 @@ static int sctp_getsockopt_local_auth_chunks(struct sock *sk, int len, if (asoc) ch = (struct sctp_chunks_param*)asoc->c.auth_chunks; else - ch = ep->auth_chunk_list; + ch = sctp_sk(sk)->ep->auth_chunk_list; if (!ch) goto num; @@ -6580,46 +6582,6 @@ static void __sctp_write_space(struct sctp_association *asoc) } } -static void sctp_wake_up_waiters(struct sock *sk, - struct sctp_association *asoc) -{ - struct sctp_association *tmp = asoc; - - /* We do accounting for the sndbuf space per association, - * so we only need to wake our own association. 
- */ - if (asoc->ep->sndbuf_policy) - return __sctp_write_space(asoc); - - /* If association goes down and is just flushing its - * outq, then just normally notify others. - */ - if (asoc->base.dead) - return sctp_write_space(sk); - - /* Accounting for the sndbuf space is per socket, so we - * need to wake up others, try to be fair and in case of - * other associations, let them have a go first instead - * of just doing a sctp_write_space() call. - * - * Note that we reach sctp_wake_up_waiters() only when - * associations free up queued chunks, thus we are under - * lock and the list of associations on a socket is - * guaranteed not to change. - */ - for (tmp = list_next_entry(tmp, asocs); 1; - tmp = list_next_entry(tmp, asocs)) { - /* Manually skip the head element. */ - if (&tmp->asocs == &((sctp_sk(sk))->ep->asocs)) - continue; - /* Wake up association. */ - __sctp_write_space(tmp); - /* We've reached the end. */ - if (tmp == asoc) - break; - } -} - /* Do accounting for the sndbuf space. * Decrement the used sndbuf space of the corresponding association by the * data size which was just transmitted(freed). @@ -6647,7 +6609,7 @@ static void sctp_wfree(struct sk_buff *skb) sk_mem_uncharge(sk, skb->truesize); sock_wfree(skb); - sctp_wake_up_waiters(sk, asoc); + __sctp_write_space(asoc); sctp_association_put(asoc); } diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c index 29299dcabfb..bf3c6e8fc40 100644 --- a/net/sctp/sysctl.c +++ b/net/sctp/sysctl.c @@ -65,11 +65,8 @@ extern int sysctl_sctp_wmem[3]; static int proc_sctp_do_hmac_alg(ctl_table *ctl, int write, void __user *buffer, size_t *lenp, - loff_t *ppos); -static int proc_sctp_do_auth(struct ctl_table *ctl, int write, - void __user *buffer, size_t *lenp, - loff_t *ppos); + loff_t *ppos); static ctl_table sctp_table[] = { { .procname = "sctp_mem", @@ -270,7 +267,7 @@ static ctl_table sctp_net_table[] = { .data = &init_net.sctp.auth_enable, .maxlen = sizeof(int), .mode = 0644, - .proc_handler = proc_sctp_do_auth, + .proc_handler = proc_dointvec, }, { .procname = "addr_scope_policy", @@ -351,36 +348,6 @@ static int proc_sctp_do_hmac_alg(ctl_table *ctl, return ret; } -static int proc_sctp_do_auth(struct ctl_table *ctl, int write, - void __user *buffer, size_t *lenp, - loff_t *ppos) -{ - struct net *net = current->nsproxy->net_ns; - struct ctl_table tbl; - int new_value, ret; - - memset(&tbl, 0, sizeof(struct ctl_table)); - tbl.maxlen = sizeof(unsigned int); - - if (write) - tbl.data = &new_value; - else - tbl.data = &net->sctp.auth_enable; - - ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); - if (write && ret == 0) { - struct sock *sk = net->sctp.ctl_sock; - - net->sctp.auth_enable = new_value; - /* Update the value in the control socket */ - lock_sock(sk); - sctp_sk(sk)->ep->auth_enable = new_value; - release_sock(sk); - } - - return ret; -} - int sctp_sysctl_net_register(struct net *net) { struct ctl_table *table; diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c index ca907f2f5e5..10c018a5b9f 100644 --- a/net/sctp/ulpevent.c +++ b/net/sctp/ulpevent.c @@ -373,10 +373,9 @@ fail: * specification [SCTP] and any extensions for a list of possible * error formats. 
*/ -struct sctp_ulpevent * -sctp_ulpevent_make_remote_error(const struct sctp_association *asoc, - struct sctp_chunk *chunk, __u16 flags, - gfp_t gfp) +struct sctp_ulpevent *sctp_ulpevent_make_remote_error( + const struct sctp_association *asoc, struct sctp_chunk *chunk, + __u16 flags, gfp_t gfp) { struct sctp_ulpevent *event; struct sctp_remote_error *sre; @@ -395,7 +394,8 @@ sctp_ulpevent_make_remote_error(const struct sctp_association *asoc, /* Copy the skb to a new skb with room for us to prepend * notification with. */ - skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp); + skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error), + 0, gfp); /* Pull off the rest of the cause TLV from the chunk. */ skb_pull(chunk->skb, elen); @@ -406,21 +406,62 @@ sctp_ulpevent_make_remote_error(const struct sctp_association *asoc, event = sctp_skb2event(skb); sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize); - sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre)); + sre = (struct sctp_remote_error *) + skb_push(skb, sizeof(struct sctp_remote_error)); /* Trim the buffer to the right length. */ - skb_trim(skb, sizeof(*sre) + elen); + skb_trim(skb, sizeof(struct sctp_remote_error) + elen); - /* RFC6458, Section 6.1.3. SCTP_REMOTE_ERROR */ - memset(sre, 0, sizeof(*sre)); + /* Socket Extensions for SCTP + * 5.3.1.3 SCTP_REMOTE_ERROR + * + * sre_type: + * It should be SCTP_REMOTE_ERROR. + */ sre->sre_type = SCTP_REMOTE_ERROR; + + /* + * Socket Extensions for SCTP + * 5.3.1.3 SCTP_REMOTE_ERROR + * + * sre_flags: 16 bits (unsigned integer) + * Currently unused. + */ sre->sre_flags = 0; + + /* Socket Extensions for SCTP + * 5.3.1.3 SCTP_REMOTE_ERROR + * + * sre_length: sizeof (__u32) + * + * This field is the total length of the notification data, + * including the notification header. + */ sre->sre_length = skb->len; + + /* Socket Extensions for SCTP + * 5.3.1.3 SCTP_REMOTE_ERROR + * + * sre_error: 16 bits (unsigned integer) + * This value represents one of the Operational Error causes defined in + * the SCTP specification, in network byte order. + */ sre->sre_error = cause; + + /* Socket Extensions for SCTP + * 5.3.1.3 SCTP_REMOTE_ERROR + * + * sre_assoc_id: sizeof (sctp_assoc_t) + * + * The association id field, holds the identifier for the association. + * All notifications for a given association have the same association + * identifier. For TCP style socket, this field is ignored. + */ sctp_ulpevent_set_owner(event, asoc); sre->sre_assoc_id = sctp_assoc2id(asoc); return event; + fail: return NULL; } @@ -865,9 +906,7 @@ __u16 sctp_ulpevent_get_notification_type(const struct sctp_ulpevent *event) return notification->sn_header.sn_type; } -/* RFC6458, Section 5.3.2. SCTP Header Information Structure - * (SCTP_SNDRCV, DEPRECATED) - */ +/* Copy out the sndrcvinfo into a msghdr. */ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, struct msghdr *msghdr) { @@ -876,21 +915,74 @@ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, if (sctp_ulpevent_is_notification(event)) return; - memset(&sinfo, 0, sizeof(sinfo)); + /* Sockets API Extensions for SCTP + * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV) + * + * sinfo_stream: 16 bits (unsigned integer) + * + * For recvmsg() the SCTP stack places the message's stream number in + * this value. 
+ */ sinfo.sinfo_stream = event->stream; + /* sinfo_ssn: 16 bits (unsigned integer) + * + * For recvmsg() this value contains the stream sequence number that + * the remote endpoint placed in the DATA chunk. For fragmented + * messages this is the same number for all deliveries of the message + * (if more than one recvmsg() is needed to read the message). + */ sinfo.sinfo_ssn = event->ssn; + /* sinfo_ppid: 32 bits (unsigned integer) + * + * In recvmsg() this value is + * the same information that was passed by the upper layer in the peer + * application. Please note that byte order issues are NOT accounted + * for and this information is passed opaquely by the SCTP stack from + * one end to the other. + */ sinfo.sinfo_ppid = event->ppid; + /* sinfo_flags: 16 bits (unsigned integer) + * + * This field may contain any of the following flags and is composed of + * a bitwise OR of these values. + * + * recvmsg() flags: + * + * SCTP_UNORDERED - This flag is present when the message was sent + * non-ordered. + */ sinfo.sinfo_flags = event->flags; + /* sinfo_tsn: 32 bit (unsigned integer) + * + * For the receiving side, this field holds a TSN that was + * assigned to one of the SCTP Data Chunks. + */ sinfo.sinfo_tsn = event->tsn; + /* sinfo_cumtsn: 32 bit (unsigned integer) + * + * This field will hold the current cumulative TSN as + * known by the underlying SCTP layer. Note this field is + * ignored when sending and only valid for a receive + * operation when sinfo_flags are set to SCTP_UNORDERED. + */ sinfo.sinfo_cumtsn = event->cumtsn; + /* sinfo_assoc_id: sizeof (sctp_assoc_t) + * + * The association handle field, sinfo_assoc_id, holds the identifier + * for the association announced in the COMMUNICATION_UP notification. + * All notifications for a given association have the same identifier. + * Ignored for one-to-one style sockets. + */ sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc); - /* Context value that is set via SCTP_CONTEXT socket option. */ + + /* context value that is set via SCTP_CONTEXT socket option. */ sinfo.sinfo_context = event->asoc->default_rcv_context; + /* These fields are not used while receiving. 
*/ sinfo.sinfo_timetolive = 0; put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV, - sizeof(sinfo), &sinfo); + sizeof(struct sctp_sndrcvinfo), (void *)&sinfo); } /* Do accounting for bytes received and hold a reference to the association diff --git a/net/socket.c b/net/socket.c index e85c4ec47a1..6b315a8cd0c 100644 --- a/net/socket.c +++ b/net/socket.c @@ -1969,10 +1969,6 @@ static int copy_msghdr_from_user(struct msghdr *kmsg, { if (copy_from_user(kmsg, umsg, sizeof(struct msghdr))) return -EFAULT; - - if (kmsg->msg_namelen < 0) - return -EINVAL; - if (kmsg->msg_namelen > sizeof(struct sockaddr_storage)) kmsg->msg_namelen = sizeof(struct sockaddr_storage); return 0; diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c index b9aad4723a9..80a6640f329 100644 --- a/net/sunrpc/svc_xprt.c +++ b/net/sunrpc/svc_xprt.c @@ -730,8 +730,6 @@ static int svc_handle_xprt(struct svc_rqst *rqstp, struct svc_xprt *xprt) newxpt = xprt->xpt_ops->xpo_accept(xprt); if (newxpt) svc_add_new_temp_xprt(serv, newxpt); - else - module_put(xprt->xpt_class->xcl_owner); } else if (xprt->xpt_ops->xpo_has_wspace(xprt)) { /* XPT_DATA|XPT_DEFERRED case: */ dprintk("svc: server %p, pool %u, transport %p, inuse=%d\n", diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 5c62c5e89b4..305374d4fb9 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -683,7 +683,6 @@ static struct svc_xprt_class svc_udp_class = { .xcl_owner = THIS_MODULE, .xcl_ops = &svc_udp_ops, .xcl_max_payload = RPCSVC_MAXPAYLOAD_UDP, - .xcl_ident = XPRT_TRANSPORT_UDP, }; static void svc_udp_init(struct svc_sock *svsk, struct svc_serv *serv) @@ -1276,7 +1275,6 @@ static struct svc_xprt_class svc_tcp_class = { .xcl_owner = THIS_MODULE, .xcl_ops = &svc_tcp_ops, .xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP, - .xcl_ident = XPRT_TRANSPORT_TCP, }; void svc_init_xprt_sock(void) @@ -1395,22 +1393,6 @@ static struct svc_sock *svc_setup_socket(struct svc_serv *serv, return svsk; } -bool svc_alien_sock(struct net *net, int fd) -{ - int err; - struct socket *sock = sockfd_lookup(fd, &err); - bool ret = false; - - if (!sock) - goto out; - if (sock_net(sock->sk) != net) - ret = true; - sockfd_put(sock); -out: - return ret; -} -EXPORT_SYMBOL_GPL(svc_alien_sock); - /** * svc_addsock - add a listener socket to an RPC service * @serv: pointer to RPC service to which to add a new listener diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c index 42ce6bfc729..095363eee76 100644 --- a/net/sunrpc/xprt.c +++ b/net/sunrpc/xprt.c @@ -1290,7 +1290,7 @@ struct rpc_xprt *xprt_create_transport(struct xprt_create *args) } } spin_unlock(&xprt_list_lock); - dprintk("RPC: transport (%d) not supported\n", args->ident); + printk(KERN_ERR "RPC: transport (%d) not supported\n", args->ident); return ERR_PTR(-EIO); found: diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c index ed36cb52cd8..62e4f9bcc38 100644 --- a/net/sunrpc/xprtrdma/svc_rdma_transport.c +++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c @@ -89,7 +89,6 @@ struct svc_xprt_class svc_rdma_class = { .xcl_owner = THIS_MODULE, .xcl_ops = &svc_rdma_ops, .xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP, - .xcl_ident = XPRT_TRANSPORT_RDMA, }; struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt) diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c index bf2755419ec..e5f3da50782 100644 --- a/net/tipc/bcast.c +++ b/net/tipc/bcast.c @@ -531,7 +531,6 @@ receive: buf = node->bclink.deferred_head; node->bclink.deferred_head = buf->next; - buf->next = NULL; 
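The net/socket.c hunk above drops the negative msg_namelen check while keeping the clamp to sizeof(struct sockaddr_storage), which is also the buffer size userspace is expected to pass for msg_name. A small, generic recvmsg() sketch (loopback UDP, illustration only, not taken from the commit) follows.

/* recvmsg_name.c - one loopback UDP datagram, received with a
 * sockaddr_storage-sized msg_name buffer.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	int rx = socket(AF_INET, SOCK_DGRAM, 0);
	int tx = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in addr;
	socklen_t alen = sizeof(addr);

	if (rx < 0 || tx < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = 0;			/* let the kernel pick a port */
	if (bind(rx, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}
	getsockname(rx, (struct sockaddr *)&addr, &alen);

	const char payload[] = "ping";
	sendto(tx, payload, sizeof(payload), 0,
	       (struct sockaddr *)&addr, sizeof(addr));

	/* msg_name should be a full sockaddr_storage; the kernel clamps
	 * msg_namelen to that size when it copies the msghdr in.
	 */
	struct sockaddr_storage peer;
	char buf[64];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = {
		.msg_name    = &peer,
		.msg_namelen = sizeof(peer),
		.msg_iov     = &iov,
		.msg_iovlen  = 1,
	};

	ssize_t n = recvmsg(rx, &msg, 0);
	if (n < 0) {
		perror("recvmsg");
		return 1;
	}
	printf("received %zd bytes, sender address length %u\n",
	       n, (unsigned)msg.msg_namelen);

	close(rx);
	close(tx);
	return 0;
}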
node->bclink.deferred_size--; goto receive; } diff --git a/net/tipc/netlink.c b/net/tipc/netlink.c index 1e6081fb607..8bcd4985d0f 100644 --- a/net/tipc/netlink.c +++ b/net/tipc/netlink.c @@ -47,7 +47,7 @@ static int handle_cmd(struct sk_buff *skb, struct genl_info *info) int hdr_space = nlmsg_total_size(GENL_HDRLEN + TIPC_GENL_HDRLEN); u16 cmd; - if ((req_userhdr->cmd & 0xC000) && (!netlink_capable(skb, CAP_NET_ADMIN))) + if ((req_userhdr->cmd & 0xC000) && (!capable(CAP_NET_ADMIN))) cmd = TIPC_CMD_NOT_NET_ADMIN; else cmd = req_userhdr->cmd; diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index c80c107139f..f246812680d 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -161,8 +161,9 @@ static inline void unix_set_secdata(struct scm_cookie *scm, struct sk_buff *skb) static inline unsigned int unix_hash_fold(__wsum n) { - unsigned int hash = (__force unsigned int)csum_fold(n); + unsigned int hash = (__force unsigned int)n; + hash ^= hash>>16; hash ^= hash>>8; return hash&(UNIX_HASH_SIZE-1); } @@ -1793,11 +1794,8 @@ static int unix_dgram_recvmsg(struct kiocb *iocb, struct socket *sock, goto out; err = mutex_lock_interruptible(&u->readlock); - if (unlikely(err)) { - /* recvmsg() in non blocking mode is supposed to return -EAGAIN - * sk_rcvtimeo is not honored by mutex_lock_interruptible() - */ - err = noblock ? -EAGAIN : -ERESTARTSYS; + if (err) { + err = sock_intr_errno(sock_rcvtimeo(sk, noblock)); goto out; } @@ -1917,7 +1915,6 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock, struct unix_sock *u = unix_sk(sk); struct sockaddr_un *sunaddr = msg->msg_name; int copied = 0; - int noblock = flags & MSG_DONTWAIT; int check_creds = 0; int target; int err = 0; @@ -1933,7 +1930,7 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock, goto out; target = sock_rcvlowat(sk, flags&MSG_WAITALL, size); - timeo = sock_rcvtimeo(sk, noblock); + timeo = sock_rcvtimeo(sk, flags&MSG_DONTWAIT); /* Lock the socket to prevent queue disordering * while sleeps in memcpy_tomsg @@ -1945,11 +1942,8 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock, } err = mutex_lock_interruptible(&u->readlock); - if (unlikely(err)) { - /* recvmsg() in non blocking mode is supposed to return -EAGAIN - * sk_rcvtimeo is not honored by mutex_lock_interruptible() - */ - err = noblock ? 
-EAGAIN : -ERESTARTSYS; + if (err) { + err = sock_intr_errno(timeo); goto out; } diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index 29ebabda0b7..b145256392b 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -6577,9 +6577,6 @@ int cfg80211_testmode_reply(struct sk_buff *skb) void *hdr = ((void **)skb->cb)[1]; struct nlattr *data = ((void **)skb->cb)[2]; - /* clear CB data for netlink core to own from now on */ - memset(skb->cb, 0, sizeof(skb->cb)); - if (WARN_ON(!rdev->testmode_info)) { kfree_skb(skb); return -EINVAL; @@ -6606,9 +6603,6 @@ void cfg80211_testmode_event(struct sk_buff *skb, gfp_t gfp) void *hdr = ((void **)skb->cb)[1]; struct nlattr *data = ((void **)skb->cb)[2]; - /* clear CB data for netlink core to own from now on */ - memset(skb->cb, 0, sizeof(skb->cb)); - nla_nest_end(skb, data); genlmsg_end(skb, hdr); genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), skb, 0, diff --git a/net/wireless/trace.h b/net/wireless/trace.h index bc5a75b1aef..5755bc14abb 100644 --- a/net/wireless/trace.h +++ b/net/wireless/trace.h @@ -1972,8 +1972,7 @@ TRACE_EVENT(cfg80211_michael_mic_failure, MAC_ASSIGN(addr, addr); __entry->key_type = key_type; __entry->key_id = key_id; - if (tsc) - memcpy(__entry->tsc, tsc, 6); + memcpy(__entry->tsc, tsc, 6); ), TP_printk(NETDEV_PR_FMT ", " MAC_PR_FMT ", key type: %d, key id: %d, tsc: %pm", NETDEV_PR_ARG, MAC_PR_ARG(addr), __entry->key_type, diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 7a70a5a5671..3f565e495ac 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -2362,7 +2362,7 @@ static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) link = &xfrm_dispatch[type]; /* All operations require privileges, even GET */ - if (!netlink_net_capable(skb, CAP_NET_ADMIN)) + if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) return -EPERM; if ((type == (XFRM_MSG_GETSA - XFRM_MSG_BASE) || diff --git a/scripts/Makefile.headersinst b/scripts/Makefile.headersinst index 8ccf83056a7..182084d728c 100644 --- a/scripts/Makefile.headersinst +++ b/scripts/Makefile.headersinst @@ -47,24 +47,18 @@ header-y := $(filter-out $(generic-y), $(header-y)) all-files := $(header-y) $(genhdr-y) $(wrapper-files) output-files := $(addprefix $(installdir)/, $(all-files)) -input-files1 := $(foreach hdr, $(header-y), \ +input-files := $(foreach hdr, $(header-y), \ $(if $(wildcard $(srcdir)/$(hdr)), \ - $(wildcard $(srcdir)/$(hdr))) \ - ) -input-files1-name := $(notdir $(input-files1)) -input-files2 := $(foreach hdr, $(header-y), \ - $(if $(wildcard $(srcdir)/$(hdr)),, \ + $(wildcard $(srcdir)/$(hdr)), \ $(if $(wildcard $(oldsrcdir)/$(hdr)), \ $(wildcard $(oldsrcdir)/$(hdr)), \ $(error Missing UAPI file $(srcdir)/$(hdr))) \ - )) -input-files2-name := $(notdir $(input-files2)) -input-files3 := $(foreach hdr, $(genhdr-y), \ + )) \ + $(foreach hdr, $(genhdr-y), \ $(if $(wildcard $(gendir)/$(hdr)), \ $(wildcard $(gendir)/$(hdr)), \ $(error Missing generated UAPI file $(gendir)/$(hdr)) \ )) -input-files3-name := $(notdir $(input-files3)) # Work out what needs to be removed oldheaders := $(patsubst $(installdir)/%,%,$(wildcard $(installdir)/*.h)) @@ -78,9 +72,7 @@ printdir = $(patsubst $(INSTALL_HDR_PATH)/%/,%,$(dir $@)) quiet_cmd_install = INSTALL $(printdir) ($(words $(all-files))\ file$(if $(word 2, $(all-files)),s)) cmd_install = \ - $(CONFIG_SHELL) $< $(installdir) $(srcdir) $(input-files1-name); \ - $(CONFIG_SHELL) $< $(installdir) $(oldsrcdir) $(input-files2-name); \ - $(CONFIG_SHELL) $< $(installdir) $(gendir) 
$(input-files3-name); \ + $(CONFIG_SHELL) $< $(installdir) $(input-files); \ for F in $(wrapper-files); do \ echo "\#include <asm-generic/$$F>" > $(installdir)/$$F; \ done; \ @@ -106,7 +98,7 @@ __headersinst: $(subdirs) $(install-file) @: targets += $(install-file) -$(install-file): scripts/headers_install.sh $(input-files1) $(input-files2) $(input-files3) FORCE +$(install-file): scripts/headers_install.sh $(input-files) FORCE $(if $(unwanted),$(call cmd,remove),) $(if $(wildcard $(dir $@)),,$(shell mkdir -p $(dir $@))) $(call if_changed,install) diff --git a/scripts/headers_install.sh b/scripts/headers_install.sh index 5de5660cb70..643764f53ea 100644 --- a/scripts/headers_install.sh +++ b/scripts/headers_install.sh @@ -2,7 +2,7 @@ if [ $# -lt 1 ] then - echo "Usage: headers_install.sh OUTDIR SRCDIR [FILES...] + echo "Usage: headers_install.sh OUTDIR [FILES...] echo echo "Prepares kernel header files for use by user space, by removing" echo "all compiler.h definitions and #includes, removing any" @@ -10,7 +10,6 @@ then echo "asm/inline/volatile keywords." echo echo "OUTDIR: directory to write each userspace header FILE to." - echo "SRCDIR: source directory where files are picked." echo "FILES: list of header files to operate on." exit 1 @@ -20,8 +19,6 @@ fi OUTDIR="$1" shift -SRCDIR="$1" -shift # Iterate through files listed on command line @@ -37,7 +34,7 @@ do -e 's/(^|[^a-zA-Z0-9])__packed([^a-zA-Z0-9_]|$)/\1__attribute__((packed))\2/g' \ -e 's/(^|[ \t(])(inline|asm|volatile)([ \t(]|$)/\1__\2__\3/g' \ -e 's@#(ifndef|define|endif[ \t]*/[*])[ \t]*_UAPI@#\1 @' \ - "$SRCDIR/$i" > "$OUTDIR/$FILE.sed" || exit 1 + "$i" > "$OUTDIR/$FILE.sed" || exit 1 scripts/unifdef -U__KERNEL__ -D__EXPORTED_HEADERS__ "$OUTDIR/$FILE.sed" \ > "$OUTDIR/$FILE" [ $? -gt 1 ] && exit 1 diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c index a2c361c86e7..a4be8e112bb 100644 --- a/scripts/mod/modpost.c +++ b/scripts/mod/modpost.c @@ -573,16 +573,12 @@ static int ignore_undef_symbol(struct elf_info *info, const char *symname) if (strncmp(symname, "_restgpr_", sizeof("_restgpr_") - 1) == 0 || strncmp(symname, "_savegpr_", sizeof("_savegpr_") - 1) == 0 || strncmp(symname, "_rest32gpr_", sizeof("_rest32gpr_") - 1) == 0 || - strncmp(symname, "_save32gpr_", sizeof("_save32gpr_") - 1) == 0 || - strncmp(symname, "_restvr_", sizeof("_restvr_") - 1) == 0 || - strncmp(symname, "_savevr_", sizeof("_savevr_") - 1) == 0) + strncmp(symname, "_save32gpr_", sizeof("_save32gpr_") - 1) == 0) return 1; if (info->hdr->e_machine == EM_PPC64) /* Special register function linked on all modules during final link of .ko */ if (strncmp(symname, "_restgpr0_", sizeof("_restgpr0_") - 1) == 0 || - strncmp(symname, "_savegpr0_", sizeof("_savegpr0_") - 1) == 0 || - strncmp(symname, "_restvr_", sizeof("_restvr_") - 1) == 0 || - strncmp(symname, "_savevr_", sizeof("_savevr_") - 1) == 0) + strncmp(symname, "_savegpr0_", sizeof("_savegpr0_") - 1) == 0) return 1; /* Do not ignore this symbol */ return 0; diff --git a/scripts/package/builddeb b/scripts/package/builddeb index 3001ec5ae07..acb86507828 100644 --- a/scripts/package/builddeb +++ b/scripts/package/builddeb @@ -62,7 +62,7 @@ create_package() { fi # Create the package - dpkg-gencontrol -isp $forcearch -Vkernel:debarch="${debarch:-$(dpkg --print-architecture)}" -p$pname -P"$pdir" + dpkg-gencontrol -isp $forcearch -p$pname -P"$pdir" dpkg --build "$pdir" .. 
} @@ -252,14 +252,15 @@ mkdir -p "$destdir" (cd $objtree; tar -c -f - -T "$objtree/debian/hdrobjfiles") | (cd $destdir; tar -xf -) ln -sf "/usr/src/linux-headers-$version" "$kernel_headers_dir/lib/modules/$version/build" rm -f "$objtree/debian/hdrsrcfiles" "$objtree/debian/hdrobjfiles" +arch=$(dpkg --print-architecture) cat <<EOF >> debian/control Package: $kernel_headers_packagename Provides: linux-headers, linux-headers-2.6 -Architecture: any -Description: Linux kernel headers for $KERNELRELEASE on \${kernel:debarch} - This package provides kernel header files for $KERNELRELEASE on \${kernel:debarch} +Architecture: $arch +Description: Linux kernel headers for $KERNELRELEASE on $arch + This package provides kernel header files for $KERNELRELEASE on $arch . This is useful for people who need to build external modules EOF diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h index 49b582a225b..9d1421e63ff 100644 --- a/scripts/recordmcount.h +++ b/scripts/recordmcount.h @@ -163,11 +163,11 @@ static int mcount_adjust = 0; static int MIPS_is_fake_mcount(Elf_Rel const *rp) { - static Elf_Addr old_r_offset = ~(Elf_Addr)0; + static Elf_Addr old_r_offset; Elf_Addr current_r_offset = _w(rp->r_offset); int is_fake; - is_fake = (old_r_offset != ~(Elf_Addr)0) && + is_fake = old_r_offset && (current_r_offset - old_r_offset == MIPS_FAKEMCOUNT_OFFSET); old_r_offset = current_r_offset; diff --git a/security/commoncap.c b/security/commoncap.c index 0405522995c..5870fdc224b 100644 --- a/security/commoncap.c +++ b/security/commoncap.c @@ -432,9 +432,6 @@ int get_vfs_caps_from_disk(const struct dentry *dentry, struct cpu_vfs_cap_data cpu_caps->inheritable.cap[i] = le32_to_cpu(caps.data[i].inheritable); } - cpu_caps->permitted.cap[CAP_LAST_U32] &= CAP_LAST_U32_VALID_MASK; - cpu_caps->inheritable.cap[CAP_LAST_U32] &= CAP_LAST_U32_VALID_MASK; - return 0; } diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c index b980a6ce5c7..cdbde176218 100644 --- a/security/integrity/evm/evm_main.c +++ b/security/integrity/evm/evm_main.c @@ -275,23 +275,12 @@ static int evm_protect_xattr(struct dentry *dentry, const char *xattr_name, * @xattr_value: pointer to the new extended attribute value * @xattr_value_len: pointer to the new extended attribute value length * - * Before allowing the 'security.evm' protected xattr to be updated, - * verify the existing value is valid. As only the kernel should have - * access to the EVM encrypted key needed to calculate the HMAC, prevent - * userspace from writing HMAC value. Writing 'security.evm' requires - * requires CAP_SYS_ADMIN privileges. + * Updating 'security.evm' requires CAP_SYS_ADMIN privileges and that + * the current value is valid. */ int evm_inode_setxattr(struct dentry *dentry, const char *xattr_name, const void *xattr_value, size_t xattr_value_len) { - const struct evm_ima_xattr_data *xattr_data = xattr_value; - - if (strcmp(xattr_name, XATTR_NAME_EVM) == 0) { - if (!xattr_value_len) - return -EINVAL; - if (xattr_data->type != EVM_IMA_XATTR_DIGSIG) - return -EPERM; - } return evm_protect_xattr(dentry, xattr_name, xattr_value, xattr_value_len); } diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index 9da974c0f95..a02e0791cf1 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -24,36 +24,6 @@ static struct crypto_shash *ima_shash_tfm; -/** - * ima_kernel_read - read file content - * - * This is a function for reading file content instead of kernel_read(). 
- * It does not perform locking checks to ensure it cannot be blocked. - * It does not perform security checks because it is irrelevant for IMA. - * - */ -static int ima_kernel_read(struct file *file, loff_t offset, - char *addr, unsigned long count) -{ - mm_segment_t old_fs; - char __user *buf = addr; - ssize_t ret; - - if (!(file->f_mode & FMODE_READ)) - return -EBADF; - if (!file->f_op->read && !file->f_op->aio_read) - return -EINVAL; - - old_fs = get_fs(); - set_fs(get_ds()); - if (file->f_op->read) - ret = file->f_op->read(file, buf, count, &offset); - else - ret = do_sync_read(file, buf, count, &offset); - set_fs(old_fs); - return ret; -} - int ima_init_crypto(void) { long rc; @@ -100,7 +70,7 @@ int ima_calc_file_hash(struct file *file, char *digest) while (offset < i_size) { int rbuf_len; - rbuf_len = ima_kernel_read(file, offset, rbuf, PAGE_SIZE); + rbuf_len = kernel_read(file, offset, rbuf, PAGE_SIZE); if (rbuf_len < 0) { rc = rbuf_len; break; diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index bf54f68c169..b582c7d39ae 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -437,7 +437,6 @@ next_inode: list_entry(sbsec->isec_head.next, struct inode_security_struct, list); struct inode *inode = isec->inode; - list_del_init(&isec->list); spin_unlock(&sbsec->isec_lock); inode = igrab(inode); if (inode) { @@ -446,6 +445,7 @@ next_inode: iput(inode); } spin_lock(&sbsec->isec_lock); + list_del_init(&isec->list); goto next_inode; } spin_unlock(&sbsec->isec_lock); @@ -1361,33 +1361,15 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent isec->sid = sbsec->sid; if ((sbsec->flags & SE_SBPROC) && !S_ISLNK(inode->i_mode)) { - /* We must have a dentry to determine the label on - * procfs inodes */ - if (opt_dentry) - /* Called from d_instantiate or - * d_splice_alias. */ - dentry = dget(opt_dentry); - else - /* Called from selinux_complete_init, try to - * find a dentry. */ - dentry = d_find_alias(inode); - /* - * This can be hit on boot when a file is accessed - * before the policy is loaded. When we load policy we - * may find inodes that have no dentry on the - * sbsec->isec_head list. No reason to complain as - * these will get fixed up the next time we go through - * inode_doinit() with a dentry, before these inodes - * could be used again by userspace. 
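The EVM, IMA and SELinux hunks above all deal with metadata that userspace ultimately sees as security.* extended attributes (security.evm, security.ima, security.selinux). The sketch below merely inspects those attributes with the standard listxattr()/getxattr() calls; the default path /bin/sh is an arbitrary example and nothing here is specific to this commit.

/* show_security_xattrs.c - list the security.* extended attributes of a file.
 * Values such as security.evm are binary, so only their sizes are printed.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/bin/sh";  /* arbitrary example */
	char names[4096];
	ssize_t len = listxattr(path, names, sizeof(names));

	if (len < 0) {
		perror("listxattr");
		return 1;
	}

	/* listxattr() fills a buffer of NUL-separated attribute names. */
	for (char *name = names; name < names + len; name += strlen(name) + 1) {
		if (strncmp(name, "security.", 9) != 0)
			continue;

		char value[1024];
		ssize_t vlen = getxattr(path, name, value, sizeof(value));

		if (vlen < 0)
			printf("%s: unreadable (%s)\n", name, strerror(errno));
		else if (strcmp(name, "security.selinux") == 0)
			printf("%s = %.*s\n", name, (int)vlen, value);
		else
			printf("%s: %zd bytes\n", name, vlen);
	}
	return 0;
}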
- */ - if (!dentry) - goto out_unlock; - isec->sclass = inode_mode_to_security_class(inode->i_mode); - rc = selinux_proc_get_sid(dentry, isec->sclass, &sid); - dput(dentry); - if (rc) - goto out_unlock; - isec->sid = sid; + if (opt_dentry) { + isec->sclass = inode_mode_to_security_class(inode->i_mode); + rc = selinux_proc_get_sid(opt_dentry, + isec->sclass, + &sid); + if (rc) + goto out_unlock; + isec->sid = sid; + } } break; } diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c index 3fdf998ad05..19799931c51 100644 --- a/sound/core/compress_offload.c +++ b/sound/core/compress_offload.c @@ -133,7 +133,7 @@ static int snd_compr_open(struct inode *inode, struct file *f) kfree(data); } snd_card_unref(compr->card); - return ret; + return 0; } static int snd_compr_free(struct inode *inode, struct file *f) diff --git a/sound/core/control.c b/sound/core/control.c index 98a29b26c5f..d8aa206e8bd 100644 --- a/sound/core/control.c +++ b/sound/core/control.c @@ -289,10 +289,6 @@ static bool snd_ctl_remove_numid_conflict(struct snd_card *card, { struct snd_kcontrol *kctl; - /* Make sure that the ids assigned to the control do not wrap around */ - if (card->last_numid >= UINT_MAX - count) - card->last_numid = 0; - list_for_each_entry(kctl, &card->controls, list) { if (kctl->id.numid < card->last_numid + 1 + count && kctl->id.numid + kctl->count > card->last_numid + 1) { @@ -335,7 +331,6 @@ int snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol) { struct snd_ctl_elem_id id; unsigned int idx; - unsigned int count; int err = -EINVAL; if (! kcontrol) @@ -343,9 +338,6 @@ int snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol) if (snd_BUG_ON(!card || !kcontrol->info)) goto error; id = kcontrol->id; - if (id.index > UINT_MAX - kcontrol->count) - goto error; - down_write(&card->controls_rwsem); if (snd_ctl_find_id(card, &id)) { up_write(&card->controls_rwsem); @@ -367,9 +359,8 @@ int snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol) card->controls_count += kcontrol->count; kcontrol->id.numid = card->last_numid + 1; card->last_numid += kcontrol->count; - count = kcontrol->count; up_write(&card->controls_rwsem); - for (idx = 0; idx < count; idx++, id.index++, id.numid++) + for (idx = 0; idx < kcontrol->count; idx++, id.index++, id.numid++) snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id); return 0; @@ -398,7 +389,6 @@ int snd_ctl_replace(struct snd_card *card, struct snd_kcontrol *kcontrol, bool add_on_replace) { struct snd_ctl_elem_id id; - unsigned int count; unsigned int idx; struct snd_kcontrol *old; int ret; @@ -434,9 +424,8 @@ add: card->controls_count += kcontrol->count; kcontrol->id.numid = card->last_numid + 1; card->last_numid += kcontrol->count; - count = kcontrol->count; up_write(&card->controls_rwsem); - for (idx = 0; idx < count; idx++, id.index++, id.numid++) + for (idx = 0; idx < kcontrol->count; idx++, id.index++, id.numid++) snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id); return 0; @@ -909,9 +898,9 @@ static int snd_ctl_elem_write(struct snd_card *card, struct snd_ctl_file *file, result = kctl->put(kctl, control); } if (result > 0) { - struct snd_ctl_elem_id id = control->id; up_read(&card->controls_rwsem); - snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_VALUE, &id); + snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_VALUE, + &control->id); return 0; } } @@ -1003,7 +992,6 @@ static int snd_ctl_elem_unlock(struct snd_ctl_file *file, struct user_element { struct snd_ctl_elem_info info; - struct snd_card *card; void *elem_data; /* 
element data */ unsigned long elem_data_size; /* size of element data in bytes */ void *tlv_data; /* TLV data */ @@ -1047,9 +1035,7 @@ static int snd_ctl_elem_user_get(struct snd_kcontrol *kcontrol, { struct user_element *ue = kcontrol->private_data; - mutex_lock(&ue->card->user_ctl_lock); memcpy(&ucontrol->value, ue->elem_data, ue->elem_data_size); - mutex_unlock(&ue->card->user_ctl_lock); return 0; } @@ -1058,12 +1044,10 @@ static int snd_ctl_elem_user_put(struct snd_kcontrol *kcontrol, { int change; struct user_element *ue = kcontrol->private_data; - - mutex_lock(&ue->card->user_ctl_lock); + change = memcmp(&ucontrol->value, ue->elem_data, ue->elem_data_size) != 0; if (change) memcpy(ue->elem_data, &ucontrol->value, ue->elem_data_size); - mutex_unlock(&ue->card->user_ctl_lock); return change; } @@ -1083,32 +1067,19 @@ static int snd_ctl_elem_user_tlv(struct snd_kcontrol *kcontrol, new_data = memdup_user(tlv, size); if (IS_ERR(new_data)) return PTR_ERR(new_data); - mutex_lock(&ue->card->user_ctl_lock); change = ue->tlv_data_size != size; if (!change) change = memcmp(ue->tlv_data, new_data, size); kfree(ue->tlv_data); ue->tlv_data = new_data; ue->tlv_data_size = size; - mutex_unlock(&ue->card->user_ctl_lock); } else { - int ret = 0; - - mutex_lock(&ue->card->user_ctl_lock); - if (!ue->tlv_data_size || !ue->tlv_data) { - ret = -ENXIO; - goto err_unlock; - } - if (size < ue->tlv_data_size) { - ret = -ENOSPC; - goto err_unlock; - } + if (! ue->tlv_data_size || ! ue->tlv_data) + return -ENXIO; + if (size < ue->tlv_data_size) + return -ENOSPC; if (copy_to_user(tlv, ue->tlv_data, ue->tlv_data_size)) - ret = -EFAULT; -err_unlock: - mutex_unlock(&ue->card->user_ctl_lock); - if (ret) - return ret; + return -EFAULT; } return change; } @@ -1166,6 +1137,8 @@ static int snd_ctl_elem_add(struct snd_ctl_file *file, struct user_element *ue; int idx, err; + if (!replace && card->user_ctl_count >= MAX_USER_CONTROLS) + return -ENOMEM; if (info->count < 1) return -EINVAL; access = info->access == 0 ? SNDRV_CTL_ELEM_ACCESS_READWRITE : @@ -1174,16 +1147,21 @@ static int snd_ctl_elem_add(struct snd_ctl_file *file, SNDRV_CTL_ELEM_ACCESS_TLV_READWRITE)); info->id.numid = 0; memset(&kctl, 0, sizeof(kctl)); - - if (replace) { - err = snd_ctl_remove_user_ctl(file, &info->id); - if (err) - return err; + down_write(&card->controls_rwsem); + _kctl = snd_ctl_find_id(card, &info->id); + err = 0; + if (_kctl) { + if (replace) + err = snd_ctl_remove(card, _kctl); + else + err = -EBUSY; + } else { + if (replace) + err = -ENOENT; } - - if (card->user_ctl_count >= MAX_USER_CONTROLS) - return -ENOMEM; - + up_write(&card->controls_rwsem); + if (err < 0) + return err; memcpy(&kctl.id, &info->id, sizeof(info->id)); kctl.count = info->owner ? 
info->owner : 1; access |= SNDRV_CTL_ELEM_ACCESS_USER; @@ -1233,7 +1211,6 @@ static int snd_ctl_elem_add(struct snd_ctl_file *file, ue = kzalloc(sizeof(struct user_element) + private_size, GFP_KERNEL); if (ue == NULL) return -ENOMEM; - ue->card = card; ue->info = *info; ue->info.access = 0; ue->elem_data = (char *)ue + sizeof(*ue); @@ -1345,9 +1322,8 @@ static int snd_ctl_tlv_ioctl(struct snd_ctl_file *file, } err = kctl->tlv.c(kctl, op_flag, tlv.length, _tlv->tlv); if (err > 0) { - struct snd_ctl_elem_id id = kctl->id; up_read(&card->controls_rwsem); - snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_TLV, &id); + snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_TLV, &kctl->id); return 0; } } else { diff --git a/sound/core/info.c b/sound/core/info.c index 08070e1eefe..e79baa11b60 100644 --- a/sound/core/info.c +++ b/sound/core/info.c @@ -679,7 +679,7 @@ int snd_info_card_free(struct snd_card *card) * snd_info_get_line - read one line from the procfs buffer * @buffer: the procfs buffer * @line: the buffer to store - * @len: the max. buffer size + * @len: the max. buffer size - 1 * * Reads one line from the buffer and stores the string. * @@ -699,7 +699,7 @@ int snd_info_get_line(struct snd_info_buffer *buffer, char *line, int len) buffer->stop = 1; if (c == '\n') break; - if (len > 1) { + if (len) { len--; *line++ = c; } diff --git a/sound/core/init.c b/sound/core/init.c index 27791a58e44..6ef06400dfc 100644 --- a/sound/core/init.c +++ b/sound/core/init.c @@ -208,7 +208,6 @@ int snd_card_create(int idx, const char *xid, INIT_LIST_HEAD(&card->devices); init_rwsem(&card->controls_rwsem); rwlock_init(&card->ctl_files_rwlock); - mutex_init(&card->user_ctl_lock); INIT_LIST_HEAD(&card->controls); INIT_LIST_HEAD(&card->ctl_files); spin_lock_init(&card->files_lock); diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c index c4ac3c1e19a..af49721ba0e 100644 --- a/sound/core/pcm_compat.c +++ b/sound/core/pcm_compat.c @@ -206,8 +206,6 @@ static int snd_pcm_status_user_compat(struct snd_pcm_substream *substream, if (err < 0) return err; - if (clear_user(src, sizeof(*src))) - return -EFAULT; if (put_user(status.state, &src->state) || compat_put_timespec(&status.trigger_tstamp, &src->trigger_tstamp) || compat_put_timespec(&status.tstamp, &src->tstamp) || diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c index 8eddece217b..3284940a4af 100644 --- a/sound/core/pcm_lib.c +++ b/sound/core/pcm_lib.c @@ -1782,16 +1782,14 @@ static int snd_pcm_lib_ioctl_fifo_size(struct snd_pcm_substream *substream, { struct snd_pcm_hw_params *params = arg; snd_pcm_format_t format; - int channels; - ssize_t frame_size; + int channels, width; params->fifo_size = substream->runtime->hw.fifo_size; if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_FIFO_IN_FRAMES)) { format = params_format(params); channels = params_channels(params); - frame_size = snd_pcm_format_size(format, channels); - if (frame_size > 0) - params->fifo_size /= (unsigned)frame_size; + width = snd_pcm_format_physical_width(format); + params->fifo_size /= width * channels; } return 0; } diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c index 175dca44c97..f9281815595 100644 --- a/sound/core/pcm_native.c +++ b/sound/core/pcm_native.c @@ -3197,7 +3197,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = { #ifndef ARCH_HAS_DMA_MMAP_COHERENT /* This should be defined / handled globally! 
*/ -#if defined(CONFIG_ARM) || defined(CONFIG_ARM64) +#ifdef CONFIG_ARM #define ARCH_HAS_DMA_MMAP_COHERENT #endif #endif diff --git a/sound/pci/Kconfig b/sound/pci/Kconfig index 3397ddbdfc0..daac7c7ebe9 100644 --- a/sound/pci/Kconfig +++ b/sound/pci/Kconfig @@ -856,8 +856,8 @@ config SND_VIRTUOSO select SND_JACK if INPUT=y || INPUT=SND help Say Y here to include support for sound cards based on the - Asus AV66/AV100/AV200 chips, i.e., Xonar D1, DX, D2, D2X, DS, DSX, - Essence ST (Deluxe), and Essence STX (II). + Asus AV66/AV100/AV200 chips, i.e., Xonar D1, DX, D2, D2X, DS, + Essence ST (Deluxe), and Essence STX. Support for the HDAV1.3 (Deluxe) and HDAV1.3 Slim is experimental; for the Xense, missing. diff --git a/sound/pci/emu10k1/emu10k1_callback.c b/sound/pci/emu10k1/emu10k1_callback.c index 0a34b5f1c47..cae36597aa7 100644 --- a/sound/pci/emu10k1/emu10k1_callback.c +++ b/sound/pci/emu10k1/emu10k1_callback.c @@ -85,8 +85,6 @@ snd_emu10k1_ops_setup(struct snd_emux *emux) * get more voice for pcm * * terminate most inactive voice and give it as a pcm voice. - * - * voice_lock is already held. */ int snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw) @@ -94,10 +92,12 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw) struct snd_emux *emu; struct snd_emux_voice *vp; struct best_voice best[V_END]; + unsigned long flags; int i; emu = hw->synth; + spin_lock_irqsave(&emu->voice_lock, flags); lookup_voices(emu, hw, best, 1); /* no OFF voices */ for (i = 0; i < V_END; i++) { if (best[i].voice >= 0) { @@ -113,9 +113,11 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw) vp->emu->num_voices--; vp->ch = -1; vp->state = SNDRV_EMUX_ST_OFF; + spin_unlock_irqrestore(&emu->voice_lock, flags); return ch; } } + spin_unlock_irqrestore(&emu->voice_lock, flags); /* not found */ return -ENOMEM; diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c index e75d0ea2a4f..30e7fd27471 100644 --- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -4367,9 +4367,6 @@ static DEFINE_PCI_DEVICE_TABLE(azx_pci_ids) = { /* Lynx Point */ { PCI_DEVICE(0x8086, 0x8c20), .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, - /* 9 Series */ - { PCI_DEVICE(0x8086, 0x8ca0), - .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, /* Wellsburg */ { PCI_DEVICE(0x8086, 0x8d20), .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c index 290e09825b8..5a6527668c0 100644 --- a/sound/pci/hda/patch_analog.c +++ b/sound/pci/hda/patch_analog.c @@ -3667,7 +3667,6 @@ static int ad1884_parse_auto_config(struct hda_codec *codec) spec = codec->spec; spec->gen.mixer_nid = 0x20; - spec->gen.mixer_merge_nid = 0x21; spec->gen.beep_nid = 0x10; set_beep_amp(spec, 0x10, 0, HDA_OUTPUT); diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c index 4126f3d9edb..01fefbe29e4 100644 --- a/sound/pci/hda/patch_ca0132.c +++ b/sound/pci/hda/patch_ca0132.c @@ -4379,9 +4379,6 @@ static void ca0132_download_dsp(struct hda_codec *codec) return; /* NOP */ #endif - if (spec->dsp_state == DSP_DOWNLOAD_FAILED) - return; /* don't retry failures */ - chipio_enable_clocks(codec); spec->dsp_state = DSP_DOWNLOADING; if (!ca0132_download_dsp_images(codec)) @@ -4558,8 +4555,7 @@ static int ca0132_init(struct hda_codec *codec) struct auto_pin_cfg *cfg = &spec->autocfg; int i; - if (spec->dsp_state != DSP_DOWNLOAD_FAILED) - spec->dsp_state = DSP_DOWNLOAD_INIT; + spec->dsp_state = DSP_DOWNLOAD_INIT; spec->curr_chip_addx = INVALID_CHIP_ADDRESS; 
snd_hda_power_up(codec); @@ -4670,7 +4666,6 @@ static int patch_ca0132(struct hda_codec *codec) codec->spec = spec; spec->codec = codec; - spec->dsp_state = DSP_DOWNLOAD_INIT; spec->num_mixers = 1; spec->mixers[0] = ca0132_mixer; diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 4008034b6eb..e0bdcb3ecf0 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -175,8 +175,6 @@ static void alc_fix_pll(struct hda_codec *codec) spec->pll_coef_idx); val = snd_hda_codec_read(codec, spec->pll_nid, 0, AC_VERB_GET_PROC_COEF, 0); - if (val == -1) - return; snd_hda_codec_write(codec, spec->pll_nid, 0, AC_VERB_SET_COEF_INDEX, spec->pll_coef_idx); snd_hda_codec_write(codec, spec->pll_nid, 0, AC_VERB_SET_PROC_COEF, @@ -318,7 +316,6 @@ static void alc_auto_init_amp(struct hda_codec *codec, int type) case 0x10ec0885: case 0x10ec0887: /*case 0x10ec0889:*/ /* this causes an SPDIF problem */ - case 0x10ec0900: alc889_coef_init(codec); break; case 0x10ec0888: @@ -940,7 +937,6 @@ static int alc_codec_rename_from_preset(struct hda_codec *codec) static const struct snd_pci_quirk beep_white_list[] = { SND_PCI_QUIRK(0x1043, 0x103c, "ASUS", 1), - SND_PCI_QUIRK(0x1043, 0x115d, "ASUS", 1), SND_PCI_QUIRK(0x1043, 0x829f, "ASUS", 1), SND_PCI_QUIRK(0x1043, 0x8376, "EeePC", 1), SND_PCI_QUIRK(0x1043, 0x83ce, "EeePC", 1), @@ -1593,10 +1589,12 @@ static const struct hda_fixup alc260_fixups[] = { [ALC260_FIXUP_COEF] = { .type = HDA_FIXUP_VERBS, .v.verbs = (const struct hda_verb[]) { - { 0x1a, AC_VERB_SET_COEF_INDEX, 0x07 }, - { 0x1a, AC_VERB_SET_PROC_COEF, 0x3040 }, + { 0x20, AC_VERB_SET_COEF_INDEX, 0x07 }, + { 0x20, AC_VERB_SET_PROC_COEF, 0x3040 }, { } }, + .chained = true, + .chain_id = ALC260_FIXUP_HP_PIN_0F, }, [ALC260_FIXUP_GPIO1] = { .type = HDA_FIXUP_VERBS, @@ -1611,8 +1609,8 @@ static const struct hda_fixup alc260_fixups[] = { [ALC260_FIXUP_REPLACER] = { .type = HDA_FIXUP_VERBS, .v.verbs = (const struct hda_verb[]) { - { 0x1a, AC_VERB_SET_COEF_INDEX, 0x07 }, - { 0x1a, AC_VERB_SET_PROC_COEF, 0x3050 }, + { 0x20, AC_VERB_SET_COEF_INDEX, 0x07 }, + { 0x20, AC_VERB_SET_PROC_COEF, 0x3050 }, { } }, .chained = true, @@ -2253,7 +2251,6 @@ static int patch_alc882(struct hda_codec *codec) switch (codec->vendor_id) { case 0x10ec0882: case 0x10ec0885: - case 0x10ec0900: break; default: /* ALC883 and variants */ @@ -2681,8 +2678,6 @@ static int alc269_parse_auto_config(struct hda_codec *codec) static void alc269vb_toggle_power_output(struct hda_codec *codec, int power_up) { int val = alc_read_coef_idx(codec, 0x04); - if (val == -1) - return; if (power_up) val |= 1 << 11; else @@ -2863,9 +2858,8 @@ static void alc269_fixup_mic_mute_hook(void *private_data, int enabled) if (spec->mute_led_polarity) enabled = !enabled; - pinval = snd_hda_codec_get_pin_target(codec, spec->mute_led_nid); - pinval &= ~AC_PINCTL_VREFEN; - pinval |= enabled ? AC_PINCTL_VREF_HIZ : AC_PINCTL_VREF_80; + pinval = AC_PINCTL_IN_EN | + (enabled ? 
AC_PINCTL_VREF_HIZ : AC_PINCTL_VREF_80); if (spec->mute_led_nid) snd_hda_set_pin_ctl_cache(codec, spec->mute_led_nid, pinval); } @@ -3362,7 +3356,6 @@ enum { ALC269_FIXUP_STEREO_DMIC, ALC269_FIXUP_QUANTA_MUTE, ALC269_FIXUP_LIFEBOOK, - ALC269_FIXUP_LIFEBOOK_EXTMIC, ALC269_FIXUP_AMIC, ALC269_FIXUP_DMIC, ALC269VB_FIXUP_AMIC, @@ -3470,13 +3463,6 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC269_FIXUP_QUANTA_MUTE }, - [ALC269_FIXUP_LIFEBOOK_EXTMIC] = { - .type = HDA_FIXUP_PINS, - .v.pins = (const struct hda_pintbl[]) { - { 0x19, 0x01a1903c }, /* headset mic, with jack detect */ - { } - }, - }, [ALC269_FIXUP_AMIC] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -3662,7 +3648,6 @@ static const struct hda_fixup alc269_fixups[] = { }; static const struct snd_pci_quirk alc269_fixup_tbl[] = { - SND_PCI_QUIRK(0x1025, 0x0283, "Acer TravelMate 8371", ALC269_FIXUP_INV_DMIC), SND_PCI_QUIRK(0x1025, 0x029b, "Acer 1810TZ", ALC269_FIXUP_INV_DMIC), SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC), SND_PCI_QUIRK(0x1028, 0x05bd, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), @@ -3727,7 +3712,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK), SND_PCI_QUIRK_VENDOR(0x1025, "Acer Aspire", ALC271_FIXUP_DMIC), SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK), - SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC), SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE), SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE), SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE), @@ -3828,30 +3812,27 @@ static void alc269_fill_coef(struct hda_codec *codec) if ((alc_get_coef0(codec) & 0x00ff) == 0x017) { val = alc_read_coef_idx(codec, 0x04); /* Power up output pin */ - if (val != -1) - alc_write_coef_idx(codec, 0x04, val | (1<<11)); + alc_write_coef_idx(codec, 0x04, val | (1<<11)); } if ((alc_get_coef0(codec) & 0x00ff) == 0x018) { val = alc_read_coef_idx(codec, 0xd); - if (val != -1 && (val & 0x0c00) >> 10 != 0x1) { + if ((val & 0x0c00) >> 10 != 0x1) { /* Capless ramp up clock control */ alc_write_coef_idx(codec, 0xd, val | (1<<10)); } val = alc_read_coef_idx(codec, 0x17); - if (val != -1 && (val & 0x01c0) >> 6 != 0x4) { + if ((val & 0x01c0) >> 6 != 0x4) { /* Class D power on reset */ alc_write_coef_idx(codec, 0x17, val | (1<<7)); } } val = alc_read_coef_idx(codec, 0xd); /* Class D */ - if (val != -1) - alc_write_coef_idx(codec, 0xd, val | (1<<14)); + alc_write_coef_idx(codec, 0xd, val | (1<<14)); val = alc_read_coef_idx(codec, 0x4); /* HP */ - if (val != -1) - alc_write_coef_idx(codec, 0x4, val | (1<<11)); + alc_write_coef_idx(codec, 0x4, val | (1<<11)); } /* @@ -3919,7 +3900,6 @@ static int patch_alc269(struct hda_codec *codec) spec->codec_variant = ALC269_TYPE_ALC284; break; case 0x10ec0286: - case 0x10ec0288: spec->codec_variant = ALC269_TYPE_ALC286; break; case 0x10ec0255: @@ -4662,7 +4642,6 @@ static const struct hda_codec_preset snd_hda_preset_realtek[] = { { .id = 0x10ec0283, .name = "ALC283", .patch = patch_alc269 }, { .id = 0x10ec0284, .name = "ALC284", .patch = patch_alc269 }, { .id = 0x10ec0286, .name = "ALC286", .patch = patch_alc269 }, - { .id = 0x10ec0288, .name = "ALC288", .patch = patch_alc269 }, { .id = 0x10ec0290, .name = "ALC290", .patch = patch_alc269 }, { .id = 0x10ec0292, .name = "ALC292", .patch = patch_alc269 }, { .id = 0x10ec0861, 
.rev = 0x100340, .name = "ALC660", @@ -4682,7 +4661,6 @@ static const struct hda_codec_preset snd_hda_preset_realtek[] = { { .id = 0x10ec0670, .name = "ALC670", .patch = patch_alc662 }, { .id = 0x10ec0671, .name = "ALC671", .patch = patch_alc662 }, { .id = 0x10ec0680, .name = "ALC680", .patch = patch_alc680 }, - { .id = 0x10ec0867, .name = "ALC891", .patch = patch_alc882 }, { .id = 0x10ec0880, .name = "ALC880", .patch = patch_alc880 }, { .id = 0x10ec0882, .name = "ALC882", .patch = patch_alc882 }, { .id = 0x10ec0883, .name = "ALC883", .patch = patch_alc882 }, diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c index 5dd4c4af9c9..0c521b7752b 100644 --- a/sound/pci/hda/patch_sigmatel.c +++ b/sound/pci/hda/patch_sigmatel.c @@ -84,7 +84,6 @@ enum { STAC_DELL_EQ, STAC_ALIENWARE_M17X, STAC_92HD89XX_HP_FRONT_JACK, - STAC_92HD89XX_HP_Z1_G2_RIGHT_MIC_JACK, STAC_92HD73XX_MODELS }; @@ -539,8 +538,8 @@ static void stac_init_power_map(struct hda_codec *codec) if (snd_hda_jack_tbl_get(codec, nid)) continue; if (def_conf == AC_JACK_PORT_COMPLEX && - spec->vref_mute_led_nid != nid && - is_jack_detectable(codec, nid)) { + !(spec->vref_mute_led_nid == nid || + is_jack_detectable(codec, nid))) { snd_hda_jack_detect_enable_callback(codec, nid, STAC_PWR_EVENT, jack_update_power); @@ -1784,11 +1783,6 @@ static const struct hda_pintbl stac92hd89xx_hp_front_jack_pin_configs[] = { {} }; -static const struct hda_pintbl stac92hd89xx_hp_z1_g2_right_mic_jack_pin_configs[] = { - { 0x0e, 0x400000f0 }, - {} -}; - static void stac92hd73xx_fixup_ref(struct hda_codec *codec, const struct hda_fixup *fix, int action) { @@ -1911,10 +1905,6 @@ static const struct hda_fixup stac92hd73xx_fixups[] = { [STAC_92HD89XX_HP_FRONT_JACK] = { .type = HDA_FIXUP_PINS, .v.pins = stac92hd89xx_hp_front_jack_pin_configs, - }, - [STAC_92HD89XX_HP_Z1_G2_RIGHT_MIC_JACK] = { - .type = HDA_FIXUP_PINS, - .v.pins = stac92hd89xx_hp_z1_g2_right_mic_jack_pin_configs, } }; @@ -1975,8 +1965,6 @@ static const struct snd_pci_quirk stac92hd73xx_fixup_tbl[] = { "Alienware M17x", STAC_ALIENWARE_M17X), SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0490, "Alienware M17x R3", STAC_DELL_EQ), - SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1927, - "HP Z1 G2", STAC_92HD89XX_HP_Z1_G2_RIGHT_MIC_JACK), SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x2b17, "unknown HP", STAC_92HD89XX_HP_FRONT_JACK), {} /* terminator */ @@ -3647,19 +3635,12 @@ static int stac_parse_auto_config(struct hda_codec *codec) return err; } - return 0; -} - -static int stac_build_controls(struct hda_codec *codec) -{ - int err = snd_hda_gen_build_controls(codec); - - if (err < 0) - return err; stac_init_power_map(codec); + return 0; } + static int stac_init(struct hda_codec *codec) { struct sigmatel_spec *spec = codec->spec; @@ -3801,7 +3782,7 @@ static void stac_set_power_state(struct hda_codec *codec, hda_nid_t fg, #endif /* CONFIG_PM */ static const struct hda_codec_ops stac_patch_ops = { - .build_controls = stac_build_controls, + .build_controls = snd_hda_gen_build_controls, .build_pcms = snd_hda_gen_build_pcms, .init = stac_init, .free = stac_free, diff --git a/sound/pci/ice1712/ice1712.c b/sound/pci/ice1712/ice1712.c index e6b70e35f62..806407a3973 100644 --- a/sound/pci/ice1712/ice1712.c +++ b/sound/pci/ice1712/ice1712.c @@ -685,10 +685,9 @@ static snd_pcm_uframes_t snd_ice1712_playback_pointer(struct snd_pcm_substream * if (!(snd_ice1712_read(ice, ICE1712_IREG_PBK_CTRL) & 1)) return 0; ptr = runtime->buffer_size - inw(ice->ddma_port + 4); - ptr = bytes_to_frames(substream->runtime, ptr); if (ptr == 
runtime->buffer_size) ptr = 0; - return ptr; + return bytes_to_frames(substream->runtime, ptr); } static snd_pcm_uframes_t snd_ice1712_playback_ds_pointer(struct snd_pcm_substream *substream) @@ -705,10 +704,9 @@ static snd_pcm_uframes_t snd_ice1712_playback_ds_pointer(struct snd_pcm_substrea addr = ICE1712_DSC_ADDR0; ptr = snd_ice1712_ds_read(ice, substream->number * 2, addr) - ice->playback_con_virt_addr[substream->number]; - ptr = bytes_to_frames(substream->runtime, ptr); if (ptr == substream->runtime->buffer_size) ptr = 0; - return ptr; + return bytes_to_frames(substream->runtime, ptr); } static snd_pcm_uframes_t snd_ice1712_capture_pointer(struct snd_pcm_substream *substream) @@ -719,10 +717,9 @@ static snd_pcm_uframes_t snd_ice1712_capture_pointer(struct snd_pcm_substream *s if (!(snd_ice1712_read(ice, ICE1712_IREG_CAP_CTRL) & 1)) return 0; ptr = inl(ICEREG(ice, CONCAP_ADDR)) - ice->capture_con_virt_addr; - ptr = bytes_to_frames(substream->runtime, ptr); if (ptr == substream->runtime->buffer_size) ptr = 0; - return ptr; + return bytes_to_frames(substream->runtime, ptr); } static const struct snd_pcm_hardware snd_ice1712_playback = { @@ -1116,10 +1113,9 @@ static snd_pcm_uframes_t snd_ice1712_playback_pro_pointer(struct snd_pcm_substre if (!(inl(ICEMT(ice, PLAYBACK_CONTROL)) & ICE1712_PLAYBACK_START)) return 0; ptr = ice->playback_pro_size - (inw(ICEMT(ice, PLAYBACK_SIZE)) << 2); - ptr = bytes_to_frames(substream->runtime, ptr); if (ptr == substream->runtime->buffer_size) ptr = 0; - return ptr; + return bytes_to_frames(substream->runtime, ptr); } static snd_pcm_uframes_t snd_ice1712_capture_pro_pointer(struct snd_pcm_substream *substream) @@ -1130,10 +1126,9 @@ static snd_pcm_uframes_t snd_ice1712_capture_pro_pointer(struct snd_pcm_substrea if (!(inl(ICEMT(ice, PLAYBACK_CONTROL)) & ICE1712_CAPTURE_START_SHADOW)) return 0; ptr = ice->capture_pro_size - (inw(ICEMT(ice, CAPTURE_SIZE)) << 2); - ptr = bytes_to_frames(substream->runtime, ptr); if (ptr == substream->runtime->buffer_size) ptr = 0; - return ptr; + return bytes_to_frames(substream->runtime, ptr); } static const struct snd_pcm_hardware snd_ice1712_playback_pro = { diff --git a/sound/pci/oxygen/virtuoso.c b/sound/pci/oxygen/virtuoso.c index dbbbacfd535..64b9fda5f04 100644 --- a/sound/pci/oxygen/virtuoso.c +++ b/sound/pci/oxygen/virtuoso.c @@ -53,7 +53,6 @@ static DEFINE_PCI_DEVICE_TABLE(xonar_ids) = { { OXYGEN_PCI_SUBID(0x1043, 0x835e) }, { OXYGEN_PCI_SUBID(0x1043, 0x838e) }, { OXYGEN_PCI_SUBID(0x1043, 0x8522) }, - { OXYGEN_PCI_SUBID(0x1043, 0x85f4) }, { OXYGEN_PCI_SUBID_BROKEN_EEPROM }, { } }; diff --git a/sound/pci/oxygen/xonar_dg.c b/sound/pci/oxygen/xonar_dg.c index eb7ad770620..77acd790ea4 100644 --- a/sound/pci/oxygen/xonar_dg.c +++ b/sound/pci/oxygen/xonar_dg.c @@ -294,16 +294,6 @@ static int output_switch_put(struct snd_kcontrol *ctl, oxygen_write16_masked(chip, OXYGEN_GPIO_DATA, data->output_sel == 1 ? GPIO_HP_REAR : 0, GPIO_HP_REAR); - oxygen_write8_masked(chip, OXYGEN_PLAY_ROUTING, - data->output_sel == 0 ? 
- OXYGEN_PLAY_MUTE01 : - OXYGEN_PLAY_MUTE23 | - OXYGEN_PLAY_MUTE45 | - OXYGEN_PLAY_MUTE67, - OXYGEN_PLAY_MUTE01 | - OXYGEN_PLAY_MUTE23 | - OXYGEN_PLAY_MUTE45 | - OXYGEN_PLAY_MUTE67); } mutex_unlock(&chip->mutex); return changed; @@ -606,7 +596,7 @@ struct oxygen_model model_xonar_dg = { .model_data_size = sizeof(struct dg), .device_config = PLAYBACK_0_TO_I2S | PLAYBACK_1_TO_SPDIF | - CAPTURE_0_FROM_I2S_1 | + CAPTURE_0_FROM_I2S_2 | CAPTURE_1_FROM_SPDIF, .dac_channels_pcm = 6, .dac_channels_mixer = 0, diff --git a/sound/pci/oxygen/xonar_pcm179x.c b/sound/pci/oxygen/xonar_pcm179x.c index e0260593166..c8c7f2c9b35 100644 --- a/sound/pci/oxygen/xonar_pcm179x.c +++ b/sound/pci/oxygen/xonar_pcm179x.c @@ -100,8 +100,8 @@ */ /* - * Xonar Essence ST (Deluxe)/STX (II) - * ---------------------------------- + * Xonar Essence ST (Deluxe)/STX + * ----------------------------- * * CMI8788: * @@ -1138,14 +1138,6 @@ int get_xonar_pcm179x_model(struct oxygen *chip, chip->model.resume = xonar_stx_resume; chip->model.set_dac_params = set_pcm1796_params; break; - case 0x85f4: - chip->model = model_xonar_st; - /* TODO: daughterboard support */ - chip->model.shortname = "Xonar STX II"; - chip->model.init = xonar_stx_init; - chip->model.resume = xonar_stx_resume; - chip->model.set_dac_params = set_pcm1796_params; - break; default: return -EINVAL; } diff --git a/sound/soc/codecs/cs42l51.c b/sound/soc/codecs/cs42l51.c index e1dfebbea65..1e0fa3b5f79 100644 --- a/sound/soc/codecs/cs42l51.c +++ b/sound/soc/codecs/cs42l51.c @@ -124,8 +124,9 @@ static int cs42l51_set_chan_mix(struct snd_kcontrol *kcontrol, static const DECLARE_TLV_DB_SCALE(adc_pcm_tlv, -5150, 50, 0); static const DECLARE_TLV_DB_SCALE(tone_tlv, -1050, 150, 0); - -static const DECLARE_TLV_DB_SCALE(aout_tlv, -10200, 50, 0); +/* This is a lie. 
after -102 db, it stays at -102 */ +/* maybe a range would be better */ +static const DECLARE_TLV_DB_SCALE(aout_tlv, -11550, 50, 0); static const DECLARE_TLV_DB_SCALE(boost_tlv, 1600, 1600, 0); static const char *chan_mix[] = { @@ -140,7 +141,7 @@ static const struct soc_enum cs42l51_chan_mix = static const struct snd_kcontrol_new cs42l51_snd_controls[] = { SOC_DOUBLE_R_SX_TLV("PCM Playback Volume", CS42L51_PCMA_VOL, CS42L51_PCMB_VOL, - 0, 0x19, 0x7F, adc_pcm_tlv), + 6, 0x19, 0x7F, adc_pcm_tlv), SOC_DOUBLE_R("PCM Playback Switch", CS42L51_PCMA_VOL, CS42L51_PCMB_VOL, 7, 1, 1), SOC_DOUBLE_R_SX_TLV("Analog Playback Volume", @@ -148,7 +149,7 @@ static const struct snd_kcontrol_new cs42l51_snd_controls[] = { 0, 0x34, 0xE4, aout_tlv), SOC_DOUBLE_R_SX_TLV("ADC Mixer Volume", CS42L51_ADCA_VOL, CS42L51_ADCB_VOL, - 0, 0x19, 0x7F, adc_pcm_tlv), + 6, 0x19, 0x7F, adc_pcm_tlv), SOC_DOUBLE_R("ADC Mixer Switch", CS42L51_ADCA_VOL, CS42L51_ADCB_VOL, 7, 1, 1), SOC_SINGLE("Playback Deemphasis Switch", CS42L51_DAC_CTL, 3, 1, 0), diff --git a/sound/soc/codecs/cs42l52.c b/sound/soc/codecs/cs42l52.c index b99af6362de..ee25f325d65 100644 --- a/sound/soc/codecs/cs42l52.c +++ b/sound/soc/codecs/cs42l52.c @@ -350,7 +350,7 @@ static const char * const right_swap_text[] = { static const unsigned int swap_values[] = { 0, 1, 3 }; static const struct soc_enum adca_swap_enum = - SOC_VALUE_ENUM_SINGLE(CS42L52_ADC_PCM_MIXER, 2, 3, + SOC_VALUE_ENUM_SINGLE(CS42L52_ADC_PCM_MIXER, 2, 1, ARRAY_SIZE(left_swap_text), left_swap_text, swap_values); @@ -359,7 +359,7 @@ static const struct snd_kcontrol_new adca_mixer = SOC_DAPM_ENUM("Route", adca_swap_enum); static const struct soc_enum pcma_swap_enum = - SOC_VALUE_ENUM_SINGLE(CS42L52_ADC_PCM_MIXER, 6, 3, + SOC_VALUE_ENUM_SINGLE(CS42L52_ADC_PCM_MIXER, 6, 1, ARRAY_SIZE(left_swap_text), left_swap_text, swap_values); @@ -368,7 +368,7 @@ static const struct snd_kcontrol_new pcma_mixer = SOC_DAPM_ENUM("Route", pcma_swap_enum); static const struct soc_enum adcb_swap_enum = - SOC_VALUE_ENUM_SINGLE(CS42L52_ADC_PCM_MIXER, 0, 3, + SOC_VALUE_ENUM_SINGLE(CS42L52_ADC_PCM_MIXER, 0, 1, ARRAY_SIZE(right_swap_text), right_swap_text, swap_values); @@ -377,7 +377,7 @@ static const struct snd_kcontrol_new adcb_mixer = SOC_DAPM_ENUM("Route", adcb_swap_enum); static const struct soc_enum pcmb_swap_enum = - SOC_VALUE_ENUM_SINGLE(CS42L52_ADC_PCM_MIXER, 4, 3, + SOC_VALUE_ENUM_SINGLE(CS42L52_ADC_PCM_MIXER, 4, 1, ARRAY_SIZE(right_swap_text), right_swap_text, swap_values); diff --git a/sound/soc/codecs/cs42l73.c b/sound/soc/codecs/cs42l73.c index 934169f6f55..626805ae057 100644 --- a/sound/soc/codecs/cs42l73.c +++ b/sound/soc/codecs/cs42l73.c @@ -326,7 +326,7 @@ static const char * const cs42l73_mono_mix_texts[] = { static const unsigned int cs42l73_mono_mix_values[] = { 0, 1, 2 }; static const struct soc_enum spk_asp_enum = - SOC_VALUE_ENUM_SINGLE(CS42L73_MMIXCTL, 6, 3, + SOC_VALUE_ENUM_SINGLE(CS42L73_MMIXCTL, 6, 1, ARRAY_SIZE(cs42l73_mono_mix_texts), cs42l73_mono_mix_texts, cs42l73_mono_mix_values); @@ -344,7 +344,7 @@ static const struct snd_kcontrol_new spk_xsp_mixer = SOC_DAPM_ENUM("Route", spk_xsp_enum); static const struct soc_enum esl_asp_enum = - SOC_VALUE_ENUM_SINGLE(CS42L73_MMIXCTL, 2, 3, + SOC_VALUE_ENUM_SINGLE(CS42L73_MMIXCTL, 2, 5, ARRAY_SIZE(cs42l73_mono_mix_texts), cs42l73_mono_mix_texts, cs42l73_mono_mix_values); @@ -353,7 +353,7 @@ static const struct snd_kcontrol_new esl_asp_mixer = SOC_DAPM_ENUM("Route", esl_asp_enum); static const struct soc_enum esl_xsp_enum = - 
SOC_VALUE_ENUM_SINGLE(CS42L73_MMIXCTL, 0, 3, + SOC_VALUE_ENUM_SINGLE(CS42L73_MMIXCTL, 0, 7, ARRAY_SIZE(cs42l73_mono_mix_texts), cs42l73_mono_mix_texts, cs42l73_mono_mix_values); diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c index 1ae1f8bd9c3..e3cd86514ce 100644 --- a/sound/soc/codecs/wm8962.c +++ b/sound/soc/codecs/wm8962.c @@ -153,7 +153,6 @@ static struct reg_default wm8962_reg[] = { { 40, 0x0000 }, /* R40 - SPKOUTL volume */ { 41, 0x0000 }, /* R41 - SPKOUTR volume */ - { 49, 0x0010 }, /* R49 - Class D Control 1 */ { 51, 0x0003 }, /* R51 - Class D Control 2 */ { 56, 0x0506 }, /* R56 - Clocking 4 */ @@ -795,6 +794,7 @@ static bool wm8962_volatile_register(struct device *dev, unsigned int reg) case WM8962_ALC2: case WM8962_THERMAL_SHUTDOWN_STATUS: case WM8962_ADDITIONAL_CONTROL_4: + case WM8962_CLASS_D_CONTROL_1: case WM8962_DC_SERVO_6: case WM8962_INTERRUPT_STATUS_1: case WM8962_INTERRUPT_STATUS_2: @@ -2901,22 +2901,13 @@ static int wm8962_set_fll(struct snd_soc_codec *codec, int fll_id, int source, static int wm8962_mute(struct snd_soc_dai *dai, int mute) { struct snd_soc_codec *codec = dai->codec; - int val, ret; + int val; if (mute) - val = WM8962_DAC_MUTE | WM8962_DAC_MUTE_ALT; + val = WM8962_DAC_MUTE; else val = 0; - /** - * The DAC mute bit is mirrored in two registers, update both to keep - * the register cache consistent. - */ - ret = snd_soc_update_bits(codec, WM8962_CLASS_D_CONTROL_1, - WM8962_DAC_MUTE_ALT, val); - if (ret < 0) - return ret; - return snd_soc_update_bits(codec, WM8962_ADC_DAC_CONTROL_1, WM8962_DAC_MUTE, val); } diff --git a/sound/soc/codecs/wm8962.h b/sound/soc/codecs/wm8962.h index 910aafd09d2..a1a5d5294c1 100644 --- a/sound/soc/codecs/wm8962.h +++ b/sound/soc/codecs/wm8962.h @@ -1954,10 +1954,6 @@ #define WM8962_SPKOUTL_ENA_MASK 0x0040 /* SPKOUTL_ENA */ #define WM8962_SPKOUTL_ENA_SHIFT 6 /* SPKOUTL_ENA */ #define WM8962_SPKOUTL_ENA_WIDTH 1 /* SPKOUTL_ENA */ -#define WM8962_DAC_MUTE_ALT 0x0010 /* DAC_MUTE */ -#define WM8962_DAC_MUTE_ALT_MASK 0x0010 /* DAC_MUTE */ -#define WM8962_DAC_MUTE_ALT_SHIFT 4 /* DAC_MUTE */ -#define WM8962_DAC_MUTE_ALT_WIDTH 1 /* DAC_MUTE */ #define WM8962_SPKOUTL_PGA_MUTE 0x0002 /* SPKOUTL_PGA_MUTE */ #define WM8962_SPKOUTL_PGA_MUTE_MASK 0x0002 /* SPKOUTL_PGA_MUTE */ #define WM8962_SPKOUTL_PGA_MUTE_SHIFT 1 /* SPKOUTL_PGA_MUTE */ diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c index ca1e999026e..6dbb17d050c 100644 --- a/sound/soc/codecs/wm_adsp.c +++ b/sound/soc/codecs/wm_adsp.c @@ -1284,5 +1284,3 @@ int wm_adsp2_init(struct wm_adsp *adsp, bool dvfs) return 0; } EXPORT_SYMBOL_GPL(wm_adsp2_init); - -MODULE_LICENSE("GPL v2"); diff --git a/sound/soc/davinci/davinci-mcasp.c b/sound/soc/davinci/davinci-mcasp.c index ade9d6379c1..81490febac6 100644 --- a/sound/soc/davinci/davinci-mcasp.c +++ b/sound/soc/davinci/davinci-mcasp.c @@ -632,17 +632,8 @@ static int davinci_config_channel_size(struct davinci_audio_dev *dev, { u32 fmt; u32 tx_rotate = (word_length / 4) & 0x7; + u32 rx_rotate = (32 - word_length) / 4; u32 mask = (1ULL << word_length) - 1; - /* - * For captured data we should not rotate, inversion and masking is - * enoguh to get the data to the right position: - * Format data from bus after reverse (XRBUF) - * S16_LE: |LSB|MSB|xxx|xxx| |xxx|xxx|MSB|LSB| - * S24_3LE: |LSB|DAT|MSB|xxx| |xxx|MSB|DAT|LSB| - * S24_LE: |LSB|DAT|MSB|xxx| |xxx|MSB|DAT|LSB| - * S32_LE: |LSB|DAT|DAT|MSB| |MSB|DAT|DAT|LSB| - */ - u32 rx_rotate = 0; /* * if s BCLK-to-LRCLK ratio has been configured via the set_clkdiv() diff 
--git a/sound/soc/pxa/pxa-ssp.c b/sound/soc/pxa/pxa-ssp.c index 95a9b07bbe9..6f4dd7543e8 100644 --- a/sound/soc/pxa/pxa-ssp.c +++ b/sound/soc/pxa/pxa-ssp.c @@ -757,7 +757,9 @@ static int pxa_ssp_remove(struct snd_soc_dai *dai) SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_64000 | \ SNDRV_PCM_RATE_88200 | SNDRV_PCM_RATE_96000) -#define PXA_SSP_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE) +#define PXA_SSP_FORMATS (SNDRV_PCM_FMTBIT_S16_LE |\ + SNDRV_PCM_FMTBIT_S24_LE | \ + SNDRV_PCM_FMTBIT_S32_LE) static const struct snd_soc_dai_ops pxa_ssp_dai_ops = { .startup = pxa_ssp_startup, diff --git a/sound/soc/samsung/i2s.c b/sound/soc/samsung/i2s.c index 5c9b5e4f94c..82ebb1a5147 100644 --- a/sound/soc/samsung/i2s.c +++ b/sound/soc/samsung/i2s.c @@ -853,9 +853,11 @@ static int i2s_suspend(struct snd_soc_dai *dai) { struct i2s_dai *i2s = to_info(dai); - i2s->suspend_i2smod = readl(i2s->addr + I2SMOD); - i2s->suspend_i2scon = readl(i2s->addr + I2SCON); - i2s->suspend_i2spsr = readl(i2s->addr + I2SPSR); + if (dai->active) { + i2s->suspend_i2smod = readl(i2s->addr + I2SMOD); + i2s->suspend_i2scon = readl(i2s->addr + I2SCON); + i2s->suspend_i2spsr = readl(i2s->addr + I2SPSR); + } return 0; } @@ -864,9 +866,11 @@ static int i2s_resume(struct snd_soc_dai *dai) { struct i2s_dai *i2s = to_info(dai); - writel(i2s->suspend_i2scon, i2s->addr + I2SCON); - writel(i2s->suspend_i2smod, i2s->addr + I2SMOD); - writel(i2s->suspend_i2spsr, i2s->addr + I2SPSR); + if (dai->active) { + writel(i2s->suspend_i2scon, i2s->addr + I2SCON); + writel(i2s->suspend_i2smod, i2s->addr + I2SMOD); + writel(i2s->suspend_i2spsr, i2s->addr + I2SPSR); + } return 0; } diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c index be236da47b6..e1925aba662 100644 --- a/sound/soc/soc-pcm.c +++ b/sound/soc/soc-pcm.c @@ -1883,7 +1883,6 @@ int soc_dpcm_runtime_update(struct snd_soc_dapm_widget *widget) dpcm_be_disconnect(fe, SNDRV_PCM_STREAM_PLAYBACK); } - dpcm_path_put(&list); capture: /* skip if FE doesn't have capture capability */ if (!fe->cpu_dai->driver->capture.channels_min) diff --git a/sound/usb/card.h b/sound/usb/card.h index 82c2d80c822..bf2889a2cae 100644 --- a/sound/usb/card.h +++ b/sound/usb/card.h @@ -90,7 +90,6 @@ struct snd_usb_endpoint { unsigned int curframesize; /* current packet size in frames (for capture) */ unsigned int syncmaxsize; /* sync endpoint packet size */ unsigned int fill_max:1; /* fill max packet size always */ - unsigned int udh01_fb_quirk:1; /* corrupted feedback data */ unsigned int datainterval; /* log_2 of data packet interval */ unsigned int syncinterval; /* P for adaptive mode, 0 otherwise */ unsigned char silence_value; diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c index 308c02b2a59..659950e5b94 100644 --- a/sound/usb/endpoint.c +++ b/sound/usb/endpoint.c @@ -467,10 +467,6 @@ struct snd_usb_endpoint *snd_usb_add_endpoint(struct snd_usb_audio *chip, ep->syncinterval = 3; ep->syncmaxsize = le16_to_cpu(get_endpoint(alts, 1)->wMaxPacketSize); - - if (chip->usb_id == USB_ID(0x0644, 0x8038) /* TEAC UD-H01 */ && - ep->syncmaxsize == 4) - ep->udh01_fb_quirk = 1; } list_add_tail(&ep->list, &chip->ep_list); @@ -1079,16 +1075,7 @@ void snd_usb_handle_sync_urb(struct snd_usb_endpoint *ep, if (f == 0) return; - if (unlikely(sender->udh01_fb_quirk)) { - /* - * The TEAC UD-H01 firmware sometimes changes the feedback value - * by +/- 0x1.0000. 
- */ - if (f < ep->freqn - 0x8000) - f += 0x10000; - else if (f > ep->freqn + 0x8000) - f -= 0x10000; - } else if (unlikely(ep->freqshift == INT_MIN)) { + if (unlikely(ep->freqshift == INT_MIN)) { /* * The first time we see a feedback value, determine its format * by shifting it left or right until it matches the nominal diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c index be4db47cb2d..95558ef4a7a 100644 --- a/sound/usb/mixer.c +++ b/sound/usb/mixer.c @@ -883,7 +883,6 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval, } break; - case USB_ID(0x046d, 0x0807): /* Logitech Webcam C500 */ case USB_ID(0x046d, 0x0808): case USB_ID(0x046d, 0x0809): case USB_ID(0x046d, 0x081b): /* HD Webcam c310 */ diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c index 0d7a872dab3..5b5ff9130aa 100644 --- a/sound/usb/pcm.c +++ b/sound/usb/pcm.c @@ -1420,8 +1420,7 @@ static void retire_playback_urb(struct snd_usb_substream *subs, * on two reads of a counter updated every ms. */ if (abs(est_delay - subs->last_delay) * 1000 > runtime->rate * 2) - dev_dbg_ratelimited(&subs->dev->dev, - "delay: estimated %d, actual %d\n", + snd_printdd(KERN_DEBUG "delay: estimated %d, actual %d\n", est_delay, subs->last_delay); if (!subs->running) { diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h index d5bed1d2571..8b75bcf136f 100644 --- a/sound/usb/quirks-table.h +++ b/sound/usb/quirks-table.h @@ -386,36 +386,6 @@ YAMAHA_DEVICE(0x105d, NULL), } }, { - USB_DEVICE(0x0499, 0x1509), - .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { - /* .vendor_name = "Yamaha", */ - /* .product_name = "Steinberg UR22", */ - .ifnum = QUIRK_ANY_INTERFACE, - .type = QUIRK_COMPOSITE, - .data = (const struct snd_usb_audio_quirk[]) { - { - .ifnum = 1, - .type = QUIRK_AUDIO_STANDARD_INTERFACE - }, - { - .ifnum = 2, - .type = QUIRK_AUDIO_STANDARD_INTERFACE - }, - { - .ifnum = 3, - .type = QUIRK_MIDI_YAMAHA - }, - { - .ifnum = 4, - .type = QUIRK_IGNORE_INTERFACE - }, - { - .ifnum = -1 - } - } - } -}, -{ USB_DEVICE(0x0499, 0x150a), .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { /* .vendor_name = "Yamaha", */ diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c index c9eac3edfe4..46878daca5c 100644 --- a/tools/perf/builtin-kmem.c +++ b/tools/perf/builtin-kmem.c @@ -101,7 +101,7 @@ static int setup_cpunode_map(void) dir1 = opendir(PATH_SYS_NODE); if (!dir1) - return 0; + return -1; while ((dent1 = readdir(dir1)) != NULL) { if (dent1->d_type != DT_DIR || diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c index 63b6f8c8edf..07b1a3ad3e2 100644 --- a/tools/perf/util/evsel.c +++ b/tools/perf/util/evsel.c @@ -1514,7 +1514,7 @@ int perf_evsel__open_strerror(struct perf_evsel *evsel, switch (err) { case EPERM: case EACCES: - return scnprintf(msg, size, + return scnprintf(msg, size, "%s", "You may not have permission to collect %sstats.\n" "Consider tweaking /proc/sys/kernel/perf_event_paranoid:\n" " -1 - Not paranoid at all\n" diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile index 2cee2b79b4d..0a63658065f 100644 --- a/tools/testing/selftests/Makefile +++ b/tools/testing/selftests/Makefile @@ -4,7 +4,6 @@ TARGETS += efivarfs TARGETS += kcmp TARGETS += memory-hotplug TARGETS += mqueue -TARGETS += mount TARGETS += net TARGETS += ptrace TARGETS += vm diff --git a/tools/testing/selftests/mount/Makefile b/tools/testing/selftests/mount/Makefile deleted file mode 100644 index 337d853c2b7..00000000000 --- a/tools/testing/selftests/mount/Makefile 
+++ /dev/null @@ -1,17 +0,0 @@ -# Makefile for mount selftests. - -all: unprivileged-remount-test - -unprivileged-remount-test: unprivileged-remount-test.c - gcc -Wall -O2 unprivileged-remount-test.c -o unprivileged-remount-test - -# Allow specific tests to be selected. -test_unprivileged_remount: unprivileged-remount-test - @if [ -f /proc/self/uid_map ] ; then ./unprivileged-remount-test ; fi - -run_tests: all test_unprivileged_remount - -clean: - rm -f unprivileged-remount-test - -.PHONY: all test_unprivileged_remount diff --git a/tools/testing/selftests/mount/unprivileged-remount-test.c b/tools/testing/selftests/mount/unprivileged-remount-test.c deleted file mode 100644 index 1b3ff2fda4d..00000000000 --- a/tools/testing/selftests/mount/unprivileged-remount-test.c +++ /dev/null @@ -1,242 +0,0 @@ -#define _GNU_SOURCE -#include <sched.h> -#include <stdio.h> -#include <errno.h> -#include <string.h> -#include <sys/types.h> -#include <sys/mount.h> -#include <sys/wait.h> -#include <stdlib.h> -#include <unistd.h> -#include <fcntl.h> -#include <grp.h> -#include <stdbool.h> -#include <stdarg.h> - -#ifndef CLONE_NEWNS -# define CLONE_NEWNS 0x00020000 -#endif -#ifndef CLONE_NEWUTS -# define CLONE_NEWUTS 0x04000000 -#endif -#ifndef CLONE_NEWIPC -# define CLONE_NEWIPC 0x08000000 -#endif -#ifndef CLONE_NEWNET -# define CLONE_NEWNET 0x40000000 -#endif -#ifndef CLONE_NEWUSER -# define CLONE_NEWUSER 0x10000000 -#endif -#ifndef CLONE_NEWPID -# define CLONE_NEWPID 0x20000000 -#endif - -#ifndef MS_RELATIME -#define MS_RELATIME (1 << 21) -#endif -#ifndef MS_STRICTATIME -#define MS_STRICTATIME (1 << 24) -#endif - -static void die(char *fmt, ...) -{ - va_list ap; - va_start(ap, fmt); - vfprintf(stderr, fmt, ap); - va_end(ap); - exit(EXIT_FAILURE); -} - -static void write_file(char *filename, char *fmt, ...) 
-{ - char buf[4096]; - int fd; - ssize_t written; - int buf_len; - va_list ap; - - va_start(ap, fmt); - buf_len = vsnprintf(buf, sizeof(buf), fmt, ap); - va_end(ap); - if (buf_len < 0) { - die("vsnprintf failed: %s\n", - strerror(errno)); - } - if (buf_len >= sizeof(buf)) { - die("vsnprintf output truncated\n"); - } - - fd = open(filename, O_WRONLY); - if (fd < 0) { - die("open of %s failed: %s\n", - filename, strerror(errno)); - } - written = write(fd, buf, buf_len); - if (written != buf_len) { - if (written >= 0) { - die("short write to %s\n", filename); - } else { - die("write to %s failed: %s\n", - filename, strerror(errno)); - } - } - if (close(fd) != 0) { - die("close of %s failed: %s\n", - filename, strerror(errno)); - } -} - -static void create_and_enter_userns(void) -{ - uid_t uid; - gid_t gid; - - uid = getuid(); - gid = getgid(); - - if (unshare(CLONE_NEWUSER) !=0) { - die("unshare(CLONE_NEWUSER) failed: %s\n", - strerror(errno)); - } - - write_file("/proc/self/uid_map", "0 %d 1", uid); - write_file("/proc/self/gid_map", "0 %d 1", gid); - - if (setgroups(0, NULL) != 0) { - die("setgroups failed: %s\n", - strerror(errno)); - } - if (setgid(0) != 0) { - die ("setgid(0) failed %s\n", - strerror(errno)); - } - if (setuid(0) != 0) { - die("setuid(0) failed %s\n", - strerror(errno)); - } -} - -static -bool test_unpriv_remount(int mount_flags, int remount_flags, int invalid_flags) -{ - pid_t child; - - child = fork(); - if (child == -1) { - die("fork failed: %s\n", - strerror(errno)); - } - if (child != 0) { /* parent */ - pid_t pid; - int status; - pid = waitpid(child, &status, 0); - if (pid == -1) { - die("waitpid failed: %s\n", - strerror(errno)); - } - if (pid != child) { - die("waited for %d got %d\n", - child, pid); - } - if (!WIFEXITED(status)) { - die("child did not terminate cleanly\n"); - } - return WEXITSTATUS(status) == EXIT_SUCCESS ? 
true : false; - } - - create_and_enter_userns(); - if (unshare(CLONE_NEWNS) != 0) { - die("unshare(CLONE_NEWNS) failed: %s\n", - strerror(errno)); - } - - if (mount("testing", "/tmp", "ramfs", mount_flags, NULL) != 0) { - die("mount of /tmp failed: %s\n", - strerror(errno)); - } - - create_and_enter_userns(); - - if (unshare(CLONE_NEWNS) != 0) { - die("unshare(CLONE_NEWNS) failed: %s\n", - strerror(errno)); - } - - if (mount("/tmp", "/tmp", "none", - MS_REMOUNT | MS_BIND | remount_flags, NULL) != 0) { - /* system("cat /proc/self/mounts"); */ - die("remount of /tmp failed: %s\n", - strerror(errno)); - } - - if (mount("/tmp", "/tmp", "none", - MS_REMOUNT | MS_BIND | invalid_flags, NULL) == 0) { - /* system("cat /proc/self/mounts"); */ - die("remount of /tmp with invalid flags " - "succeeded unexpectedly\n"); - } - exit(EXIT_SUCCESS); -} - -static bool test_unpriv_remount_simple(int mount_flags) -{ - return test_unpriv_remount(mount_flags, mount_flags, 0); -} - -static bool test_unpriv_remount_atime(int mount_flags, int invalid_flags) -{ - return test_unpriv_remount(mount_flags, mount_flags, invalid_flags); -} - -int main(int argc, char **argv) -{ - if (!test_unpriv_remount_simple(MS_RDONLY|MS_NODEV)) { - die("MS_RDONLY malfunctions\n"); - } - if (!test_unpriv_remount_simple(MS_NODEV)) { - die("MS_NODEV malfunctions\n"); - } - if (!test_unpriv_remount_simple(MS_NOSUID|MS_NODEV)) { - die("MS_NOSUID malfunctions\n"); - } - if (!test_unpriv_remount_simple(MS_NOEXEC|MS_NODEV)) { - die("MS_NOEXEC malfunctions\n"); - } - if (!test_unpriv_remount_atime(MS_RELATIME|MS_NODEV, - MS_NOATIME|MS_NODEV)) - { - die("MS_RELATIME malfunctions\n"); - } - if (!test_unpriv_remount_atime(MS_STRICTATIME|MS_NODEV, - MS_NOATIME|MS_NODEV)) - { - die("MS_STRICTATIME malfunctions\n"); - } - if (!test_unpriv_remount_atime(MS_NOATIME|MS_NODEV, - MS_STRICTATIME|MS_NODEV)) - { - die("MS_RELATIME malfunctions\n"); - } - if (!test_unpriv_remount_atime(MS_RELATIME|MS_NODIRATIME|MS_NODEV, - MS_NOATIME|MS_NODEV)) - { - die("MS_RELATIME malfunctions\n"); - } - if (!test_unpriv_remount_atime(MS_STRICTATIME|MS_NODIRATIME|MS_NODEV, - MS_NOATIME|MS_NODEV)) - { - die("MS_RELATIME malfunctions\n"); - } - if (!test_unpriv_remount_atime(MS_NOATIME|MS_NODIRATIME|MS_NODEV, - MS_STRICTATIME|MS_NODEV)) - { - die("MS_RELATIME malfunctions\n"); - } - if (!test_unpriv_remount(MS_STRICTATIME|MS_NODEV, MS_NODEV, - MS_NOATIME|MS_NODEV)) - { - die("Default atime malfunctions\n"); - } - return EXIT_SUCCESS; -} diff --git a/tools/usb/ffs-test.c b/tools/usb/ffs-test.c index a87e99f37c5..fe1e66b6ef4 100644 --- a/tools/usb/ffs-test.c +++ b/tools/usb/ffs-test.c @@ -116,8 +116,8 @@ static const struct { .header = { .magic = cpu_to_le32(FUNCTIONFS_DESCRIPTORS_MAGIC), .length = cpu_to_le32(sizeof descriptors), - .fs_count = cpu_to_le32(3), - .hs_count = cpu_to_le32(3), + .fs_count = 3, + .hs_count = 3, }, .fs_descs = { .intf = { diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c index 5eaf18f90e8..2d682977ce8 100644 --- a/virt/kvm/ioapic.c +++ b/virt/kvm/ioapic.c @@ -203,9 +203,10 @@ void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap, spin_lock(&ioapic->lock); for (index = 0; index < IOAPIC_NUM_PINS; index++) { e = &ioapic->redirtbl[index]; - if (e->fields.trig_mode == IOAPIC_LEVEL_TRIG || - kvm_irq_has_notifier(ioapic->kvm, KVM_IRQCHIP_IOAPIC, index) || - index == RTC_GSI) { + if (!e->fields.mask && + (e->fields.trig_mode == IOAPIC_LEVEL_TRIG || + kvm_irq_has_notifier(ioapic->kvm, KVM_IRQCHIP_IOAPIC, + index) || index == RTC_GSI)) { if 
(kvm_apic_match_dest(vcpu, NULL, 0, e->fields.dest_id, e->fields.dest_mode)) { __set_bit(e->fields.vector, @@ -305,7 +306,7 @@ static int ioapic_deliver(struct kvm_ioapic *ioapic, int irq, bool line_status) BUG_ON(ioapic->rtc_status.pending_eoi != 0); ret = kvm_irq_delivery_to_apic(ioapic->kvm, NULL, &irqe, ioapic->rtc_status.dest_map); - ioapic->rtc_status.pending_eoi = (ret < 0 ? 0 : ret); + ioapic->rtc_status.pending_eoi = ret; } else ret = kvm_irq_delivery_to_apic(ioapic->kvm, NULL, &irqe, NULL); diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c index a650aa48c78..c329c8fc57f 100644 --- a/virt/kvm/iommu.c +++ b/virt/kvm/iommu.c @@ -43,13 +43,13 @@ static void kvm_iommu_put_pages(struct kvm *kvm, gfn_t base_gfn, unsigned long npages); static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn, - unsigned long npages) + unsigned long size) { gfn_t end_gfn; pfn_t pfn; pfn = gfn_to_pfn_memslot(slot, gfn); - end_gfn = gfn + npages; + end_gfn = gfn + (size >> PAGE_SHIFT); gfn += 1; if (is_error_noslot_pfn(pfn)) @@ -61,14 +61,6 @@ static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn, return pfn; } -static void kvm_unpin_pages(struct kvm *kvm, pfn_t pfn, unsigned long npages) -{ - unsigned long i; - - for (i = 0; i < npages; ++i) - kvm_release_pfn_clean(pfn + i); -} - int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot) { gfn_t gfn, end_gfn; @@ -119,7 +111,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot) * Pin all pages we are about to map in memory. This is * important because we unmap and unpin in 4kb steps later. */ - pfn = kvm_pin_pages(slot, gfn, page_size >> PAGE_SHIFT); + pfn = kvm_pin_pages(slot, gfn, page_size); if (is_error_noslot_pfn(pfn)) { gfn += 1; continue; @@ -131,7 +123,6 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot) if (r) { printk(KERN_ERR "kvm_iommu_map_address:" "iommu failed to map pfn=%llx\n", pfn); - kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT); goto unmap_pages; } @@ -143,7 +134,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot) return 0; unmap_pages: - kvm_iommu_put_pages(kvm, slot->base_gfn, gfn - slot->base_gfn); + kvm_iommu_put_pages(kvm, slot->base_gfn, gfn); return r; } @@ -281,6 +272,14 @@ out_unlock: return r; } +static void kvm_unpin_pages(struct kvm *kvm, pfn_t pfn, unsigned long npages) +{ + unsigned long i; + + for (i = 0; i < npages; ++i) + kvm_release_pfn_clean(pfn + i); +} + static void kvm_iommu_put_pages(struct kvm *kvm, gfn_t base_gfn, unsigned long npages) { diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index a17f190be58..eb99458f5b6 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -52,7 +52,6 @@ #include <asm/processor.h> #include <asm/io.h> -#include <asm/ioctl.h> #include <asm/uaccess.h> #include <asm/pgtable.h> @@ -106,12 +105,12 @@ bool kvm_is_mmio_pfn(pfn_t pfn) if (pfn_valid(pfn)) { int reserved; struct page *tail = pfn_to_page(pfn); - struct page *head = compound_head(tail); + struct page *head = compound_trans_head(tail); reserved = PageReserved(head); if (head != tail) { /* * "head" is not a dangling pointer - * (compound_head takes care of that) + * (compound_trans_head takes care of that) * but the hugepage may have been splitted * from under us (and we may not hold a * reference count on the head page so it can @@ -1982,9 +1981,6 @@ static long kvm_vcpu_ioctl(struct file *filp, if (vcpu->kvm->mm != current->mm) return -EIO; - if (unlikely(_IOC_TYPE(ioctl) != KVMIO)) - return -EINVAL; - #if 
defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_MIPS) /* * Special cases: vcpu ioctls that are asynchronous to vcpu execution, |
