path: root/include
Commit message | Author | Date | Files | Lines
* ip: make IP identifiers less predictable (HEAD, kitkat) | Eric Dumazet | 2014-08-17 | 1 | -10/+1
[ Upstream commit 04ca6973f7c1a0d8537f2d9906a0cf8e69886d75 ]

In "Counting Packets Sent Between Arbitrary Internet Hosts", Jeffrey and Jedidiah describe ways of exploiting Linux IP identifier generation to infer whether two machines are exchanging packets.

With commit 73f156a6e8c1 ("inetpeer: get rid of ip_id_count"), we changed IP ID generation, but this does not really prevent this side-channel technique.

This patch adds a random amount of perturbation so that IP identifiers for a given destination [1] are no longer monotonically increasing after an idle period.

Note that prandom_u32_max(1) returns 0, so if the generator is used at most once per jiffy, this patch inserts no hole in the ID sequence and does not increase collision probability.

This is jiffies based, so in the worst case (HZ=1000) the ID can roll over after ~65 seconds of idle time, which should be fine.

We also change the hash used in __ip_select_ident() to hash not only on daddr, but also on saddr and protocol, so that ICMP probes cannot be used to infer information for other protocols. For IPv6, saddr is added into the hash as well, but not nexthdr.

If I ping the patched target, we can see IDs are now hard to predict:

21:57:11.008086 IP (...) A > target: ICMP echo request, seq 1, length 64
21:57:11.010752 IP (... id 2081 ...) target > A: ICMP echo reply, seq 1, length 64
21:57:12.013133 IP (...) A > target: ICMP echo request, seq 2, length 64
21:57:12.015737 IP (... id 3039 ...) target > A: ICMP echo reply, seq 2, length 64
21:57:13.016580 IP (...) A > target: ICMP echo request, seq 3, length 64
21:57:13.019251 IP (... id 3437 ...) target > A: ICMP echo reply, seq 3, length 64

[1] TCP sessions use a per-flow ID generator that is not changed by this patch.

Change-Id: Iee3e3b6ded85b460dfd7edac005824f636a26350
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Jeffrey Knockel <jeffk@cs.unm.edu>
Reported-by: Jedidiah R. Crandall <crandall@cs.unm.edu>
Cc: Willy Tarreau <w@1wt.eu>
Cc: Hannes Frederic Sowa <hannes@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
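[Editor's note] The perturbation described above can be modelled outside the kernel. The user-space sketch below keeps an array of counters plus last-use timestamps and, when a bucket has been idle, skips its counter ahead by a random amount bounded by the idle time. time()/rand() stand in for jiffies/prandom_u32_max(), and all names (reserve_ids, id_slots, NSLOTS) are illustrative, not the kernel's symbols; the real code is also lock-free (cmpxchg), which this single-threaded model ignores.

/* User-space model of the perturbed IP-ID generator described above. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NSLOTS 2048                     /* bucket count is an assumption */

static uint32_t id_slots[NSLOTS];       /* per-bucket ID counters */
static uint32_t tstamp_slots[NSLOTS];   /* last time each bucket was used */

/* Bound a random value to [0, bound); models prandom_u32_max() behaviour. */
static uint32_t rand_below(uint32_t bound)
{
	return bound ? (uint32_t)rand() % bound : 0;
}

/* Reserve 'segs' consecutive IDs for the flow hashed to 'hash'. */
static uint32_t reserve_ids(uint32_t hash, uint32_t segs)
{
	uint32_t slot = hash % NSLOTS;
	uint32_t now = (uint32_t)time(NULL);   /* stand-in for jiffies */
	uint32_t delta = 0;

	/* If the bucket sat idle, skip ahead by a random amount bounded by
	 * the idle time, so the next ID is no longer predictable. */
	if (tstamp_slots[slot] != now) {
		delta = rand_below(now - tstamp_slots[slot]);
		tstamp_slots[slot] = now;
	}
	id_slots[slot] += segs + delta;
	return id_slots[slot] - segs;
}

int main(void)
{
	srand((unsigned)time(NULL));
	/* Same destination bucket, two sends in a row. */
	printf("id1=%u\n", (unsigned)reserve_ids(0xdeadbeefu, 1));
	printf("id2=%u\n", (unsigned)reserve_ids(0xdeadbeefu, 1));
	return 0;
}

Because the random skip is bounded by the elapsed idle time, a generator used at least once per tick advances exactly as before, which is the property the commit message calls out about prandom_u32_max(1) returning 0.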
* inetpeer: get rid of ip_id_count | Eric Dumazet | 2014-08-17 | 5 | -29/+47
[ Upstream commit 73f156a6e8c1074ac6327e0abd1169e95eb66463 ]

Ideally, we would generate IP IDs using a per-destination-IP generator. Linux kernels used the inet_peer cache for this purpose, but this had a huge cost on servers with MTU discovery disabled:

1) each inet_peer struct consumes 192 bytes
2) the inetpeer cache uses a binary tree of inet_peer structs, with a nominal size of ~66000 elements under load
3) lookups in this tree hit a lot of cache lines, as tree depth is about 20
4) if the server deals with many TCP flows, there is a high probability of not finding the inet_peer, allocating a fresh one, and inserting it in the tree with the same initial ip_id_count (cf. secure_ip_id())
5) we garbage collect inet_peer aggressively

IP ID generation does not have to be 'perfect'. The goal is to avoid duplicates over a short period of time, so that reassembly units have a chance to complete reassembly of fragments belonging to one message before receiving other fragments with a recycled ID.

We simply use an array of generators, and a Jenkins hash using the dst IP as a key.

ipv6_select_ident() is put back into net/ipv6/ip6_output.c where it belongs (it is only used from this file).

secure_ip_id() and secure_ipv6_id() are no longer needed.

Rename ip_select_ident_more() to ip_select_ident_segs() to avoid unnecessary decrement/increment of the number of segments.

Conflicts:
	drivers/net/ppp/pptp.c
	include/net/inetpeer.h
	include/net/ip.h
	include/net/ipip.h
	include/net/ipv6.h
	net/ipv4/igmp.c
	net/ipv4/inetpeer.c
	net/ipv4/ip_output.c
	net/ipv4/ipmr.c
	net/ipv4/raw.c
	net/ipv4/route.c
	net/ipv4/xfrm4_mode_tunnel.c
	net/ipv6/ip6_output.c
	net/netfilter/ipvs/ip_vs_xmit.c

Change-Id: I544360c7c781b61c31544b80db2ecaa720f24aea
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
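[Editor's note] The "array of generators keyed by a hash of the destination" scheme is small enough to show end to end. The sketch below uses Bob Jenkins' one-at-a-time hash purely as a stand-in for the kernel's jhash, and the names (N_GENERATORS, ip_id_bucket, next_ip_id) are illustrative; the point is that per-destination state shrinks to a fixed array indexed by a hash, with no tree lookup, allocation, or garbage collection.

/* Fixed array of ID generators selected by a hash of the destination. */
#include <stdint.h>
#include <stdio.h>

#define N_GENERATORS 2048            /* bucket count chosen for illustration */

static uint16_t ip_id_bucket[N_GENERATORS];

/* Jenkins one-at-a-time hash over the 4 bytes of an IPv4 destination. */
static uint32_t pick_bucket(uint32_t daddr)
{
	uint32_t h = 0;

	for (int i = 0; i < 4; i++) {
		h += (daddr >> (8 * i)) & 0xff;
		h += h << 10;
		h ^= h >> 6;
	}
	h += h << 3;
	h ^= h >> 11;
	h += h << 15;
	return h % N_GENERATORS;
}

/* Next IP ID for a packet to 'daddr': per-bucket, not per-peer, state. */
static uint16_t next_ip_id(uint32_t daddr)
{
	return ++ip_id_bucket[pick_bucket(daddr)];
}

int main(void)
{
	uint32_t a = 0xc0a80001u;   /* 192.168.0.1 */
	uint32_t b = 0x08080808u;   /* 8.8.8.8 */
	unsigned a1 = next_ip_id(a);
	unsigned a2 = next_ip_id(a);

	/* IDs to the same destination advance; other destinations usually
	 * land in different buckets and advance independently. */
	printf("to a: %u %u\n", a1, a2);
	printf("to b: %u\n", (unsigned)next_ip_id(b));
	return 0;
}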
* ipv6: use a stronger hash for tcp | Eric Dumazet | 2014-08-17 | 3 | -3/+17
It looks like it is possible to open thousands of TCP IPv6 sessions on a server, all landing in a single slot of the TCP hash table. Incoming packets then have to look up sockets in a very long list.

We should hash all bits of the foreign IPv6 address, using a salt and a hash mix, not a simple XOR. inet6_ehashfn() can also mix in the ports separately, instead of XORing them.

Change-Id: Ibd708e9d1d9d2bdb7bea6488c259edc576b98780
Reported-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Iliyan Malchev <malchev@google.com>
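[Editor's note] The difference between XOR folding and a "salt and hash mix" can be sketched in a few lines. The code below feeds every word of the foreign address, together with a secret salt, through an avalanching mixer (the MurmurHash3 finalizer, used here only as an example mixer), and gives the ports their own mixing round. mix32(), ehash_salt and ipv6_ehash() are illustrative names, not the kernel's inet6_ehashfn() implementation.

/* Salted, avalanching hash of (foreign addr, local port, foreign port). */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint32_t ehash_salt;   /* would be picked randomly once at boot */

/* MurmurHash3 32-bit finalizer: every input bit affects every output bit. */
static uint32_t mix32(uint32_t h)
{
	h ^= h >> 16;
	h *= 0x85ebca6b;
	h ^= h >> 13;
	h *= 0xc2b2ae35;
	h ^= h >> 16;
	return h;
}

/* Hash all 128 bits of the foreign address, then mix the ports in with a
 * further round instead of a plain XOR. */
static uint32_t ipv6_ehash(const uint32_t faddr[4], uint16_t lport, uint16_t fport)
{
	uint32_t h = ehash_salt;

	for (int i = 0; i < 4; i++)
		h = mix32(h ^ faddr[i]);
	return mix32(h ^ ((uint32_t)lport << 16 | fport));
}

int main(void)
{
	uint32_t a1[4] = { 0x20010db8u, 0, 0, 1 };
	uint32_t a2[4] = { 0x20010db8u, 0, 0, 2 };

	ehash_salt = (uint32_t)time(NULL);   /* stand-in for a random salt */

	/* With plain XOR folding an attacker can craft many addresses whose
	 * words XOR to the same value and so pile into one hash slot; here
	 * the slot depends on a secret salt and on every bit of the address. */
	printf("%08x\n", (unsigned)ipv6_ehash(a1, 443, 50000));
	printf("%08x\n", (unsigned)ipv6_ehash(a2, 443, 50000));
	return 0;
}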
* m7: Linux 3.4.102 | Greg Kroah-Hartman | 2014-08-10 | 2 | -19/+5
The following commit from upstream was excluded:
ipv6: reallocate addrconf router for ipv6 address when lo device up (commit 33d99113b1102c2d2f8603b9ba72d89d915c13f5 upstream)

Change-Id: If57da63d121b62dc05f5de3f3055a7c62f81efa6
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: doc <doc.divxm@gmail.com>
* m7: Linux 3.4.101 | Greg Kroah-Hartman | 2014-08-03 | 4 | -3/+5
Change-Id: I735c16d155ef6796cdf0c565f30902662ba2c7fe
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: doc <doc.divxm@gmail.com>
* f2fs: Update to proper 3.4 support | Evan McClain | 2014-07-24 | 1 | -5/+2
http://git.kernel.org/cgit/linux/kernel/git/jaegeuk/f2fs.git/log/?h=linux-3.4

Conflicts:
	fs/f2fs/f2fs.h

Change-Id: Ica17871e847d4807c7ef24ea77dd4bff5451f5c7
* m7: f2fs Upstream Update. | doc | 2014-07-24 | 2 | -3/+153
Squashed commit from https://github.com/CyanogenMod/android_kernel_htc_m7/commits/cm-11.0

The commits between
1. https://github.com/CyanogenMod/android_kernel_htc_m7/commit/ad9a4fa2d2839b4504366ae208123738077325e0 and
2. https://github.com/CyanogenMod/android_kernel_htc_m7/commit/d110df41d3b7e61bb93107d85c2447e061e850aa
were included.

Change-Id: Ia68803e8a375650201fe547eabd5968dd9eef7f1
* f2fs: Update to version in 3.14 | Changman Lee | 2014-07-24 | 2 | -25/+89
f2fs: introduce __find_rev_next(_zero)_bit

When f2fs_set_bit is used, MSB and LSB are reversed within a byte; in that case we can use __find_rev_next_bit or __find_rev_next_zero_bit.
Signed-off-by: Changman Lee <cm224.lee@samsung.com>
[Jaegeuk Kim: change the function names]
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>

f2fs: improve searching speed of __next_free_blkoff

Finding a zero bit in the result of an OR between ckpt_valid_map and cur_valid_map is faster than finding a zero bit in each bitmap separately.
Signed-off-by: Changman Lee <cm224.lee@samsung.com>
[Jaegeuk Kim: adjust changed function name]
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>

f2fs: add a slab cache entry for small discards

This patch adds a slab cache entry for small discards.
Each entry consists of: struct discard_entry { struct list_head list; /* list head */ block_t blkaddr; /* block address to be discarded */ int len; /* # of consecutive blocks of the discard */ }; Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add key functions for small discards This patch adds key functions to activate the small discard feature. Note that this procedure is conducted during the checkpoint only. In flush_sit_entries(), when a new dirty sit entry is flushed, f2fs calls add_discard_addrs() which searches candidates to be discarded. The candidates should be marked *invalidated* and also previous checkpoint recognizes it as *valid*. At the end of a checkpoint procedure, f2fs throws discards based on the discard entry list. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add a sysfs entry to control max_discards If frequent small discards are issued to the device, the performance would be degraded significantly. So, this patch adds a sysfs entry to control the number of discards to be issued during a checkpoint procedure. By default, f2fs does not issue any small discards, which means max_discards is zero. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: introduce f2fs_issue_discard() to clean up Change log from v1: o fix 32bit drops reported by Dan Carpenter This patch adds f2fs_issue_discard() to clean up blkdev_issue_discard() flows. Dan carpenter reported: "block_t is a 32 bit type and sector_t is a 64 bit type. The upper 32 bits of the sector_t are not used because the shift will wrap." Bug-Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add a tracepoint for f2fs_issue_discard This patch adds a tracepoint for f2fs_issue_discard. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: use f2fs_put_page to release page for uniform style We should use f2fs_put_page to release page for uniform style of f2fs code. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: clean up the do_submit_bio flow This patch introduces PAGE_TYPE_OF_BIO() and cleans up do_submit_bio() with it. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: use sbi->write_mutex for write bios This patch removes an unnecessary semaphore (i.e., sbi->bio_sem). There is no reason to use the semaphore when f2fs submits read and write IOs. Instead, let's use a write mutex and cover the sbi->bio[] by the lock. Change log from v1: o split write_mutex suggested by Chao Yu Chao described, "All DATA/NODE/META bio buffers in superblock is protected by 'sbi->write_mutex', but each bio buffer area is independent, So we should split write_mutex to three for DATA/NODE/META." Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: disable the extent cache ops on high fragmented files The f2fs manages an extent cache to search a number of consecutive data blocks very quickly. However it conducts unnecessary cache operations if the file is highly fragmented with no valid extent cache. In such the case, we don't need to handle the extent cache, but just can disable the cache facility. Nevertheless, this patch gives one more chance to enable the extent cache. For example, 1. create a file 2. write data sequentially which produces a large valid extent cache 3. update some data, resulting in a fragmented extent 4. if the fragmented extent is too small, then drop extent cache 5. close the file 6. open the file again 7. 
give another chance to make a new extent cache 8. write data sequentially again which creates another big extent cache. ... Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: introduce a bio array for per-page write bios The f2fs has three bio types, NODE, DATA, and META, and manages some data structures per each bio types. The codes are a little bit messy, thus, this patch introduces a bio array which groups individual data structures as follows. struct f2fs_bio_info { struct bio *bio; /* bios to merge */ sector_t last_block_in_bio; /* last block number */ struct mutex io_mutex; /* mutex for bio */ }; struct f2fs_sb_info { ... struct f2fs_bio_info write_io[NR_PAGE_TYPE]; /* for write bios */ ... }; The code changes from this new data structure are trivial. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: convert remove_inode_page to void Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: convert dev_valid_block_count to void Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: convert inc/dec_valid_node_count to inc/dec one count Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: simplify write_orphan_inodes for better readable Simplify write_orphan_inodes for better readable. Because we hold the orphan_inode_mutex, so it's safe to use list_for_each_entry instead of list_for_each_safe. Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: move the list_head initialization into the lock protection region Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add a new function to support for merging contiguous read For better read performance, we add a new function to support for merging contiguous read as the one for write. v1-->v2: o add declarations here as Gu Zheng suggested. o use new structure f2fs_bio_info introduced by Jaegeuk Kim. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Acked-by: Gu Zheng <guz.fnst@cn.fujitsu.com> f2fs: merge read IOs at ra_nat_pages() Change log from v1: o add mark_page_accessed() not to reclaim the nat pages. This patch changes the policy of submitting read bios at ra_nat_pages. Previously, f2fs submits small read bios with block plugging. But, with this patch, f2fs itself merges read bios first and then submits a large bio, which can reduce the bio handling overheads. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: simplify IS_DATASEG and IS_NODESEG macro It is not efficient comparing each segment type to find node or data. Signed-off-by: Changman Lee <cm224.lee@samsung.com> [Jaegeuk Kim: remove unnecessary white spaces] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: adds a tracepoint for submit_read_page This patch adds a tracepoint for submit_read_page. Signed-off-by: Chao Yu <chao2.yu@samsung.com> [Jaegeuk Kim: integrate tracepoints of f2fs_submit_read(_write)_page] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: adds a tracepoint for f2fs_submit_read_bio This patch adds a tracepoint for f2fs_submit_read_bio. 
Signed-off-by: Chao Yu <chao2.yu@samsung.com> [Jaegeuk Kim: integrate tracepoints of f2fs_submit_read(_write)_bio] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: read contiguous sit entry pages by merging for mount performance Previously we read sit entries page one by one, this method lost the chance of reading contiguous page together. So we read pages as contiguous as possible for better mount performance. change log: o merge judgements/use 'Continue' or 'Break' instead of 'Goto' as Gu Zheng suggested. o add mark_page_accessed() before release page to delay VM reclaiming. o remove '*order' for simplification of function as Jaegeuk Kim suggested. Signed-off-by: Chao Yu <chao2.yu@samsung.com> [Jaegeuk Kim: fix a bug on the block address calculation] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: avoid lock debugging overhead If CONFIG_F2FS_CHECK_FS is unset, we don't need to add any debugging overhead. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add a new function: f2fs_reserve_block() Add the function f2fs_reserve_block() to easily reserve new blocks, and use it to clean up more codes. Signed-off-by: Huajun Li <huajun.li@intel.com> Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> Signed-off-by: Weihong Xu <weihong.xu@intel.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add detailed information of bio types in the tracepoints This patch inserts information of bio types in more detail. So, we can now see REQ_META and REQ_PRIO too. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: send REQ_META or REQ_PRIO when reading meta area Let's send REQ_META or REQ_PRIO when reading meta area such as NAT/SIT etc. Signed-off-by: Changman Lee <cm224.lee@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add flags and helpers to support inline data Add new inode flags F2FS_INLINE_DATA and FI_INLINE_DATA to indicate whether the inode has inline data. Inline data makes use of inode block's data indices region to save small file. Currently there are 923 data indices in an inode block. Since inline xattr has made use of the last 50 indices to save its data, there are 873 indices left which can be used for inline data. When FI_INLINE_DATA is set, the layout of inode block's indices region is like below: +-----------------+ | | Reserved. reserve_new_block() will make use of | i_addr[0] | i_addr[0] when we need to reserve a new data block | | to convert inline data into regular one's. |-----------------| | | Used by inline data. A file whose size is less than | i_addr[1~872] | 3488 bytes(~3.4k) and doesn't reserve extra | | blocks by fallocate() can be saved here. |-----------------| | | | i_addr[873~922] | Reserved for inline xattr | | +-----------------+ Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> Signed-off-by: Huajun Li <huajun.li@intel.com> Signed-off-by: Weihong Xu <weihong.xu@intel.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add a new mount option: inline_data Add a mount option: inline_data. If the mount option is set, data of New created small files can be stored in their inode. Signed-off-by: Huajun Li <huajun.li@intel.com> Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> Signed-off-by: Weihong Xu <weihong.xu@intel.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: remove unnecessary return value Let's remove the unnecessary return value. 
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: fix a potential out of range issue Fix a potential out of range issue introduced by commit: 22fb72225a f2fs: simplify write_orphan_inodes for better readable Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: bug fix on bit overflow from 32bits to 64bits This patch fixes some bit overflows by the shift operations. Dan Carpenter reported potential bugs on bit overflows as follows. fs/f2fs/segment.c:910 submit_write_page() warn: should 'blk_addr << ((sbi)->log_blocksize - 9)' be a 64 bit type? fs/f2fs/checkpoint.c:429 get_valid_checkpoint() warn: should '1 << ()' be a 64 bit type? fs/f2fs/data.c:408 f2fs_readpage() warn: should 'blk_addr << ((sbi)->log_blocksize - 9)' be a 64 bit type? fs/f2fs/data.c:457 submit_read_page() warn: should 'blk_addr << ((sbi)->log_blocksize - 9)' be a 64 bit type? fs/f2fs/data.c:525 get_data_block_ro() warn: should 'i << blkbits' be a 64 bit type? Bug-Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: remove unnecessary condition checks This patch removes the unnecessary condition checks on: fs/f2fs/gc.c:667 do_garbage_collect() warn: 'sum_page' isn't an ERR_PTR fs/f2fs/f2fs.h:795 f2fs_put_page() warn: 'page' isn't an ERR_PTR Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: remove unneeded code in punch_hole Because FALLOC_FL_PUNCH_HOLE flag must be ORed with FALLOC_FL_KEEP_SIZE in fallocate, so we could remove the useless 'keep size' branch code which will never be excuted in punch_hole. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Fan Li <fanofcode.li@samsung.com> [Jaegeuk Kim: remove an unnecessary parameter togather] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: avoid to calculate incorrect max orphan number Because we will write node summaries when do_checkpoint with umount flag, our number of max orphan blocks should minus NR_CURSEG_NODE_TYPE additional. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Shu Tan <shu.tan@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: correct type of wait in struct bio_private The void *wait in bio_private is used for waiting completion of checkpoint bio. So we don't need to use its type as void, but declare it as completion type. Signed-off-by: Chao Yu <chao2.yu@samsung.com> [Jaegeuk Kim: add description] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: use true and false for boolean variable The inode_page_locked should be a boolean variable. struct dnode_of_data { struct inode *inode; /* vfs inode pointer */ struct page *inode_page; /* its inode page, NULL is possible */ struct page *node_page; /* cached direct node page */ nid_t nid; /* node id of the direct node block */ unsigned int ofs_in_node; /* data offset in the node page */ ==> bool inode_page_locked; /* inode page is locked or not */ block_t data_blkaddr; /* block address of the node block */ }; Signed-off-by: Chao Yu <chao2.yu@samsung.com> [Jaegeuk Kim: add description] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: check return value of f2fs_readpage in find_data_page We should return error if we do not get an updated page in find_date_page when f2fs_readpage failed. 
Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: convert recover_orphan_inodes to void The recover_orphan_inodes() returns no error all the time, so we don't need to check its errors. Signed-off-by: Chao Yu <chao2.yu@samsung.com> [Jaegeuk Kim: add description] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: remove the own bi_private allocation Previously f2fs allocates its own bi_private data structure all the time even though we don't use it. But, can we remove this bi_private allocation? This patch removes such the additional bi_private allocation. 1. Retrieve f2fs_sb_info from its page->mapping->host->i_sb. - This removes the usecases of bi_private in end_io. 2. Use bi_private only when we really need it. - The bi_private is used only when the checkpoint procedure is conducted. - When conducting the checkpoint, f2fs submits a META_FLUSH bio to wait its bio completion. - Since we have no dependancies to remove bi_private now, let's just use bi_private pointer as the completion pointer. Reviewed-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: refactor bio-related operations This patch integrates redundant bio operations on read and write IOs. 1. Move bio-related codes to the top of data.c. 2. Replace f2fs_submit_bio with f2fs_submit_merged_bio, which handles read bios additionally. 3. Introduce __submit_merged_bio to submit the merged bio. 4. Change f2fs_readpage to f2fs_submit_page_bio. 5. Introduce f2fs_submit_page_mbio to integrate previous submit_read_page and submit_write_page. Reviewed-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Reviewed-by: Chao Yu <chao2.yu@samsung.com > Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: readahead contiguous pages for restore_node_summary If cp has no CP_UMOUNT_FLAG, we will read all pages in whole node segment one by one, it makes low performance. So let's merge contiguous pages and readahead for better performance. Signed-off-by: Chao Yu <chao2.yu@samsung.com> [Jaegeuk Kim: adjust the new bio operations] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: remove debufs dir if debugfs_create_file() failed When debugfs_create_file() failed in f2fs_create_root_stats(), debugfs_root should be remove. Signed-off-by: Younger Liu <liuyiyang@hisense.com> Cc: Younger Liu <younger.liucn@gmail.com> Cc: Jaegeuk Kim <jaegeuk.kim@samsung.com> Reviewed-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: replace the debugfs_root with f2fs_debugfs_root This minor change for the naming conventions of debugfs_root to avoid any possible conflicts to the other filesystem. Signed-off-by: Younger Liu <younger.liucn@gmail.com> Cc: Younger Liu <younger.liucn@gmail.com> Cc: Jaegeuk Kim <jaegeuk.kim@samsung.com> [Jaegeuk Kim: change the patch name] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: use inner macro GFP_F2FS_ZERO for simplification Use inner macro GFP_F2FS_ZERO to instead of GFP_NOFS | __GFP_ZERO for simplification of code. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: avoid unneeded page release for correct _count of page In find_fsync_dnodes() and recover_data(), our flow is like this: ->f2fs_submit_page_bio() -> f2fs_put_page() -> page_cache_release() ---- page->_count declined to zero. ->__free_pages() -> put_page_testzero() ---- page->_count will be declined again. 
We will get a segment fault in put_page_testzero when CONFIG_DEBUG_VM is on, or return MM with a bad page with wrong _count num. So let's just release this page. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add unlikely() macro for compiler optimization As we know, some of our branch condition will rarely be true. So we could add 'unlikely' to let compiler optimize these code, by this way we could drop unneeded 'jump' assemble code to improve performance. change log: o add *unlikely* as many as possible across the whole source files at once suggested by Jaegeuk Kim. Suggested-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add unlikely() macro for compiler more aggressively This patch adds unlikely() macro into the most of codes. The basic rule is to add that when: - checking unusual errors, - checking page mappings, - and the other unlikely conditions. Change log from v1: - Don't add unlikely for the NULL test and error test: advised by Andi Kleen. Cc: Chao Yu <chao2.yu@samsung.com> Cc: Andi Kleen <andi@firstfloor.org> Reviewed-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: merge pages with the same sync_mode flag Previously f2fs submits most of write requests using WRITE_SYNC, but f2fs_write_data_pages submits last write requests by sync_mode flags callers pass. This causes a performance problem since continuous pages with different sync flags can't be merged in cfq IO scheduler(thanks yu chao for pointing it out), and synchronous requests often take more time. This patch makes the following modifies to DATA writebacks: 1. every page will be written back using the sync mode caller pass. 2. only pages with the same sync mode can be merged in one bio request. These changes are restricted to DATA pages.Other types of writebacks are modified To remain synchronous. In my test with tiotest, f2fs sequence write performance is improved by about 7%-10% , and this patch has no obvious impact on other performance tests. Signed-off-by: Fan Li <fanofcode.li@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: refactor bio->rw handling This patch introduces f2fs_io_info to mitigate the complex parameter list. struct f2fs_io_info { enum page_type type; /* contains DATA/NODE/META/META_FLUSH */ int rw; /* contains R/RS/W/WS */ int rw_flag; /* contains REQ_META/REQ_PRIO */ } 1. f2fs_write_data_pages - DATA - WRITE_SYNC is set when wbc->WB_SYNC_ALL. 2. sync_node_pages - NODE - WRITE_SYNC all the time 3. sync_meta_pages - META - WRITE_SYNC all the time - REQ_META | REQ_PRIO all the time ** f2fs_submit_merged_bio() handles META_FLUSH. 4. ra_nat_pages, ra_sit_pages, ra_sum_pages - META - READ_SYNC Cc: Fan Li <fanofcode.li@samsung.com> Cc: Changman Lee <cm224.lee@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: fix the location of tracepoint We need to get a trace before submit_bio, since its bi_sector is remapped during the submit_bio. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: missing kmem_cache_destroy for discard_entry insmod f2fs.ko is failed after insmod and rmmod firstly. $ sudo insmod fs/f2fs/f2fs.ko insmod: error inserting 'fs/f2fs/f2fs.ko': -1 Cannot allocate memory -- dmesg -- kmem_cache_sanity_check (free_nid): Cache name already exists. 
Signed-off-by: Changman Lee <cm224.lee@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: introduce sysfs entry to control in-place-update policy This patch introduces new sysfs entries for users to control the policy of in-place-updates, namely IPU, in f2fs. Sometimes f2fs suffers from performance degradation due to its out-of-place update policy that produces many additional node block writes. If the storage performance is very dependant on the amount of data writes instead of IO patterns, we'd better drop this out-of-place update policy. This patch suggests 5 polcies and their triggering conditions as follows. [sysfs entry name = ipu_policy] 0: F2FS_IPU_FORCE all the time, 1: F2FS_IPU_SSR if SSR mode is activated, 2: F2FS_IPU_UTIL if FS utilization is over threashold, 3: F2FS_IPU_SSR_UTIL if SSR mode is activated and FS utilization is over threashold, 4: F2FS_IPU_DISABLE disable IPU. (=default option) [sysfs entry name = min_ipu_util] This parameter controls the threshold to trigger in-place-updates. The number indicates percentage of the filesystem utilization, and used by F2FS_IPU_UTIL and F2FS_IPU_SSR_UTIL policies. For more details, see need_inplace_update() in segment.h. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: introduce a new direct_IO write path Previously, f2fs doesn't support direct IOs with high performance, which throws every write requests via the buffered write path, resulting in highly performance degradation due to memory opeations like copy_from_user. This patch introduces a new direct IO path in which every write requests are processed by generic blockdev_direct_IO() with enhanced get_block function. The get_data_block() in f2fs handles: 1. if original data blocks are allocates, then give them to blockdev. 2. otherwise, a. preallocate requested block addresses b. do not use extent cache for better performance c. give the block addresses to blockdev This policy induces that: - new allocated data are sequentially written to the disk - updated data are randomly written to the disk. - f2fs gives consistency on its file meta, not file data. Reviewed-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: write dirty meta pages collectively This patch enhances writing dirty meta pages collectively in background. During the file data writes, it'd better avoid to write small dirty meta pages frequently. So let's give a chance to collect a number of dirty meta pages for a while. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: move all the bio initialization into __bio_alloc Move all the bio initialization into __bio_alloc, and some minor cleanups are also added. v3: Use 'bool' rather than 'int' as Kim suggested. v2: Use 'is_read' rather than 'rw' as Yu Chao suggested. Remove the needless initialization of bio->bi_private. Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: remove the rw_flag domain from f2fs_io_info When using the f2fs_io_info in the low level, we still need to merge the rw and rw_flag, so use the rw to hold all the io flags directly, and remove the rw_flag field. ps.It is based on the previous patch: f2fs: move all the bio initialization into __bio_alloc Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: update several comments Update several comments: 1. use f2fs_{un}lock_op install of mutex_{un}lock_op. 2. update comment of get_data_block(). 3. 
update description of node offset. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: avoid to set wrong pino of inode when rename dir When we rename a dir to new name which is not exist previous, we will set pino of parent inode with ino of child inode in f2fs_set_link. It destroy consistency of pino, it should be fixed. Thanks for previous work of Shu Tan. Signed-off-by: Shu Tan <shu.tan@samsung.com> Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: check filename length in recover_dentry In current flow, we will get Null return value of f2fs_find_entry in recover_dentry when name.len is bigger than F2FS_NAME_LEN, and then we still add this inode into its dir entry. To avoid this situation, we must check filename length before we use it. Another point is that we could remove the code of checking filename length In f2fs_find_entry, because f2fs_lookup will be called previously to ensure of validity of filename length. V2: o add WARN_ON() as Jaegeuk Kim suggested. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: introduce F2FS_INODE macro to get f2fs_inode This patch introduces F2FS_INODE that returns struct f2fs_inode * from the inode page. By using this macro, we can remove unnecessary casting codes like below. struct f2fs_inode *ri = &F2FS_NODE(inode_page)->i; -> struct f2fs_inode *ri = F2FS_INODE(inode_page); Reviewed-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: should put the dnode when NEW_ADDR is detected When get_dnode_of_data() in get_data_block() returns a successful dnode, we should put the dnode. But, previously, if its data block address is equal to NEW_ADDR, we didn't do that, resulting in a deadlock condition. So, this patch splits original error conditions with this case, and then calls f2fs_put_dnode before finishing the function. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: check the blocksize before calling generic_direct_IO path The f2fs supports 4KB block size. If user requests dwrite with under 4KB data, it allocates a new 4KB data block. However, f2fs doesn't add zero data into the untouched data area inside the newly allocated data block. This incurs an error during the xfstest #263 test as follow. 263 12s ... [failed, exit status 1] - output mismatch (see 263.out.bad) --- 263.out 2013-03-09 03:37:15.043967603 +0900 +++ 263.out.bad 2013-12-27 04:20:39.230203114 +0900 @@ -1,3 +1,976 @@ QA output created by 263 fsx -N 10000 -o 8192 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z -fsx -N 10000 -o 128000 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z +fsx -N 10000 -o 8192 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z +truncating to largest ever: 0x12a00 +truncating to largest ever: 0x75400 +fallocating to largest ever: 0x79cbf ... (Run 'diff -u 263.out 263.out.bad' to see the entire diff) Ran: 263 Failures: 263 Failed 1 of 1 tests It turns out that, when the test tries to write 2KB data with dio, the new dio path allocates 4KB data block without filling zero data inside the remained 2KB area. Finally, the output file contains a garbage data for that region. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: convert max_orphans to a field of f2fs_sb_info Previously, we need to calculate the max orphan num when we try to acquire an orphan inode, but it's a stable value since the super block was inited. 
So converting it to a field of f2fs_sb_info and use it directly when needed seems a better choose. Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: key functions to handle inline data Functions to implement inline data read/write, and move inline data to normal data block when file size exceeds inline data limitation. Signed-off-by: Huajun Li <huajun.li@intel.com> Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> Signed-off-by: Weihong Xu <weihong.xu@intel.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: handle inline data operations Hook inline data read/write, truncate, fallocate, setattr, etc. Files need meet following 2 requirement to inline: 1) file size is not greater than MAX_INLINE_DATA; 2) file doesn't pre-allocate data blocks by fallocate(). FI_INLINE_DATA will not be set while creating a new regular inode because most of the files are bigger than ~3.4K. Set FI_INLINE_DATA only when data is submitted to block layer, ranther than set it while creating a new inode, this also avoids converting data from inline to normal data block and vice versa. While writting inline data to inode block, the first data block should be released if the file has a block indexed by i_addr[0]. On the other hand, when a file operation is appied to a file with inline data, we need to test if this file can remain inline by doing this operation, otherwise it should be convert into normal file by reserving a new data block, copying inline data to this new block and clear FI_INLINE_DATA flag. Because reserve a new data block here will make use of i_addr[0], if we save inline data in i_addr[0..872], then the first 4 bytes would be overwriten. This problem can be avoided simply by not using i_addr[0] for inline data. Signed-off-by: Huajun Li <huajun.li@intel.com> Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> Signed-off-by: Weihong Xu <weihong.xu@intel.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: don't need to get f2fs_lock_op for the inline_data test This patch locates checking the inline_data prior to calling f2fs_lock_op() in truncate_blocks(), since getting the lock is unnecessary. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: convert inline_data for punch_hole In the punch_hole(), let's convert inline_data all the time for simplicity and to avoid potential deadlock conditions. It is pretty much not a big deal to do this. Reviewed-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: call f2fs_put_page at the error case In f2fs_write_begin(), if f2fs_conver_inline_data() returns an error like -ENOSPC, f2fs should call f2fs_put_page(). Otherwise, it is remained as a locked page, resulting in the following bug. 
[<ffffffff8114657e>] sleep_on_page+0xe/0x20 [<ffffffff81146567>] __lock_page+0x67/0x70 [<ffffffff81157d08>] truncate_inode_pages_range+0x368/0x5d0 [<ffffffff81157ff5>] truncate_inode_pages+0x15/0x20 [<ffffffff8115804b>] truncate_pagecache+0x4b/0x70 [<ffffffff81158082>] truncate_setsize+0x12/0x20 [<ffffffffa02a1842>] f2fs_setattr+0x72/0x270 [f2fs] [<ffffffff811cdae3>] notify_change+0x213/0x400 [<ffffffff811ab376>] do_truncate+0x66/0xa0 [<ffffffff811ab541>] vfs_truncate+0x191/0x1b0 [<ffffffff811ab5bc>] do_sys_truncate+0x5c/0xa0 [<ffffffff811ab78e>] SyS_truncate+0xe/0x10 [<ffffffff81756052>] system_call_fastpath+0x16/0x1b [<ffffffffffffffff>] 0xffffffffffffffff Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: refactor f2fs_convert_inline_data Change log from v1: o handle NULL pointer of grab_cache_page_write_begin() pointed by Chao Yu. This patch refactors f2fs_convert_inline_data to check a couple of conditions internally for deciding whether it needs to convert inline_data or not. So, the new f2fs_convert_inline_data initially checks: 1) f2fs_has_inline_data(), and 2) the data size to be changed. If the inode has inline_data but the size to fill is less than MAX_INLINE_DATA, then we don't need to convert the inline_data with data allocation. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add the number of inline_data files to status info This patch adds the number of inline_data files into the status information. Note that the number is reset whenever the filesystem is newly mounted. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add inline_data recovery routine This patch adds a inline_data recovery routine with the following policy. [prev.] [next] of inline_data flag o o -> recover inline_data o x -> remove inline_data, and then recover data blocks x o -> remove inline_data, and then recover inline_data x x -> recover data blocks Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: handle errors correctly during f2fs_reserve_block The get_dnode_of_data nullifies inode and node page when error is occurred. There are two cases that passes inode page into get_dnode_of_data(). 1. make_empty_dir() -> get_new_data_page() -> f2fs_reserve_block(ipage) -> get_dnode_of_data() 2. f2fs_convert_inline_data() -> __f2fs_convert_inline_data() -> f2fs_reserve_block(ipage) -> get_dnode_of_data() This patch adds correct error handling codes when get_dnode_of_data() returns an error. At first, f2fs_reserve_block() calls f2fs_put_dnode() whenever reserve_new_block returns an error. So, the rule of f2fs_reserve_block() is to nullify inode page when there is any error internally. Finally, two callers of f2fs_reserve_block() should call f2fs_put_dnode() appropriately if they got an error since successful f2fs_reserve_block(). Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: fix truncate_partial_nodes bug The truncate_partial_nodes puts pages incorrectly in the following two cases. Note that the value for argc 'depth' can only be 2 or 3. Please see truncate_inode_blocks() and truncate_partial_nodes(). 1) An err is occurred in the first 'for' loop When err is occurred with depth = 2, pages[0] is invalid, so this page doesn't need to be put. There is no problem, however, when depth is 3, it doesn't put the pages correctly where pages[0] is valid and pages[1] is invalid. In this case, depth is set to 2 (ref to statemnt depth = i + 1), and then 'goto fail'. 
In label 'fail', for (i = depth - 3; i >= 0; i--) cannot meet the condition because i = -1, so pages[0] cann't be put. 2) An err happened in the second 'for' loop Now we've got pages[0] with depth = 2, or we've got pages[0] and pages[1] with depth = 3. When an err is detected, we need 'goto fail' to put such the pages. When depth is 2, in label 'fail', for (i = depth - 3; i >= 0; i--) cann't meet the condition because i = -1, so pages[0] cann't be put. When depth is 3, in label 'fail', for (i = depth - 3; i >= 0; i--) can only put pages[0], pages[1] also cann't be put. Note that 'depth' has been changed before first 'goto fail' (ref to statemnt depth = i + 1), so passing this modified 'depth' to the tracepoint, trace_f2fs_truncate_partial_nodes, is also incorrect. Signed-off-by: Shifei Ge <shifei10.ge@samsung.com> [Jaegeuk Kim: modify the description and fix one bug] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: avoid to left uninitialized data in page when read inline data Change log from v1: o reduce unneeded memset in __f2fs_convert_inline_data >From 58796be2bd2becbe8d52305210fb2a64e7dd80b6 Mon Sep 17 00:00:00 2001 From: Chao Yu <chao2.yu@samsung.com> Date: Mon, 30 Dec 2013 09:21:33 +0800 Subject: [PATCH] f2fs: avoid to left uninitialized data in page when read inline data We left uninitialized data in the tail of page when we read an inline data page. So let's initialize left part of the page excluding inline data region. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: avoid to read inline data except first page Here is a case which could read inline page data not from first page. 1. write inline data 2. lseek to offset 4096 3. read 4096 bytes from offset 4096 (read_inline_data read inline data page to non-first page, And previously VFS has add this page to page cache) 4. ftruncate offset 8192 5. read 4096 bytes from offset 4096 (we meet this updated page with inline data in cache) So we should leave this page with inited data and uptodate flag for this case. Change log from v1: o fix a deadlock bug Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: improve write performance under frequent fsync calls When considering a bunch of data writes with very frequent fsync calls, we are able to think the following performance regression. N: Node IO, D: Data IO, IO scheduler: cfq Issue pending IOs D1 D2 D3 D4 D1 D2 D3 D4 N1 D2 D3 D4 N1 N2 N1 D3 D4 N2 D1 --> N1 can be selected by cfq becase of the same priority of N and D. Then D3 and D4 would be delayed, resuling in performance degradation. So, when processing the fsync call, it'd better give higher priority to data IOs than node IOs by assigning WRITE and WRITE_SYNC respectively. This patch improves the random wirte performance with frequent fsync calls by up to 10%. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add a sysfs entry to control max_victim_search Previously during SSR and GC, the maximum number of retrials to find a victim segment was hard-coded by MAX_VICTIM_SEARCH, 4096 by default. This number makes an effect on IO locality, when SSR mode is activated, which results in performance fluctuation on some low-end devices. If max_victim_search = 4, the victim will be searched like below. ("D" represents a dirty segment, and "*" indicates a selected victim segment.) D1 D2 D3 D4 D5 D6 D7 D8 D9 [ * ] [ * ] [ * ] [ ....] 
This patch adds a sysfs entry to control the number dynamically through: /sys/fs/f2fs/$dev/max_victim_search Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: remove the needless parameter of f2fs_wait_on_page_writeback "boo sync" parameter is never referenced in f2fs_wait_on_page_writeback. We should remove this parameter. Signed-off-by: Yuan Zhong <yuan.mark.zhong@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: move grabing orphan pages out of protection region Move grabing orphan block page out of protection region, and grab all the orphan block pages ahead. Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Reviewed-by: Chao Yu <chao2.yu@samsung.com> [Jaegeuk Kim: remove unnecessary code pointed by Chao Yu] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: move alloc new orphan node out of lock protection region Move alloc new orphan node out of lock protection region. Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: use spinlock rather than mutex for better speed With the 2 previous changes, all the long time operations are moved out of the protection region, so here we can use spinlock rather than mutex (orphan_inode_mutex) for lower overhead. Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add delimiter to seperate name and value in debug phrase Support for f2fs-tools/tools/f2stat to monitor /sys/kernel/debug/f2fs/status Signed-off-by: Changman Lee <cm224.lee@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: avoid f2fs_balance_fs call during pageout This patch should resolve the following bug. ========================================================= [ INFO: possible irq lock inversion dependency detected ] 3.13.0-rc5.f2fs+ #6 Not tainted --------------------------------------------------------- kswapd0/41 just changed the state of lock: (&sbi->gc_mutex){+.+.-.}, at: [<ffffffffa030503e>] f2fs_balance_fs+0xae/0xd0 [f2fs] but this lock took another, RECLAIM_FS-READ-unsafe lock in the past: (&sbi->cp_rwsem){++++.?} and interrupts could create inverse lock ordering between them. other info that might help us debug this: Chain exists of: &sbi->gc_mutex --> &sbi->cp_mutex --> &sbi->cp_rwsem Possible interrupt unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&sbi->cp_rwsem); local_irq_disable(); lock(&sbi->gc_mutex); lock(&sbi->cp_mutex); <Interrupt> lock(&sbi->gc_mutex); *** DEADLOCK *** This bug is due to the f2fs_balance_fs call in f2fs_write_data_page. If f2fs_write_data_page is triggered by wbc->for_reclaim via kswapd, it should not call f2fs_balance_fs which tries to get a mutex grabbed by original syscall flow. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: missing REQ_META and REQ_PRIO when sync_meta_pages(META_FLUSH) Doing sync_meta_pages with META_FLUSH when checkpoint, we overide rw using WRITE_FLUSH_FUA. At this time, we also should set REQ_META|REQ_PRIO. Signed-off-by: Changman Lee <cm224.lee@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: clean checkpatch warnings Fixed a variety of trivial checkpatch warnings. The only delta should be some minor formatting on log strings that were split / too long. 
Signed-off-by: Chris Fries <cfries@motorola.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: call mark_inode_dirty to flush dirty pages If a dentry page is updated, we should call mark_inode_dirty to add the inode into the dirty list, so that its dentry pages are flushed to the disk. Otherwise, the inode can be evicted without flush. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: move a branch for code redability This patch moves a function in f2fs_delete_entry for code readability. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: add help function META_MAPPING Introduce help function META_MAPPING() to get the cache meta blocks' address space. Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: remove the orphan block page array As the orphan_blocks may be max to 504, so it is not security and rigorous to store such a large array in the kernel stack as Dan Carpenter said. In fact, grab_meta_page has locked the page in the page cache, and we can use find_get_page() to fetch the page safely in the downstream, so we can remove the page array directly. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: introduce NODE_MAPPING for code consistency This patch adds NODE_MAPPING which is similar as META_MAPPING introduced by Gu Zheng. Cc: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> f2fs: drop obsolete node page when it is truncated If a node page is trucated, we'd better drop the page in the node_inode's page cache for better memory footprint. Change-Id: I1b7c3378b7f7adb13477376164e91ab080b50d8b Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
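[Editor's note] The inline-data patches in the squash above quote several sizes: 923 data indices per inode block, the last 50 taken by inline xattrs, i_addr[0] kept free for converting back to a regular block, and a ~3.4K file-size limit. The small check below reproduces that arithmetic; the constant names are illustrative stand-ins, not f2fs's actual macros.

/* Sanity check of the inline-data capacity quoted above. */
#include <stdio.h>
#include <stdint.h>

#define ADDRS_PER_INODE      923   /* data indices in an inode block */
#define INLINE_XATTR_ADDRS    50   /* tail indices used by inline xattrs */
#define RESERVED_FOR_CONVERT   1   /* i_addr[0], kept for reserve_new_block() */

int main(void)
{
	int slots = ADDRS_PER_INODE - INLINE_XATTR_ADDRS - RESERVED_FOR_CONVERT;
	size_t max_inline = (size_t)slots * sizeof(uint32_t);

	/* Expect 872 usable slots and 3488 bytes (~3.4K), matching the
	 * i_addr[1..872] range and the file-size limit in the commit text. */
	printf("usable slots: %d\n", slots);
	printf("max inline data: %zu bytes\n", max_inline);
	return 0;
}

The 3488-byte result matches the "less than 3488 bytes (~3.4k)" limit given alongside the i_addr layout diagram in the commit message.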
* f2fs: Pull in from upstream 3.13 kernel | Linus Torvalds | 2014-07-24 | 3 | -0/+1158
Merge tag 'for-3.8-merge' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull new F2FS filesystem from Jaegeuk Kim:
"Introduce a new file system, Flash-Friendly File System (F2FS), to Linux 3.8.

Highlights:
- Add initial f2fs source codes
- Fix an endian conversion bug
- Fix build failures on random configs
- Fix the power-off-recovery routine
- Minor cleanup, coding style, and typos patches"

From the Kconfig help text: F2FS is based on Log-structured File System (LFS), which supports versatile "flash-friendly" features. The design has been focused on addressing the fundamental issues in LFS, which are snowball effect of wandering tree and high cleaning overhead. Since flash-based storages show different characteristics according to the internal geometry or flash memory management schemes aka FTL, F2FS and tools support various parameters not only for configuring on-disk layout, but also for selecting allocation and cleaning algorithms.

and there's an article by Neil Brown about it on lwn.net: http://lwn.net/Articles/518988/

* tag 'for-3.8-merge' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (36 commits) f2fs: fix tracking parent inode number f2fs: cleanup the f2fs_bio_alloc routine f2fs: introduce accessor to retrieve number of dentry slots f2fs: remove redundant call to f2fs_put_page in delete entry f2fs: make use of GFP_F2FS_ZERO for setting gfp_mask f2fs: rewrite f2fs_bio_alloc to make it simpler f2fs: fix a typo in f2fs documentation f2fs: remove unused variable f2fs: move error condition for mkdir at proper place f2fs: remove unneeded initialization f2fs: check read only condition before beginning write out f2fs: remove unneeded memset from init_once f2fs: show error in case of invalid mount arguments f2fs: fix the compiler warning for uninitialized use of variable f2fs: resolve build failures f2fs: adjust kernel coding style f2fs: fix endian conversion bugs reported by sparse f2fs: remove unneeded version.h header file from f2fs.h f2fs: update the f2fs document f2fs: update Kconfig and Makefile ...

Conflicts:
	include/uapi/linux/magic.h

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs bug fixes from Jaegeuk Kim:
"This patch-set includes two major bug fixes:
- incorrect IUsed provided by *df -i*, and
- lookup failure of parent inodes in corner cases.
[Other Bug Fixes] - Fix error handling routines - Trigger recovery process correctly - Resolve build failures due to missing header files [Etc] - Add a MAINTAINERS entry for f2fs - Fix and clean up variables, functions, and equations - Avoid warnings during compilation" * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: f2fs: unify string length declarations and usage f2fs: clean up unused variables and return values f2fs: clean up the start_bidx_of_node function f2fs: remove unneeded variable from f2fs_sync_fs f2fs: fix fsync_inode list addition logic and avoid invalid access to memory f2fs: remove unneeded initialization of nr_dirty in dirty_seglist_info f2fs: handle error from f2fs_iget_nowait f2fs: fix equation of has_not_enough_free_secs() f2fs: add MAINTAINERS entry f2fs: return a default value for non-void function f2fs: invalidate the node page if allocation is failed f2fs: add missing #include <linux/prefetch.h> f2fs: do f2fs_balance_fs in front of dir operations f2fs: should recover orphan and fsync data f2fs: fix handling errors got by f2fs_write_inode f2fs: fix up f2fs_get_parent issue to retrieve correct parent inode number f2fs: fix wrong calculation on f_files in statfs f2fs: remove set_page_dirty for atomic f2fs_end_io_write Merge tag 'f2fs-for-3.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs Pull f2fs fixes from Jaegeuk Kim: o Support swap file and link generic_file_remap_pages o Enhance the bio streaming flow and free section control o Major bug fix on recovery routine o Minor bug/warning fixes and code cleanups * tag 'f2fs-for-3.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (22 commits) f2fs: use _safe() version of list_for_each f2fs: add comments of start_bidx_of_node f2fs: avoid issuing small bios due to several dirty node pages f2fs: support swapfile f2fs: add remap_pages as generic_file_remap_pages f2fs: add __init to functions in init_f2fs_fs f2fs: fix the debugfs entry creation path f2fs: add global mutex_lock to protect f2fs_stat_list f2fs: remove the blk_plug usage in f2fs_write_data_pages f2fs: avoid redundant time update for parent directory in f2fs_delete_entry f2fs: remove redundant call to set_blocksize in f2fs_fill_super f2fs: move f2fs_balance_fs to punch_hole f2fs: add f2fs_balance_fs in several interfaces f2fs: revisit the f2fs_gc flow f2fs: check return value during recovery f2fs: avoid null dereference in f2fs_acl_from_disk f2fs: initialize newly allocated dnode structure f2fs: update f2fs partition info about SIT/NAT layout f2fs: update f2fs document to reflect SIT/NAT layout correctly f2fs: remove unneeded INIT_LIST_HEAD at few places ... Merge tag 'f2fs-for-3.9' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs Pull f2fs update from Jaegeuk Kim: "[Major bug fixes] o Store device file information correctly o Fix -EIO handling with respect to power-off-recovery o Allocate blocks with global locks o Fix wrong calculation of the SSR cost [Cleanups] o Get rid of fake on-stack dentries [Enhancement] o Support (un)freeze_fs o Enhance the f2fs_gc flow o Support 32-bit binary execution on 64-bit kernel" * tag 'f2fs-for-3.9' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (29 commits) f2fs: avoid build warning f2fs: add compat_ioctl to provide backward compatability f2fs: fix calculation of max. 
gc cost in the SSR case f2fs: clarify and enhance the f2fs_gc flow f2fs: optimize the return condition for has_not_enough_free_secs f2fs: make an accessor to get sections for particular block type f2fs: mark gc_thread as NULL when thread creation is failed f2fs: name gc task as per the block device f2fs: remove unnecessary gc option check and balance_fs f2fs: remove repeated F2FS_SET_SB_DIRT call f2fs: when check superblock failed, try to check another superblock f2fs: use F2FS_BLKSIZE to judge bloksize and page_cache_size f2fs: add device name in debugfs f2fs: stop repeated checking if cp is needed f2fs: avoid balanc_fs during evict_inode f2fs: remove the use of page_cache_release f2fs: fix typo mistake for data_version description f2fs: reorganize code for ra_node_page f2fs: avoid redundant call to has_not_enough_free_secs in f2fs_gc f2fs: add un/freeze_fs into super_operations ... Merge tag 'f2fs-for-v3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs Pull f2fs updates from Jaegeuk Kim: "This patch-set includes the following major enhancement patches. - introduce a new gloabl lock scheme - add tracepoints on several major functions - fix the overall cleaning process focused on victim selection - apply the block plugging to merge IOs as much as possible - enhance management of free nids and its list - enhance the readahead mode for node pages - address several cretical deadlock conditions - reduce lock_page calls The other minor bug fixes and enhancements are as follows. - calculation mistakes: overflow - bio types: READ, READA, and READ_SYNC - fix the recovery flow, data races, and null pointer errors" * tag 'f2fs-for-v3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (68 commits) f2fs: cover free_nid management with spin_lock f2fs: optimize scan_nat_page() f2fs: code cleanup for scan_nat_page() and build_free_nids() f2fs: bugfix for alloc_nid_failed() f2fs: recover when journal contains deleted files f2fs: continue to mount after failing recovery f2fs: avoid deadlock during evict after f2fs_gc f2fs: modify the number of issued pages to merge IOs f2fs: remove useless #include <linux/proc_fs.h> as we're now using sysfs as debug entry. f2fs: fix inconsistent using of NM_WOUT_THRESHOLD f2fs: check truncation of mapping after lock_page f2fs: enhance alloc_nid and build_free_nids flows f2fs: add a tracepoint on f2fs_new_inode f2fs: check nid == 0 in add_free_nid f2fs: add REQ_META about metadata requests for submit f2fs: give a chance to merge IOs by IO scheduler f2fs: avoid frequent background GC f2fs: add tracepoints to debug checkpoint request f2fs: add tracepoints for write page operations f2fs: add tracepoints to debug the block allocation ... Merge tag 'for-f2fs-3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs Pull f2fs updates from Jaegeuk Kim: "This patch-set includes the following major enhancement patches: - remount_fs callback function - restore parent inode number to enhance the fsync performance - xattr security labels - reduce the number of redundant lock/unlock data pages - avoid frequent write_inode calls The other minor bug fixes are as follows. 
- endian conversion bugs - various bugs in the roll-forward recovery routine" * tag 'for-f2fs-3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (56 commits) f2fs: fix to recover i_size from roll-forward f2fs: remove the unused argument "sbi" of func destroy_fsync_dnodes() f2fs: remove reusing any prefree segments f2fs: code cleanup and simplify in func {find/add}_gc_inode f2fs: optimize the init_dirty_segmap function f2fs: fix an endian conversion bug detected by sparse f2fs: fix crc endian conversion f2fs: add remount_fs callback support f2fs: recover wrong pino after checkpoint during fsync f2fs: optimize do_write_data_page() f2fs: make locate_dirty_segment() as static f2fs: remove unnecessary parameter "offset" from __add_sum_entry() f2fs: avoid freqeunt write_inode calls f2fs: optimise the truncate_data_blocks_range() range f2fs: use the F2FS specific flags in f2fs_ioctl() f2fs: sync dir->i_size with its block allocation f2fs: fix i_blocks translation on various types of files f2fs: set sb->s_fs_info before calling parse_options() f2fs: support xattr security labels f2fs: fix iget/iput of dir during recovery ... Merge tag 'for-f2fs-3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs Pull f2fs updates from Jaegeuk Kim: "This patch-set includes the following major enhancement patches: - support inline xattrs - add sysfs support to control GCs explicitly - add proc entry to show the current segment usage information - improve the GC/SSR performance The other bug fixes are as follows: - avoid the overflow on status calculation - fix some error handling routines - fix inconsistent xattr states after power-off-recovery - fix incorrect xattr node offset definition - fix deadlock condition in fsync - fix the fdatasync routine for power-off-recovery" * tag 'for-f2fs-3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (40 commits) f2fs: optimize gc for better performance f2fs: merge more bios of node block writes f2fs: avoid an overflow during utilization calculation f2fs: trigger GC when there are prefree segments f2fs: use strncasecmp() simplify the string comparison f2fs: fix omitting to update inode page f2fs: support the inline xattrs f2fs: add the truncate_xattr_node function f2fs: introduce __find_xattr for readability f2fs: reserve the xattr space dynamically f2fs: add flags for inline xattrs f2fs: fix error return code in init_f2fs_fs() f2fs: fix wrong BUG_ON condition f2fs: fix memory leak when init f2fs filesystem fail f2fs: fix a compound statement label error f2fs: avoid writing inode redundantly when creating a file f2fs: alloc_page() doesn't return an ERR_PTR f2fs: should cover i_xattr_nid with its xattr node page lock f2fs: check the free space first in new_node_page f2fs: clean up the needless end 'return' of void function ... Merge tag 'for-f2fs-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs Pull f2fs updates from Jaegeuk Kim: "This patch-set includes the following major enhancement patches. - add a sysfs to control reclaiming free segments - enhance the f2fs global lock procedures - enhance the victim selection flow - wait for selected node blocks during fsync - add some tracepoints - add a config to remove abundant BUG_ONs The other bug fixes are as follows. 
- fix deadlock on acl operations - fix some bugs with respect to orphan inodes And, there are a bunch of cleanups" * tag 'for-f2fs-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (42 commits) f2fs: issue more large discard command f2fs: fix memory leak after kobject init failed in fill_super f2fs: cleanup waiting routine for writeback pages in cp f2fs: avoid to use a NULL point in destroy_segment_manager f2fs: remove unnecessary TestClearPageError when wait pages writeback f2fs: update f2fs document f2fs: avoid to wait all the node blocks during fsync f2fs: check all ones or zeros bitmap with bitops for better mount performance f2fs: change the method of calculating the number summary blocks f2fs: fix calculating incorrect free size when update xattr in __f2fs_setxattr f2fs: add an option to avoid unnecessary BUG_ONs f2fs: introduce CONFIG_F2FS_CHECK_FS for BUG_ON control f2fs: fix a deadlock during init_acl procedure f2fs: clean up acl flow for better readability f2fs: remove unnecessary segment bitmap updates f2fs: add tracepoint for vm_page_mkwrite f2fs: add tracepoint for set_page_dirty f2fs: remove redundant set_page_dirty from write_compacted_summaries f2fs: add reclaiming control by sysfs f2fs: introduce f2fs_balance_fs_bg for some background jobs ... Change-Id: Ied5488471d49d64ce6abb4be19237c4e90829ff6
* m7: Linux 3.4.98Greg Kroah-Hartman2014-07-132-1/+25
| | | | | | | | | | ** The following commit from upstream wasn't included: Bluetooth: Fix SSP acceptor just-works confirmation without MITM (commit ba15a58b179ed76a7e887177f2b06de12c58ec8f upstream.) Change-Id: I217b44bc8887ec0fc79da6c01b9da180554ef373 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* m7: Linux 3.4.97doc2014-07-082-1/+33
| | | | | Change-Id: If76bfe02408af846e51529a3eefdc9d5a70aebe7 Signed-off-by: doc <doc.divxm@gmail.com>
* m7: Linux 3.4.96doc2014-07-072-0/+54
| | | | | | | | | | | | | | | The following commits from upstream weren't included: ** commit 1b15d2e5b8077670b1e6a33250a0d9577efff4a5 upstream. + USB: usb_wwan Changes (6 commits) ** commit db0904737947d509844e171c9863ecc5b4534005 upstream. ** commit d9e93c08d8d985e5ef89436ebc9f4aad7e31559f upstream. ** commit 170fad9e22df0063eba0701adb966786d7a4ec5a upstream. ** commit 79eed03e77d481b55d85d1cfe5a1636a0d3897fd upstream. ** commit fb7ad4f93d9f0f7d49beda32f5e7becb94b29a4d upstream. ** commit 9096f1fbba916c2e052651e9de82fcfb98d4bea7 Change-Id: I29990e47b287a5e727fcb8eeb9a6d7de1cf0a462 Signed-off-by: doc <doc.divxm@gmail.com>
* freezer: add new freezable helpers using freezer_do_not_countflar22014-06-301-2/+2
| | | | | | | | Conflicts: include/linux/freezer.h Change-Id: I97f206569bc142b9a22d2eb0c13e237760ddc71c Signed-off-by: flar2 <asegaert@gmail.com>
* m7: fix pocket detectionflar22014-06-291-0/+2
| | | | | | | | Conflicts: include/linux/pl_sensor.h Change-Id: I9f9c0c8a2c494e32b8c659d2f553fa4ab572e2a0 Signed-off-by: flar2 <asegaert@gmail.com>
* m7: Linux 3.4.95doc2014-06-293-39/+39
| | | | | | | | | | The following commit from upstream wasn't included: https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=3f8f4ae48f4023e1c53722b1dc1a7ab897cbca14 Change-Id: I030f99b027994bc23927a2b93c914827c40d1398 Signed-off-by: doc <doc.divxm@gmail.com>
* m7: Build fixup after updates for new battery, board & OTG driversBrian Larsen2014-06-232-24/+22
| | | | | | | | | | | | | | | | | | | | PS1: AICPifying new drivers Changes to get the following patches to build: - http://gerrit.aicp-rom.com/#/c/241/ - http://gerrit.aicp-rom.com/#/c/242/ - http://gerrit.aicp-rom.com/#/c/243/ - http://gerrit.aicp-rom.com/#/c/244/ - http://gerrit.aicp-rom.com/#/c/245/ I'm not 100% sure whether all my merges are correct as far as adding something from the CM version vs. deleting something from the AICP version, but I was able to build and run on my M7 without any noticeable problem. PS2: Removed whitespace in include/linux/usb/otg.h Change-Id: I1225badd3636066a7e58d00cb3fcac72e58426c3
* USB: msm_otg: Update HTC variant of msm_otgEthan Chen2014-06-232-6/+8
| | | | | | | | | | | | | HTC kernel version: m7stock_3.4.10-g1a25406 Conflicts: arch/arm/mach-msm/include/mach/board-ext-htc.h arch/arm/mach-msm/include/mach/cable_detect.h drivers/usb/otg/msm_otg_htc.c include/linux/usb/msm_hsusb.h include/linux/usb/otg.h Change-Id: I2707390c287a6d2a9f05814aea25362347240048
* power: pm8921: Update HTC pm8921 driversEthan Chen2014-06-232-0/+743
| | | | | | | | | | | * HTC kernel version: m7stock_3.4.10-g1a25406 Conflicts: drivers/power/pm8921-charger-htc.c include/linux/mfd/pm8xxx/pm8921-bms-htc.h include/linux/mfd/pm8xxx/pm8921-charger-htc.h Change-Id: I5b35f54a48b051b7f3b03f0d867b9d14d9ddf8a2
* m7: Linux 3.4.93Greg Kroah-Hartman2014-06-225-9/+22
| | | | | | | | | The following commit from upstream was reverted: https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=4b2cfc9508d9e509708f85748548e03db696dbcd Change-Id: I79ab561bb093a73de47a02b47efe8a1fee8ede91 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* m7: Linux 3.4.92doc2014-06-1812-13/+590
| | | | | | | | | | | The following commits were excluded: 1. https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=ad4751d31602b7e5334e3c9d6cd488ca35d0b89c 2. https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=4652951d1202cef2798b1a0dfbf4122794594b41 Change-Id: I18a9dc678aa420991b4e138ad7048af02c88aea6 Signed-off-by: doc <doc.divxm@gmail.com>
* m7: Linux 3.4.91Greg Kroah-Hartman2014-05-183-5/+66
| | | | | | Change-Id: I9f5372a493f370544425485a65e9810f109b5db2 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: doc <doc.divxm@gmail.com>
* irq_work: Make self-IPIs optableFrederic Weisbecker2014-05-171-0/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | On irq work initialization, let the user choose to define it as "lazy" or not. "Lazy" means that we don't want to send an IPI (provided the arch can anyway) when we enqueue this work but we rather prefer to wait for the next timer tick to execute our work if possible. This is going to be a benefit for non-urgent enqueuers (like printk in the future) that may prefer not to raise an IPI storm in case of frequent enqueuing on short periods of time. Conflicts: kernel/irq_work.c Change-Id: Ieca43ed96807dce785f3222c4165068d36301d9b Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
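A minimal sketch of how a "lazy" irq_work is meant to be declared and queued, assuming this backport keeps the upstream form of the patch (an IRQ_WORK_LAZY flag in struct irq_work); the function and variable names are illustrative, and printk's klogd wakeup is the intended first user of this shape:

	#include <linux/irq_work.h>

	static void my_deferred_func(struct irq_work *work)
	{
		/* runs from the irq_work path; for a lazy work this may be
		 * deferred until the next timer tick instead of a self-IPI */
	}

	/* illustrative name; .flags = IRQ_WORK_LAZY marks it as non-urgent */
	static struct irq_work my_lazy_work = {
		.flags	= IRQ_WORK_LAZY,
		.func	= my_deferred_func,
	};

	static void my_kick(void)
	{
		/* safe from any context; frequent calls raise no IPI storm */
		irq_work_queue(&my_lazy_work);
	}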
* printk: Wake up klogd using irq_workFrederic Weisbecker2014-05-171-3/+0
| | | | | | | | | | | | | | | | | | | | | | | klogd is woken up asynchronously from the tick in order to do it safely. However if printk is called when the tick is stopped, the reader won't be woken up until the next interrupt, which might not fire for a while. As a result, the user may miss some message. To fix this, lets implement the printk tick using a lazy irq work. This subsystem takes care of the timer tick state and can fix up accordingly. Change-Id: I85b6f3b20d30bccb1970420a27f600e9ec25672d Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
* irq_work: Don't stop the tick with pending worksFrederic Weisbecker2014-05-171-1/+7
| | | | | | | | | | | | | | | | | | | | Don't stop the tick if we have pending irq works on the queue, otherwise if the arch can't raise self-IPIs, we may not find an opportunity to execute the pending works for a while. Conflicts: include/linux/irq_work.h kernel/irq_work.c Change-Id: I71dbccedd7addb8cf257a94bd4ae64d4c66c55a6 Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
* nohz: Add API to check tick stateFrederic Weisbecker2014-05-171-1/+16
| | | | | | | | | | | | | | | | | | | | We need some quick way to check if the CPU has stopped its tick. This will be useful to implement the printk tick using the irq work subsystem. Conflicts: include/linux/tick.h kernel/time/tick-sched.c Change-Id: I32b03081589a7d58a56344c9053bdf8a4130530f Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
* nohz: Rename ts->idle_tick to ts->last_tickFrederic Weisbecker2014-05-171-1/+20
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Now that idle and nohz logics are going to be independant each others, ts->idle_tick becomes too much a biased name to describe the field that saves the last scheduled tick on top of which we re-calculate the next tick to schedule when the timer is restarted. We want to reuse this even to stop the tick outside idle cases. So let's rename it to some more generic name: ts->last_tick. This changes a bit the timer list stat export so we need to increase its version. Conflicts: include/linux/tick.h Change-Id: Id8feb45c967744b7271cb2a6786ffb1ff02b065a Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Alessio Igor Bogani <abogani@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Avi Kivity <avi@redhat.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Christoph Lameter <cl@linux.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Geoff Levand <geoff@infradead.org> Cc: Gilad Ben Yossef <gilad@benyossef.com> Cc: Hakan Akkan <hakanakkan@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Kevin Hilman <khilman@ti.com> Cc: Max Krasnyansky <maxk@qualcomm.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephen Hemminger <shemminger@vyatta.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Sven-Thorsten Dietrich <thebigcorporation@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
* sched/nohz: Clean up select_nohz_load_balancer()Peter Zijlstra2014-05-171-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | There is no load_balancer to be selected now. It just sets the state of the nohz tick to stop. So rename the function, pass the 'cpu' as a parameter and then remove the useless call from tick_nohz_restart_sched_tick(). [ s/set_nohz_tick_stopped/nohz_balance_enter_idle/g s/clear_nohz_tick_stopped/nohz_balance_exit_idle/g ] Conflicts: kernel/sched/fair.c kernel/time/tick-sched.c Change-Id: Ia6a37e53e08a4af9cb7e9dd8e8a07004a1415936 Signed-off-by: Alex Shi <alex.shi@intel.com> Acked-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Venkatesh Pallipadi <venki@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1347261059-24747-1-git-send-email-alex.shi@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com> Signed-off-by: LaboDJ <jacopolabardi@gmail.com>
* sched: Fix init NOHZ_IDLE flagVincent Guittot2014-05-171-1/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | On my SMP platform which is made of 5 cores in 2 clusters, I have the nr_busy_cpu field of sched_group_power struct that is not null when the platform is fully idle - which makes the scheduler unhappy. The root cause is: During the boot sequence, some CPUs reach the idle loop and set their NOHZ_IDLE flag while waiting for others CPUs to boot. But the nr_busy_cpus field is initialized later with the assumption that all CPUs are in the busy state whereas some CPUs have already set their NOHZ_IDLE flag. More generally, the NOHZ_IDLE flag must be initialized when new sched_domains are created in order to ensure that NOHZ_IDLE and nr_busy_cpus are aligned. This condition can be ensured by adding a synchronize_rcu() between the destruction of old sched_domains and the creation of new ones so the NOHZ_IDLE flag will not be updated with old sched_domain once it has been initialized. But this solution introduces a additionnal latency in the rebuild sequence that is called during cpu hotplug. As suggested by Frederic Weisbecker, another solution is to have the same rcu lifecycle for both NOHZ_IDLE and sched_domain struct. A new nohz_idle field is added to sched_domain so both status and sched_domain will share the same RCU lifecycle and will be always synchronized. In addition, there is no more need to protect nohz_idle against concurrent access as it is only modified by 2 exclusive functions called by local cpu. This solution has been prefered to the creation of a new struct with an extra pointer indirection for sched_domain. The synchronization is done at the cost of : - An additional indirection and a rcu_dereference for accessing nohz_idle. - We use only the nohz_idle field of the top sched_domain. Conflicts: include/linux/sched.h Change-Id: Ib9907196df8e17ecf06d8a430f2130d8a13025af Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: linaro-kernel@lists.linaro.org Cc: peterz@infradead.org Cc: fweisbec@gmail.com Cc: pjt@google.com Cc: rostedt@goodmis.org Cc: efault@gmx.de Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
* net: support marking accepting TCP socketsLorenzo Colitti2014-05-172-0/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When using mark-based routing, sockets returned from accept() may need to be marked differently depending on the incoming connection request. This is the case, for example, if different socket marks identify different networks: a listening socket may want to accept connections from all networks, but each connection should be marked with the network that the request came in on, so that subsequent packets are sent on the correct network. This patch adds a sysctl to mark TCP sockets based on the fwmark of the incoming SYN packet. If enabled, and an unmarked socket receives a SYN, then the SYN packet's fwmark is written to the connection's inet_request_sock, and later written back to the accepted socket when the connection is established. If the socket already has a nonzero mark, then the behaviour is the same as it is today, i.e., the listening socket's fwmark is used. Black-box tested using user-mode linux: - IPv4/IPv6 SYN+ACK, FIN, etc. packets are routed based on the mark of the incoming SYN packet. - The socket returned by accept() is marked with the mark of the incoming SYN packet. - Tested with syncookies=1 and syncookies=2. Conflicts: net/ipv4/syncookies.c Change-Id: I852f976bc844e33cf0ed0ce140748a8774338881 Signed-off-by: Lorenzo Colitti <lorenzo@google.com> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
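An illustrative userspace sketch (not part of the patch itself), assuming the sysctl keeps its upstream name net.ipv4.tcp_fwmark_accept and has been enabled: the listener is left unmarked, and each accepted socket then carries the fwmark of the SYN that created it, which can be read back with SO_MARK. The port number is made up and error handling is omitted:

	#include <stdio.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <arpa/inet.h>

	#ifndef SO_MARK
	#define SO_MARK 36	/* asm-generic value */
	#endif

	int main(void)
	{
		int lfd = socket(AF_INET, SOCK_STREAM, 0);
		struct sockaddr_in a = { .sin_family = AF_INET };

		a.sin_port = htons(8080);		/* illustrative port */
		a.sin_addr.s_addr = htonl(INADDR_ANY);
		bind(lfd, (struct sockaddr *)&a, sizeof(a));
		listen(lfd, 16);

		int cfd = accept(lfd, NULL, NULL);
		unsigned int mark = 0;
		socklen_t len = sizeof(mark);

		/* with the sysctl on, this reports the incoming SYN's fwmark */
		getsockopt(cfd, SOL_SOCKET, SO_MARK, &mark, &len);
		printf("accepted socket inherited fwmark %u\n", mark);
		return 0;
	}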
* net: add a sysctl to reflect the fwmark on repliesLorenzo Colitti2014-05-174-0/+8
| | | | | | | | | | | | | | | | | | | | | | | Kernel-originated IP packets that have no user socket associated with them (e.g., ICMP errors and echo replies, TCP RSTs, etc.) are emitted with a mark of zero. Add a sysctl to make them have the same mark as the packet they are replying to. This allows an administrator that wishes to do so to use mark-based routing, firewalling, etc. for these replies by marking the original packets inbound. Tested using user-mode linux: - ICMP/ICMPv6 echo replies and errors. - TCP RST packets (IPv4 and IPv6). Conflicts: Documentation/networking/ip-sysctl.txt net/ipv6/tcp_ipv6.c Change-Id: I1a3553ebea26dbc988272671cfa49f9a04dac520 Signed-off-by: Lorenzo Colitti <lorenzo@google.com> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
* net: ipv6: autoconf routes into per-device tablesLorenzo Colitti2014-05-172-0/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, IPv6 router discovery always puts routes into RT6_TABLE_MAIN. This causes problems for connection managers that want to support multiple simultaneous network connections and want control over which one is used by default (e.g., wifi and wired). To work around this connection managers typically take the routes they prefer and copy them to static routes with low metrics in the main table. This puts the burden on the connection manager to watch netlink to see if the routes have changed, delete the routes when their lifetime expires, etc. Instead, this patch adds a per-interface sysctl to have the kernel put autoconf routes into different tables. This allows each interface to have its own autoconf table, and choosing the default interface (or using different interfaces at the same time for different types of traffic) can be done using appropriate ip rules. The sysctl behaves as follows: - = 0: default. Put routes into RT6_TABLE_MAIN as before. - > 0: manual. Put routes into the specified table. - < 0: automatic. Add the absolute value of the sysctl to the device's ifindex, and use that table. The automatic mode is most useful in conjunction with net.ipv6.conf.default.accept_ra_rt_table. A connection manager or distribution could set it to, say, -100 on boot, and thereafter just use IP rules. Conflicts: include/linux/ipv6.h net/ipv6/addrconf.c net/ipv6/route.c Change-Id: I093d39fb06ec413905dc0d0d5792c1bc5d5c73a9 Signed-off-by: Lorenzo Colitti <lorenzo@google.com> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
* Linux 3.4.90Greg Kroah-Hartman2014-05-141-0/+1
| | | | | | Change-Id: I3d6e58051f7254a93fa56e89103b3ae31a454ad5 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: doc <doc.divxm@gmail.com>
* Linux 3.4.87Greg Kroah-Hartman2014-05-1410-13/+106
| | | | | | Change-Id: I390d63068bca01eb841751a85cced9e1f9d14c95 Signed-off-by: Greg Kroah-Hartman<gregkh@linuxfoundation.org> Signed-off-by: doc <doc.divxm@gmail.com>
* Linux 3.4.86Greg Kroah-Hartman2014-05-141-0/+15
| | | | | | Change-Id: I011cbad62c4fc4ba227f2b230c2d0dd3315cd1ff Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: doc <doc.divxm@gmail.com>
* Linux 3.4.84Greg Kroah-Hartman2014-05-123-5/+12
| | | | | | Change-Id: Iabbb785621ca1e2fac73594215025471c468d900 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: doc <doc.divxm@gmail.com>
* Linux 3.4.83houst0nn2014-05-126-6/+92
| | | | | | | | | | | The following commits from upstream were reverted: 1. https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=176485d0f36455db28edec5ab6581842c8d17203 2. https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=20d700bc51e717c502e51c3fb857b03ba8f99f34 Change-Id: I3743553ad5d86951a133aaf4fca3793739e0e427 Signed-off-by: houst0nn <houstonn@gmail.com> Signed-off-by: doc <doc.divxm@gmail.com>
* Linux 3.4.80Greg Kroah-Hartman2014-05-113-1/+3
| | | | | | | | | | | | -- The following commits from upstream were reverted: 1. https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=b06c0a0cc545114be1579934e90ecc477201fde7 2. https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=cf85cc93b24891b7e57b1d9939742b5774570b19 3. https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.4.y&id=cd34de10471a5ddad397739fae33555d47e53769 Change-Id: I28f2dd428b52582592881008828e3ed43d6b7ebd Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: doc <doc.divxm@gmail.com>
* Linux 3.4.77 - 26 commits squashedGreg Kroah-Hartman2014-05-112-1/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | rds: prevent BUG_ON triggered on congestion update to loopback [ Upstream commit 18fc25c94eadc52a42c025125af24657a93638c0 ] After congestion update on a local connection, when rds_ib_xmit returns less bytes than that are there in the message, rds_send_xmit calls back rds_ib_xmit with an offset that causes BUG_ON(off & RDS_FRAG_SIZE) to trigger. For a 4Kb PAGE_SIZE rds_ib_xmit returns min(8240,4096)=4096 when actually the message contains 8240 bytes. rds_send_xmit thinks there is more to send and calls rds_ib_xmit again with a data offset "off" of 4096-48(rds header) =4048 bytes thus hitting the BUG_ON(off & RDS_FRAG_SIZE) [RDS_FRAG_SIZE=4k]. The commit 6094628bfd94323fc1cea05ec2c6affd98c18f7f "rds: prevent BUG_ON triggering on congestion map updates" introduced this regression. 
That change was addressing the triggering of a different BUG_ON in rds_send_xmit() on PowerPC architecture with 64Kbytes PAGE_SIZE: BUG_ON(ret != 0 && conn->c_xmit_sg == rm->data.op_nents); This was the sequence it was going through: (rds_ib_xmit) /* Do not send cong updates to IB loopback */ if (conn->c_loopback && rm->m_inc.i_hdr.h_flags & RDS_FLAG_CONG_BITMAP) { rds_cong_map_updated(conn->c_fcong, ~(u64) 0); return sizeof(struct rds_header) + RDS_CONG_MAP_BYTES; } rds_ib_xmit returns 8240 rds_send_xmit: c_xmit_data_off = 0 + 8240 - 48 (rds header accounted only the first time) = 8192 c_xmit_data_off < 65536 (sg->length), so calls rds_ib_xmit again rds_ib_xmit returns 8240 rds_send_xmit: c_xmit_data_off = 8192 + 8240 = 16432, calls rds_ib_xmit again and so on (c_xmit_data_off 24672,32912,41152,49392,57632) rds_ib_xmit returns 8240 On this iteration this sequence causes the BUG_ON in rds_send_xmit: while (ret) { tmp = min_t(int, ret, sg->length - conn->c_xmit_data_off); [tmp = 65536 - 57632 = 7904] conn->c_xmit_data_off += tmp; [c_xmit_data_off = 57632 + 7904 = 65536] ret -= tmp; [ret = 8240 - 7904 = 336] if (conn->c_xmit_data_off == sg->length) { conn->c_xmit_data_off = 0; sg++; conn->c_xmit_sg++; BUG_ON(ret != 0 && conn->c_xmit_sg == rm->data.op_nents); [c_xmit_sg = 1, rm->data.op_nents = 1] What the current fix does: Since the congestion update over loopback is not actually transmitted as a message, all that rds_ib_xmit needs to do is let the caller think the full message has been transmitted and not return partial bytes. It will return 8240 (RDS_CONG_MAP_BYTES+48) when PAGE_SIZE is 4Kb. And 64Kb+48 when page size is 64Kb. Reported-by: Josh Hunt <joshhunt00@gmail.com> Tested-by: Honggang Li <honli@redhat.com> Acked-by: Bang Nguyen <bang.nguyen@oracle.com> Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> macvtap: Do not double-count received packets [ Upstream commit 006da7b07bc4d3a7ffabad17cf639eec6849c9dc ] Currently macvlan will count received packets after calling each vlans receive handler. Macvtap attempts to count the packet yet again when the user reads the packet from the tap socket. This code doesn't do this consistently either. Remove the counting from macvtap and let only macvlan count received packets. Signed-off-by: Vlad Yasevich <vyasevic@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> macvtap: update file current position [ Upstream commit e6ebc7f16ca1434a334647aa56399c546be4e64b ] Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> tun: update file current position [ Upstream commit d0b7da8afa079ffe018ab3e92879b7138977fc8f ] Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> macvtap: signal truncated packets [ Upstream commit ce232ce01d61b184202bb185103d119820e1260c ] macvtap_put_user() never return a value grater than iov length, this in fact bypasses the truncated checking in macvtap_recvmsg(). Fix this by always returning the size of packet plus the possible vlan header to let the trunca checking work. 
Cc: Vlad Yasevich <vyasevich@gmail.com> Cc: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> Cc: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> ipv6: don't count addrconf generated routes against gc limit [ Upstream commit a3300ef4bbb1f1e33ff0400e1e6cf7733d988f4f ] Brett Ciphery reported that new ipv6 addresses failed to get installed because the addrconf generated dsts where counted against the dst gc limit. We don't need to count those routes like we currently don't count administratively added routes. Because the max_addresses check enforces a limit on unbounded address generation first in case someone plays with router advertisments, we are still safe here. Reported-by: Brett Ciphery <brett.ciphery@windriver.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> net: drop_monitor: fix the value of maxattr [ Upstream commit d323e92cc3f4edd943610557c9ea1bb4bb5056e8 ] maxattr in genl_family should be used to save the max attribute type, but not the max command type. Drop monitor doesn't support any attributes, so we should leave it as zero. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> net: unix: allow set_peek_off to fail [ Upstream commit 12663bfc97c8b3fdb292428105dd92d563164050 ] unix_dgram_recvmsg() will hold the readlock of the socket until recv is complete. In the same time, we may try to setsockopt(SO_PEEK_OFF) which will hang until unix_dgram_recvmsg() will complete (which can take a while) without allowing us to break out of it, triggering a hung task spew. Instead, allow set_peek_off to fail, this way userspace will not hang. Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Acked-by: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> tg3: Initialize REG_BASE_ADDR at PCI config offset 120 to 0 [ Upstream commit 388d3335575f4c056dcf7138a30f1454e2145cd8 ] The new tg3 driver leaves REG_BASE_ADDR (PCI config offset 120) uninitialized. From power on reset this register may have garbage in it. The Register Base Address register defines the device local address of a register. The data pointed to by this location is read or written using the Register Data register (PCI config offset 128). When REG_BASE_ADDR has garbage any read or write of Register Data Register (PCI 128) will cause the PCI bus to lock up. The TCO watchdog will fire and bring down the system. Signed-off-by: Nat Gurumoorthy <natg@google.com> Acked-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. 
Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> netvsc: don't flush peers notifying work during setting mtu [ Upstream commit 50dc875f2e6e2e04aed3b3033eb0ac99192d6d02 ] There's a possible deadlock if we flush the peers notifying work during setting mtu: [ 22.991149] ====================================================== [ 22.991173] [ INFO: possible circular locking dependency detected ] [ 22.991198] 3.10.0-54.0.1.el7.x86_64.debug #1 Not tainted [ 22.991219] ------------------------------------------------------- [ 22.991243] ip/974 is trying to acquire lock: [ 22.991261] ((&(&net_device_ctx->dwork)->work)){+.+.+.}, at: [<ffffffff8108af95>] flush_work+0x5/0x2e0 [ 22.991307] but task is already holding lock: [ 22.991330] (rtnl_mutex){+.+.+.}, at: [<ffffffff81539deb>] rtnetlink_rcv+0x1b/0x40 [ 22.991367] which lock already depends on the new lock. [ 22.991398] the existing dependency chain (in reverse order) is: [ 22.991426] -> #1 (rtnl_mutex){+.+.+.}: [ 22.991449] [<ffffffff810dfdd9>] __lock_acquire+0xb19/0x1260 [ 22.991477] [<ffffffff810e0d12>] lock_acquire+0xa2/0x1f0 [ 22.991501] [<ffffffff81673659>] mutex_lock_nested+0x89/0x4f0 [ 22.991529] [<ffffffff815392b7>] rtnl_lock+0x17/0x20 [ 22.991552] [<ffffffff815230b2>] netdev_notify_peers+0x12/0x30 [ 22.991579] [<ffffffffa0340212>] netvsc_send_garp+0x22/0x30 [hv_netvsc] [ 22.991610] [<ffffffff8108d251>] process_one_work+0x211/0x6e0 [ 22.991637] [<ffffffff8108d83b>] worker_thread+0x11b/0x3a0 [ 22.991663] [<ffffffff81095e5d>] kthread+0xed/0x100 [ 22.991686] [<ffffffff81681c6c>] ret_from_fork+0x7c/0xb0 [ 22.991715] -> #0 ((&(&net_device_ctx->dwork)->work)){+.+.+.}: [ 22.991715] [<ffffffff810de817>] check_prevs_add+0x967/0x970 [ 22.991715] [<ffffffff810dfdd9>] __lock_acquire+0xb19/0x1260 [ 22.991715] [<ffffffff810e0d12>] lock_acquire+0xa2/0x1f0 [ 22.991715] [<ffffffff8108afde>] flush_work+0x4e/0x2e0 [ 22.991715] [<ffffffff8108e1b5>] __cancel_work_timer+0x95/0x130 [ 22.991715] [<ffffffff8108e303>] cancel_delayed_work_sync+0x13/0x20 [ 22.991715] [<ffffffffa03404e4>] netvsc_change_mtu+0x84/0x200 [hv_netvsc] [ 22.991715] [<ffffffff815233d4>] dev_set_mtu+0x34/0x80 [ 22.991715] [<ffffffff8153bc2a>] do_setlink+0x23a/0xa00 [ 22.991715] [<ffffffff8153d054>] rtnl_newlink+0x394/0x5e0 [ 22.991715] [<ffffffff81539eac>] rtnetlink_rcv_msg+0x9c/0x260 [ 22.991715] [<ffffffff8155cdd9>] netlink_rcv_skb+0xa9/0xc0 [ 22.991715] [<ffffffff81539dfa>] rtnetlink_rcv+0x2a/0x40 [ 22.991715] [<ffffffff8155c41d>] netlink_unicast+0xdd/0x190 [ 22.991715] [<ffffffff8155c807>] netlink_sendmsg+0x337/0x750 [ 22.991715] [<ffffffff8150d219>] sock_sendmsg+0x99/0xd0 [ 22.991715] [<ffffffff8150d63e>] ___sys_sendmsg+0x39e/0x3b0 [ 22.991715] [<ffffffff8150eba2>] __sys_sendmsg+0x42/0x80 [ 22.991715] [<ffffffff8150ebf2>] SyS_sendmsg+0x12/0x20 [ 22.991715] [<ffffffff81681d19>] system_call_fastpath+0x16/0x1b This is because we hold the rtnl_lock() before ndo_change_mtu() and try to flush the work in netvsc_change_mtu(), in the mean time, netdev_notify_peers() may be called from worker and also trying to hold the rtnl_lock. This will lead the flush won't succeed forever. Solve this by not canceling and flushing the work, this is safe because the transmission done by NETDEV_NOTIFY_PEERS was synchronized with the netif_tx_disable() called by netvsc_change_mtu(). Reported-by: Yaju Cao <yacao@redhat.com> Tested-by: Yaju Cao <yacao@redhat.com> Cc: K. Y. 
Srinivasan <kys@microsoft.com> Cc: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> net: unix: allow bind to fail on mutex lock [ Upstream commit 37ab4fa7844a044dc21fde45e2a0fc2f3c3b6490 ] This is similar to the set_peek_off patch where calling bind while the socket is stuck in unix_dgram_recvmsg() will block and cause a hung task spew after a while. This is also the last place that did a straightforward mutex_lock(), so there shouldn't be any more of these patches. Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> net: inet_diag: zero out uninitialized idiag_{src,dst} fields [ Upstream commit b1aac815c0891fe4a55a6b0b715910142227700f ] Jakub reported while working with nlmon netlink sniffer that parts of the inet_diag_sockid are not initialized when r->idiag_family != AF_INET6. That is, fields of r->id.idiag_src[1 ... 3], r->id.idiag_dst[1 ... 3]. In fact, it seems that we can leak 6 * sizeof(u32) byte of kernel [slab] memory through this. At least, in udp_dump_one(), we allocate a skb in ... rep = nlmsg_new(sizeof(struct inet_diag_msg) + ..., GFP_KERNEL); ... and then pass that to inet_sk_diag_fill() that puts the whole struct inet_diag_msg into the skb, where we only fill out r->id.idiag_src[0], r->id.idiag_dst[0] and leave the rest untouched: r->id.idiag_src[0] = inet->inet_rcv_saddr; r->id.idiag_dst[0] = inet->inet_daddr; struct inet_diag_msg embeds struct inet_diag_sockid that is correctly / fully filled out in IPv6 case, but for IPv4 not. So just zero them out by using plain memset (for this little amount of bytes it's probably not worth the extra check for idiag_family == AF_INET). Similarly, fix also other places where we fill that out. Reported-by: Jakub Zawadzki <darkjames-ws@darkjames.pl> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> drivers/net/hamradio: Integer overflow in hdlcdrv_ioctl() [ Upstream commit e9db5c21d3646a6454fcd04938dd215ac3ab620a ] The local variable 'bi' comes from userspace. If userspace passed a large number to 'bi.data.calibrate', there would be an integer overflow in the following line: s->hdlctx.calibrate = bi.data.calibrate * s->par.bitrate / 16; Signed-off-by: Wenliang Fan <fanwlexca@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> hamradio/yam: fix info leak in ioctl [ Upstream commit 8e3fbf870481eb53b2d3a322d1fc395ad8b367ed ] The yam_ioctl() code fails to initialise the cmd field of the struct yamdrv_ioctl_cfg. Add an explicit memset(0) before filling the structure to avoid the 4-byte info leak. Signed-off-by: Salva Peiró <speiro@ai2.upv.es> Signed-off-by: David S. 
Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> rds: prevent dereference of a NULL device [ Upstream commit c2349758acf1874e4c2b93fe41d072336f1a31d0 ] Binding might result in a NULL device, which is dereferenced causing this BUG: [ 1317.260548] BUG: unable to handle kernel NULL pointer dereference at 000000000000097 4 [ 1317.261847] IP: [<ffffffff84225f52>] rds_ib_laddr_check+0x82/0x110 [ 1317.263315] PGD 418bcb067 PUD 3ceb21067 PMD 0 [ 1317.263502] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC [ 1317.264179] Dumping ftrace buffer: [ 1317.264774] (ftrace buffer empty) [ 1317.265220] Modules linked in: [ 1317.265824] CPU: 4 PID: 836 Comm: trinity-child46 Tainted: G W 3.13.0-rc4- next-20131218-sasha-00013-g2cebb9b-dirty #4159 [ 1317.267415] task: ffff8803ddf33000 ti: ffff8803cd31a000 task.ti: ffff8803cd31a000 [ 1317.268399] RIP: 0010:[<ffffffff84225f52>] [<ffffffff84225f52>] rds_ib_laddr_check+ 0x82/0x110 [ 1317.269670] RSP: 0000:ffff8803cd31bdf8 EFLAGS: 00010246 [ 1317.270230] RAX: 0000000000000000 RBX: ffff88020b0dd388 RCX: 0000000000000000 [ 1317.270230] RDX: ffffffff8439822e RSI: 00000000000c000a RDI: 0000000000000286 [ 1317.270230] RBP: ffff8803cd31be38 R08: 0000000000000000 R09: 0000000000000000 [ 1317.270230] R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000000 [ 1317.270230] R13: 0000000054086700 R14: 0000000000a25de0 R15: 0000000000000031 [ 1317.270230] FS: 00007ff40251d700(0000) GS:ffff88022e200000(0000) knlGS:000000000000 0000 [ 1317.270230] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b [ 1317.270230] CR2: 0000000000000974 CR3: 00000003cd478000 CR4: 00000000000006e0 [ 1317.270230] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 1317.270230] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000090602 [ 1317.270230] Stack: [ 1317.270230] 0000000054086700 5408670000a25de0 5408670000000002 0000000000000000 [ 1317.270230] ffffffff84223542 00000000ea54c767 0000000000000000 ffffffff86d26160 [ 1317.270230] ffff8803cd31be68 ffffffff84223556 ffff8803cd31beb8 ffff8800c6765280 [ 1317.270230] Call Trace: [ 1317.270230] [<ffffffff84223542>] ? rds_trans_get_preferred+0x42/0xa0 [ 1317.270230] [<ffffffff84223556>] rds_trans_get_preferred+0x56/0xa0 [ 1317.270230] [<ffffffff8421c9c3>] rds_bind+0x73/0xf0 [ 1317.270230] [<ffffffff83e4ce62>] SYSC_bind+0x92/0xf0 [ 1317.270230] [<ffffffff812493f8>] ? context_tracking_user_exit+0xb8/0x1d0 [ 1317.270230] [<ffffffff8119313d>] ? trace_hardirqs_on+0xd/0x10 [ 1317.270230] [<ffffffff8107a852>] ? syscall_trace_enter+0x32/0x290 [ 1317.270230] [<ffffffff83e4cece>] SyS_bind+0xe/0x10 [ 1317.270230] [<ffffffff843a6ad0>] tracesys+0xdd/0xe2 [ 1317.270230] Code: 00 8b 45 cc 48 8d 75 d0 48 c7 45 d8 00 00 00 00 66 c7 45 d0 02 00 89 45 d4 48 89 df e8 78 49 76 ff 41 89 c4 85 c0 75 0c 48 8b 03 <80> b8 74 09 00 00 01 7 4 06 41 bc 9d ff ff ff f6 05 2a b6 c2 02 [ 1317.270230] RIP [<ffffffff84225f52>] rds_ib_laddr_check+0x82/0x110 [ 1317.270230] RSP <ffff8803cd31bdf8> [ 1317.270230] CR2: 0000000000000974 Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> net: rose: restore old recvmsg behavior [ Upstream commit f81152e35001e91997ec74a7b4e040e6ab0acccf ] recvmsg handler in net/rose/af_rose.c performs size-check ->msg_namelen. 
After commit f3d3342602f8bcbf37d7c46641cb9bca7618eb1c (net: rework recvmsg handler msg_name and msg_namelen logic), we now always take the else branch due to namelen being initialized to 0. Digging in netdev-vger-cvs git repo shows that msg_namelen was initialized with a fixed-size since at least 1995, so the else branch was never taken. Compile tested only. Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> vlan: Fix header ops passthru when doing TX VLAN offload. [ Upstream commit 2205369a314e12fcec4781cc73ac9c08fc2b47de ] When the vlan code detects that the real device can do TX VLAN offloads in hardware, it tries to arrange for the real device's header_ops to be invoked directly. But it does so illegally, by simply hooking the real device's header_ops up to the VLAN device. This doesn't work because we will end up invoking a set of header_ops routines which expect a device type which matches the real device, but will see a VLAN device instead. Fix this by providing a pass-thru set of header_ops which will arrange to pass the proper real device instead. To facilitate this add a dev_rebuild_header(). There are implementations which provide a ->cache and ->create but not a ->rebuild (f.e. PLIP). So we need a helper function just like dev_hard_header() to avoid crashes. Use this helper in the one existing place where the header_ops->rebuild was being invoked, the neighbour code. With lots of help from Florian Westphal. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> net: llc: fix use after free in llc_ui_recvmsg [ Upstream commit 4d231b76eef6c4a6bd9c96769e191517765942cb ] While commit 30a584d944fb fixes datagram interface in LLC, a use after free bug has been introduced for SOCK_STREAM sockets that do not make use of MSG_PEEK. The flow is as follow ... if (!(flags & MSG_PEEK)) { ... sk_eat_skb(sk, skb, false); ... } ... if (used + offset < skb->len) continue; ... where sk_eat_skb() calls __kfree_skb(). Therefore, cache original length and work on skb_len to check partial reads. Fixes: 30a584d944fb ("[LLX]: SOCK_DGRAM interface fixes") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Stephen Hemminger <stephen@networkplumber.org> Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> bridge: use spin_lock_bh() in br_multicast_set_hash_max [ Upstream commit fe0d692bbc645786bce1a98439e548ae619269f5 ] br_multicast_set_hash_max() is called from process context in net/bridge/br_sysfs_br.c by the sysfs store_hash_max() function. br_multicast_set_hash_max() calls spin_lock(&br->multicast_lock), which can deadlock the CPU if a softirq that also tries to take the same lock interrupts br_multicast_set_hash_max() while the lock is held . This can happen quite easily when any of the bridge multicast timers expire, which try to take the same lock. The fix here is to use spin_lock_bh(), preventing other softirqs from executing on this CPU. Steps to reproduce: 1. Create a bridge with several interfaces (I used 4). 2. Set the "multicast query interval" to a low number, like 2. 3. Enable the bridge as a multicast querier. 4. Repeatedly set the bridge hash_max parameter via sysfs. 
# brctl addbr br0 # brctl addif br0 eth1 eth2 eth3 eth4 # brctl setmcqi br0 2 # brctl setmcquerier br0 1 # while true ; do echo 4096 > /sys/class/net/br0/bridge/hash_max; done Signed-off-by: Curt Brune <curt@cumulusnetworks.com> Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> ARM: shmobile: mackerel: Fix coherent DMA mask commit b6328a6b7ba57fc84c38248f6f0e387e1170f1a8 upstream. Commit 4dcfa60071b3d23f0181f27d8519f12e37cefbb9 ("ARM: DMA-API: better handing of DMA masks for coherent allocations") added an additional check to the coherent DMA mask that results in an error when the mask is larger than what dma_addr_t can address. Set the LCDC coherent DMA mask to DMA_BIT_MASK(32) instead of ~0 to fix the problem. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> x86, fpu, amd: Clear exceptions in AMD FXSAVE workaround commit 26bef1318adc1b3a530ecc807ef99346db2aa8b0 upstream. Before we do an EMMS in the AMD FXSAVE information leak workaround we need to clear any pending exceptions, otherwise we trap with a floating-point exception inside this code. Reported-by: halfdog <me@halfdog.net> Tested-by: Borislav Petkov <bp@suse.de> Link: http://lkml.kernel.org/r/CA%2B55aFxQnY_PCG_n4=0w-VG=YLXL-yr7oMxyy0WU2gCBAf3ydg@mail.gmail.com Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> sched: Fix race on toggling cfs_bandwidth_used commit 1ee14e6c8cddeeb8a490d7b54cd9016e4bb900b4 upstream. When we transition cfs_bandwidth_used to false, any currently throttled groups will incorrectly return false from cfs_rq_throttled. While tg_set_cfs_bandwidth will unthrottle them eventually, currently running code (including at least dequeue_task_fair and distribute_cfs_runtime) will cause errors. Fix this by turning off cfs_bandwidth_used only after unthrottling all cfs_rqs. Tested: toggle bandwidth back and forth on a loaded cgroup. Caused crashes in minutes without the patch, hasn't crashed with it. Signed-off-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: pjt@google.com Link: http://lkml.kernel.org/r/20131016181611.22647.80365.stgit@sword-of-the-dawn.mtv.corp.google.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chris J Arges <chris.j.arges@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> sched: Fix cfs_bandwidth misuse of hrtimer_expires_remaining commit db06e78cc13d70f10877e0557becc88ab3ad2be8 upstream. hrtimer_expires_remaining does not take internal hrtimer locks and thus must be guarded against concurrent __hrtimer_start_range_ns (but returning HRTIMER_RESTART is safe). Use cfs_b->lock to make it safe. Signed-off-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: pjt@google.com Link: http://lkml.kernel.org/r/20131016181617.22647.73829.stgit@sword-of-the-dawn.mtv.corp.google.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chris J Arges <chris.j.arges@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> sched: Fix hrtimer_cancel()/rq->lock deadlock commit 927b54fccbf04207ec92f669dce6806848cbec7d upstream. __start_cfs_bandwidth calls hrtimer_cancel while holding rq->lock, waiting for the hrtimer to finish. 
However, if sched_cfs_period_timer runs for another loop iteration, the hrtimer can attempt to take rq->lock, resulting in deadlock. Fix this by ensuring that cfs_b->timer_active is cleared only if the _latest_ call to do_sched_cfs_period_timer is returning as idle. Then __start_cfs_bandwidth can just call hrtimer_try_to_cancel and wait for that to succeed or timer_active == 1. Signed-off-by: Ben Segall <bsegall@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: pjt@google.com Link: http://lkml.kernel.org/r/20131016181622.22647.16643.stgit@sword-of-the-dawn.mtv.corp.google.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chris J Arges <chris.j.arges@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> sched: Guarantee new group-entities always have weight commit 0ac9b1c21874d2490331233b3242085f8151e166 upstream. Currently, group entity load-weights are initialized to zero. This admits some races with respect to the first time they are re-weighted in earlty use. ( Let g[x] denote the se for "g" on cpu "x". ) Suppose that we have root->a and that a enters a throttled state, immediately followed by a[0]->t1 (the only task running on cpu[0]) blocking: put_prev_task(group_cfs_rq(a[0]), t1) put_prev_entity(..., t1) check_cfs_rq_runtime(group_cfs_rq(a[0])) throttle_cfs_rq(group_cfs_rq(a[0])) Then, before unthrottling occurs, let a[0]->b[0]->t2 wake for the first time: enqueue_task_fair(rq[0], t2) enqueue_entity(group_cfs_rq(b[0]), t2) enqueue_entity_load_avg(group_cfs_rq(b[0]), t2) account_entity_enqueue(group_cfs_ra(b[0]), t2) update_cfs_shares(group_cfs_rq(b[0])) < skipped because b is part of a throttled hierarchy > enqueue_entity(group_cfs_rq(a[0]), b[0]) ... We now have b[0] enqueued, yet group_cfs_rq(a[0])->load.weight == 0 which violates invariants in several code-paths. Eliminate the possibility of this by initializing group entity weight. Conflicts: Makefile kernel/sched/fair.c net/rose/af_rose.c Signed-off-by: Paul Turner <pjt@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20131016181627.22647.47543.stgit@sword-of-the-dawn.mtv.corp.google.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chris J Arges <chris.j.arges@canonical.com> Change-Id: Iad3ac9d3bcc32c4b9edc83545f9cf81b95bb2af3 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
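One of the squashed fixes above ("bridge: use spin_lock_bh() in br_multicast_set_hash_max") is a textbook process-context vs. softirq locking bug; the following is a hedged, self-contained sketch of the pattern with made-up names, not the bridge code itself:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(mcast_lock);	/* stands in for br->multicast_lock */

	/* sysfs store path: process context.  A plain spin_lock() here can
	 * deadlock if a softirq on the same CPU tries to take mcast_lock
	 * while we hold it; spin_lock_bh() disables softirqs for the
	 * critical section and closes that window. */
	static void store_hash_max(unsigned long val)
	{
		spin_lock_bh(&mcast_lock);
		/* ... resize/rehash under the lock ... */
		spin_unlock_bh(&mcast_lock);
	}

	/* timer callback: runs in softirq context on some CPU */
	static void mcast_timer_expired(unsigned long data)
	{
		spin_lock(&mcast_lock);
		/* ... per-timer bookkeeping ... */
		spin_unlock(&mcast_lock);
	}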
* Implement kexec-hardbootVojtech Bocek2014-02-021-3/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | "Allows hard booting (i.e., with a full hardware reboot) to a kernel previously loaded in memory by kexec. This works around the problem of soft-booted kernel hangs due to improper device shutdown and/or reinitialization." More info in /arch/arm/Kconfig. Original author: Mike Kasick <mike@kasick.org> Vojtech Bocek <vbocek@gmail.com>: I've ported it to the M7; it is based off my flo port, which is based off my grouper port, which is based off Asus TF201 patches ported by Jens Andersen <jens.andersen@gmail.com>. I've moved atags copying from guest to the host kernel, which means there is no need to patch the guest kernel, assuming the --mem-min in the kexec call is within the first 256MB of System RAM, otherwise it will take a long time to load. I've also fixed the /proc/atags entry, which would give the kexec-tools userspace binary only the first 1024 bytes of atags, see arch/arm/kernel/atags.c for more details. Other than that, memory-reservation code for the hardboot page and some assembler to do the watchdog reset on the MSM chip are new for this device. The M7 also needed a bigger BOOT_PARAMS_SIZE, because it has more atags. A modified kexec-tools binary is required. See: https://lkml.org/lkml/2013/9/27/517 https://github.com/Tasssadar/kexec-tools/commit/c6844e1ddb13a6b60cfefcb01c3843da97d6174c https://github.com/Tasssadar/kexec-tools/commit/210c3b8a4aab69778d93d08b7ff2bfd9c1146cef Conflicts: arch/arm/boot/compressed/head.S Change-Id: If54a4f329fd4944bb7e09b040ac4cdb4eccbb5e3 Signed-off-by: Vojtech Bocek <vbocek@gmail.com> Signed-off-by: flar2 <asegaert@gmail.com>
* blinking buttons: bln for m7 by @tbaldenflar22014-01-311-0/+1
| | | | | | | | | Signed-off-by: flar2 <asegaert@gmail.com> Conflicts: include/linux/synaptics_i2c_rmi.h Change-Id: I45b79a603f6089028f15c940ebd3c10e1d358e24
* update wireless, sound and misc stuff from GPEEthan Chen2014-01-3119-21/+673
| | | | | | | | | | | | | | | | | | | | | | HTC kernel version: m7-kk-3.4.10-17db3b4 Conflicts: arch/arm/configs/cyanogenmod_m7_defconfig arch/arm/mach-msm/Kconfig arch/arm/mach-msm/htc_battery_8960.c drivers/i2c/chips/Makefile drivers/i2c/chips/bma250_bosch.c drivers/i2c/chips/cm3629.c drivers/i2c/chips/rt5501.c drivers/input/misc/gpio_input.c drivers/input/touchscreen/Kconfig drivers/input/touchscreen/synaptics_3200.c drivers/leds/leds-pm8921.c fs/Kconfig include/linux/akm8963_nst.h include/linux/pn544.h Change-Id: I1d5d31f04febb50bf801e21cd8c074bcfc395bf3
* Merge "drivers/leds: update to latest GPE edition (temp removal of BLN ↵n3ocort3x2014-01-311-1/+5
|\ | | | | | | notification)" into kitkat
| * drivers/leds: update to latest GPE edition (temp removal of BLN notification)n3ocort3x2014-01-301-1/+5
| | | | | | | | Change-Id: Ic140f2f2c870d27697cf1788803f6b6bf6b4e711
* | update I2C drivers to latest GPE Edition sourcen3ocort3x2014-01-291-3/+1
| | | | | | | | Change-Id: I6c993b627d52ac40c177c0ad229d4805b9f274eb
* | updated i2c drivers, removed flick2wake, pick2wake, pocket detectionn3ocort3x2014-01-296-19/+90
|/ | | | Change-Id: I334d28a1643f78f7180cd2ae3af8d516ddc9accc
* iommu/core: pass a user-provided token to fault handlersOhad Ben-Cohen2013-12-131-4/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Sometimes a single IOMMU user may have to deal with several different IOMMU devices (e.g. remoteproc). When an IOMMU fault happens, such users have to regain their context in order to deal with the fault. Users can't use the private fields of either the iommu_domain or the IOMMU device, because those are already used by the IOMMU core and low-level driver (respectively). This patch simply allows users to pass a private token (most notably their own context pointer) to iommu_set_fault_handler(), and then makes sure it is provided back to the users whenever an IOMMU fault happens. The patch also adapts remoteproc to the new fault handling interface, but the real functionality using this (recovery of remote processors) will only be added later in a subsequent patch set. Change-Id: Ic04659686e72838a0db518e9303dd037191e3879 Cc: Fernando Guzman Lugo <fernando.lugo@ti.com> Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> [ohaugan@codeaurora.org: Resolved compilation and merge issues] Signed-off-by: Olav Haugan <ohaugan@codeaurora.org> Conflicts: drivers/video/msm/mdss/mdss_mdp.c
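A rough sketch of how a caller might use this interface. The struct my_rproc context and the function names are hypothetical; only iommu_set_fault_handler() and the five-argument handler signature (with the token passed back last) come from the patch:

	struct my_rproc {			/* hypothetical user context */
		const char *name;
	};

	static int my_iommu_fault(struct iommu_domain *domain, struct device *dev,
				  unsigned long iova, int flags, void *token)
	{
		struct my_rproc *rproc = token;	/* the context registered below */

		dev_err(dev, "iommu fault on %s: iova 0x%lx flags 0x%x\n",
			rproc->name, iova, flags);
		/* not handled here; let the low-level driver do its default reporting */
		return -ENOSYS;
	}

	static void my_enable_iommu(struct iommu_domain *domain, struct my_rproc *rproc)
	{
		/* the token (rproc) is handed back verbatim on every fault */
		iommu_set_fault_handler(domain, my_iommu_fault, rproc);
	}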
* gpu: msm: Add new Adreno driverSteve Kondik2013-12-131-19/+117
| | | | | | * Temporary place for this. Change-Id: I83b5d75fbd201c352d011ed43f21ebe3576e058c
* kref: Implement kref_get_unless_zero v3Thomas Hellstrom2013-12-131-1/+23
| | | | | | | | | | | | | | | | | | | | | | | This function is intended to simplify locking around refcounting for objects that can be looked up from a lookup structure, and which are removed from that lookup structure in the object destructor. Operations on such objects require at least a read lock around lookup + kref_get, and a write lock around kref_put + remove from lookup structure. Furthermore, RCU implementations become extremely tricky. With a lookup followed by a kref_get_unless_zero *with return value check*, locking in the kref_put path can be deferred to the actual removal from the lookup structure, and RCU lookups become trivial. v2: Formatting fixes. v3: Invert the return value. Change-Id: Ibcb524294c2cebe1d832e77714895822adc1c89a Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Git-commit: 4b20db3de8dab005b07c74161cb041db8c5ff3a7 Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git [shrenujb@codeaurora.org: resolve trivial merge conflicts] Signed-off-by: Shrenuj Bansal <shrenujb@codeaurora.org>
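A minimal sketch of the lookup pattern this enables; the object, table, and lock names are illustrative, not from the patch. The lookup only takes a reference while the object is still in the table, so the put path can defer the table removal to the release callback:

	struct my_obj {
		struct kref ref;
		struct list_head node;
		int id;
	};

	static LIST_HEAD(obj_table);
	static DEFINE_SPINLOCK(table_lock);

	static struct my_obj *obj_lookup(int id)
	{
		struct my_obj *obj, *found = NULL;

		spin_lock(&table_lock);
		list_for_each_entry(obj, &obj_table, node) {
			/* only hand out objects whose refcount is still non-zero */
			if (obj->id == id && kref_get_unless_zero(&obj->ref)) {
				found = obj;
				break;
			}
		}
		spin_unlock(&table_lock);
		return found;
	}

	static void obj_release(struct kref *ref)
	{
		struct my_obj *obj = container_of(ref, struct my_obj, ref);

		/* removal from the lookup structure is deferred to the put path */
		spin_lock(&table_lock);
		list_del(&obj->node);
		spin_unlock(&table_lock);
		kfree(obj);
	}

	/* callers drop their reference with: kref_put(&obj->ref, obj_release); */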
* msm: vidc: Validate userspace buffer countPachika, Vikas Reddy2013-12-131-0/+1
| | | | | | | | | Make sure the buffer count is less than the maximum limit to avoid structure overflow errors. Change-Id: Icf3850de36325637ae43ac95f1c8f0f63e201d31 CRs-fixed: 563694 Signed-off-by: Pachika, Vikas Reddy <vpachi@codeaurora.org>
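A hedged sketch of the kind of check described; the names and the limit are made up for illustration and are not the msm vidc code:

	#define MY_MAX_BUFFERS	32	/* illustrative limit only */

	struct my_buf_table {
		u32 count;
		u32 handles[MY_MAX_BUFFERS];
	};

	static int my_set_buffer_count(struct my_buf_table *tbl, u32 user_count)
	{
		/* reject out-of-range counts before they can index past the array */
		if (user_count > MY_MAX_BUFFERS)
			return -EINVAL;

		tbl->count = user_count;
		return 0;
	}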
* msm: vidc: Initialize kernel space stack variablesDeepak Verma2013-12-131-1/+1
| | | | | | | | | | | | | This change initializes kernel space stack variables that are passed between kernel space and user space using ioctls. Leaving these variables uninitialized may leak memory contents from the kernel stack to user space. Change-Id: Icb195470545ee48b55671ac09798610178e833e1 CRs-fixed: 556771,563420 Signed-off-by: Deepak Verma <dverma@codeaurora.org>
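A short sketch of the pattern; the ioctl reply struct and function are hypothetical, and only copy_to_user() is a real kernel API here:

	struct my_info {			/* hypothetical ioctl reply */
		u32 version;
		u32 flags;
		u32 reserved[4];	/* would carry stale stack bytes if left uninitialized */
	};

	static long my_get_info(void __user *arg)
	{
		struct my_info info;

		/* clear the whole on-stack struct, padding included */
		memset(&info, 0, sizeof(info));
		info.version = 1;

		if (copy_to_user(arg, &info, sizeof(info)))
			return -EFAULT;
		return 0;
	}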