Merge tag 'net-5.16-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf, can and netfilter.

  Current release - regressions:

   - bpf, sockmap: re-evaluate proto ops when psock is removed from
     sockmap

  Current release - new code bugs:

   - bpf: fix bpf_check_mod_kfunc_call for built-in modules

   - ice: fixes for TC classifier offloads

   - vrf: don't run conntrack on vrf with !dflt qdisc

  Previous releases - regressions:

   - bpf: fix the off-by-two error in range markings

   - seg6: fix the iif in the IPv6 socket control block

   - devlink: fix netns refcount leak in devlink_nl_cmd_reload()

   - dsa: mv88e6xxx: fix "don't use PHY_DETECT on internal PHY's"

   - dsa: mv88e6xxx: allow use of PHYs on CPU and DSA ports

  Previous releases - always broken:

   - ethtool: do not perform operations on net devices being
     unregistered

   - udp: use datalen to cap max gso segments

   - ice: fix races in stats collection

   - fec: only clear interrupt of handling queue in fec_enet_rx_queue()

   - m_can: pci: fix incorrect reference clock rate

   - m_can: disable and ignore ELO interrupt

   - mvpp2: fix XDP rx queues registering

  Misc:

   - treewide: add missing includes masked by cgroup -> bpf.h
     dependency"

* tag 'net-5.16-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (82 commits)
  net: dsa: mv88e6xxx: allow use of PHYs on CPU and DSA ports
  net: wwan: iosm: fixes unable to send AT command during mbim tx
  net: wwan: iosm: fixes net interface nonfunctional after fw flash
  net: wwan: iosm: fixes unnecessary doorbell send
  net: dsa: felix: Fix memory leak in felix_setup_mmio_filtering
  MAINTAINERS: s390/net: remove myself as maintainer
  net/sched: fq_pie: prevent dismantle issue
  net: mana: Fix memory leak in mana_hwc_create_wq
  seg6: fix the iif in the IPv6 socket control block
  nfp: Fix memory leak in nfp_cpp_area_cache_add()
  nfc: fix potential NULL pointer deref in nfc_genl_dump_ses_done
  nfc: fix segfault in nfc_genl_dump_devices_done
  udp: using datalen to cap max gso segments
  net: dsa: mv88e6xxx: error handling for serdes_power functions
  can: kvaser_usb: get CAN clock frequency from device
  can: kvaser_pciefd: kvaser_pciefd_rx_error_frame(): increase correct stats->{rx,tx}_errors counter
  net: mvpp2: fix XDP rx queues registering
  vmxnet3: fix minimum vectors alloc issue
  net, neigh: clear whole pneigh_entry at alloc time
  net: dsa: mv88e6xxx: fix "don't use PHY_DETECT on internal PHY's"
  ...
commit ded746bfc9
Linus Torvalds, 2021-12-09 11:26:44 -08:00
100 changed files with 1370 additions and 383 deletions

@@ -439,11 +439,9 @@ preemption. The following substitution works on both kernels::
         spin_lock(&p->lock);
         p->count += this_cpu_read(var2);
 
-On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
-which makes the above code fully equivalent. On a PREEMPT_RT kernel
 migrate_disable() ensures that the task is pinned on the current CPU which
 in turn guarantees that the per-CPU access to var1 and var2 are staying on
-the same CPU.
+the same CPU while the task remains preemptible.
 
 The migrate_disable() substitution is not valid for the following
 scenario::

@@ -456,9 +454,8 @@ scenario::
     p = this_cpu_ptr(&var1);
     p->val = func2();
 
-While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
-here migrate_disable() does not protect against reentrancy from a
-preempting task. A correct substitution for this case is::
+This breaks because migrate_disable() does not protect against reentrancy from
+a preempting task. A correct substitution for this case is::
 
   func()
   {
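For illustration only (the hunk's corrected example is truncated above, and the
document's own substitution may differ): one way to close the reentrancy hole
the text describes is the kernel's local_lock API. A minimal sketch under that
assumption, with func2() as a hypothetical helper from the example:

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  struct foo {
          local_lock_t lock;
          int val;
  };

  static DEFINE_PER_CPU(struct foo, var1) = {
          .lock = INIT_LOCAL_LOCK(lock),
  };

  static int func2(void) { return 1; }    /* hypothetical helper */

  static void func(void)
  {
          struct foo *p;

          /* Excludes reentrancy from a preempting task on this CPU:
           * maps to preempt_disable() on non-PREEMPT_RT and to a
           * per-CPU spinlock on PREEMPT_RT.
           */
          local_lock(&var1.lock);
          p = this_cpu_ptr(&var1);
          p->val = func2();
          local_unlock(&var1.lock);
  }

The point of local_lock() is exactly that it degrades to the cheap
preempt_disable() form where that is sufficient, while staying correct on
PREEMPT_RT where preempt_disable() no longer excludes preempting tasks.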

@@ -12180,8 +12180,8 @@ F: drivers/net/ethernet/mellanox/mlx5/core/fpga/*
 F:	include/linux/mlx5/mlx5_ifc_fpga.h
 
 MELLANOX ETHERNET SWITCH DRIVERS
-M:	Jiri Pirko <jiri@nvidia.com>
 M:	Ido Schimmel <idosch@nvidia.com>
+M:	Petr Machata <petrm@nvidia.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 W:	http://www.mellanox.com

@@ -16629,7 +16629,6 @@ W: http://www.ibm.com/developerworks/linux/linux390/
 F:	drivers/iommu/s390-iommu.c
 
 S390 IUCV NETWORK LAYER
-M:	Julian Wiedmann <jwi@linux.ibm.com>
 M:	Alexandra Winter <wintera@linux.ibm.com>
 M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org

@@ -16641,7 +16640,6 @@ F: include/net/iucv/
 F:	net/iucv/
 
 S390 NETWORK DRIVERS
-M:	Julian Wiedmann <jwi@linux.ibm.com>
 M:	Alexandra Winter <wintera@linux.ibm.com>
 M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org

@@ -98,7 +98,7 @@ do { \
 #define emit(...) __emit(__VA_ARGS__)
 
 /* Workaround for R10000 ll/sc errata */
-#ifdef CONFIG_WAR_R10000
+#ifdef CONFIG_WAR_R10000_LLSC
 #define LLSC_beqz	beqzl
 #else
 #define LLSC_beqz	beqz

@@ -15,6 +15,7 @@
 #include <linux/falloc.h>
 #include <linux/suspend.h>
 #include <linux/fs.h>
+#include <linux/module.h>
 #include "blk.h"
 
 static inline struct inode *bdev_file_inode(struct file *file)

@@ -9,6 +9,7 @@
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
+#include <linux/module.h>
 
 #ifdef CONFIG_X86
 #include <asm/set_memory.h>

@@ -6,6 +6,7 @@
 
 #include <linux/slab.h> /* fault-inject.h is not standalone! */
 #include <linux/fault-inject.h>
+#include <linux/sched/mm.h>
 
 #include "gem/i915_gem_lmem.h"
 #include "i915_trace.h"

@@ -29,6 +29,7 @@
 #include <linux/sched.h>
 #include <linux/sched/clock.h>
 #include <linux/sched/signal.h>
+#include <linux/sched/mm.h>
 
 #include "gem/i915_gem_context.h"
 #include "gt/intel_breadcrumbs.h"

@@ -4,6 +4,7 @@
 #include <linux/regulator/consumer.h>
 #include <linux/reset.h>
 #include <linux/clk.h>
+#include <linux/slab.h>
 #include <linux/dma-mapping.h>
 #include <linux/platform_device.h>
 

@@ -5,6 +5,7 @@
  */
 
 #include <linux/vmalloc.h>
+#include <linux/sched/mm.h>
 
 #include "msm_drv.h"
 #include "msm_gem.h"

@@ -34,6 +34,7 @@
 #include <linux/sched.h>
 #include <linux/shmem_fs.h>
 #include <linux/file.h>
+#include <linux/module.h>
 
 #include <drm/drm_cache.h>
 #include <drm/ttm/ttm_bo_driver.h>

@@ -1501,14 +1501,14 @@ void bond_alb_monitor(struct work_struct *work)
 	struct slave *slave;
 
 	if (!bond_has_slaves(bond)) {
-		bond_info->tx_rebalance_counter = 0;
+		atomic_set(&bond_info->tx_rebalance_counter, 0);
 		bond_info->lp_counter = 0;
 		goto re_arm;
 	}
 
 	rcu_read_lock();
 
-	bond_info->tx_rebalance_counter++;
+	atomic_inc(&bond_info->tx_rebalance_counter);
 	bond_info->lp_counter++;
 
 	/* send learning packets */

@@ -1530,7 +1530,7 @@ void bond_alb_monitor(struct work_struct *work)
 	}
 
 	/* rebalance tx traffic */
-	if (bond_info->tx_rebalance_counter >= BOND_TLB_REBALANCE_TICKS) {
+	if (atomic_read(&bond_info->tx_rebalance_counter) >= BOND_TLB_REBALANCE_TICKS) {
 		bond_for_each_slave_rcu(bond, slave, iter) {
 			tlb_clear_slave(bond, slave, 1);
 			if (slave == rcu_access_pointer(bond->curr_active_slave)) {

@@ -1540,7 +1540,7 @@ void bond_alb_monitor(struct work_struct *work)
 				bond_info->unbalanced_load = 0;
 			}
 		}
-		bond_info->tx_rebalance_counter = 0;
+		atomic_set(&bond_info->tx_rebalance_counter, 0);
 	}
 
 	if (bond_info->rlb_enabled) {

@@ -1610,7 +1610,8 @@ int bond_alb_init_slave(struct bonding *bond, struct slave *slave)
 	tlb_init_slave(slave);
 
 	/* order a rebalance ASAP */
-	bond->alb_info.tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS;
+	atomic_set(&bond->alb_info.tx_rebalance_counter,
+		   BOND_TLB_REBALANCE_TICKS);
 
 	if (bond->alb_info.rlb_enabled)
 		bond->alb_info.rlb_rebalance = 1;

@@ -1647,7 +1648,8 @@ void bond_alb_handle_link_change(struct bonding *bond, struct slave *slave, char
 		rlb_clear_slave(bond, slave);
 	} else if (link == BOND_LINK_UP) {
 		/* order a rebalance ASAP */
-		bond_info->tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS;
+		atomic_set(&bond_info->tx_rebalance_counter,
+			   BOND_TLB_REBALANCE_TICKS);
 		if (bond->alb_info.rlb_enabled) {
 			bond->alb_info.rlb_rebalance = 1;
 			/* If the updelay module parameter is smaller than the
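The hunks above convert tx_rebalance_counter from a plain int to an atomic_t
because it is updated from contexts that are not all serialized by one lock.
A standalone sketch of the pattern, with illustrative names rather than the
driver's own:

  #include <linux/atomic.h>

  struct rebalance_state {
          atomic_t tx_rebalance_counter;  /* a plain int here would race */
  };

  static void monitor_tick(struct rebalance_state *s, int rebalance_ticks)
  {
          /* atomic_inc() keeps the read-modify-write indivisible even if
           * another context touches the counter concurrently.
           */
          atomic_inc(&s->tx_rebalance_counter);

          if (atomic_read(&s->tx_rebalance_counter) >= rebalance_ticks) {
                  /* ... rebalance work ... */
                  atomic_set(&s->tx_rebalance_counter, 0);
          }
  }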

@@ -248,6 +248,9 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
 #define KVASER_PCIEFD_SPACK_EWLR BIT(23)
 #define KVASER_PCIEFD_SPACK_EPLR BIT(24)
 
+/* Kvaser KCAN_EPACK second word */
+#define KVASER_PCIEFD_EPACK_DIR_TX BIT(0)
+
 struct kvaser_pciefd;
 
 struct kvaser_pciefd_can {

@@ -1285,7 +1288,10 @@ static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,
 
 	can->err_rep_cnt++;
 	can->can.can_stats.bus_error++;
-	stats->rx_errors++;
+	if (p->header[1] & KVASER_PCIEFD_EPACK_DIR_TX)
+		stats->tx_errors++;
+	else
+		stats->rx_errors++;
 
 	can->bec.txerr = bec.txerr;
 	can->bec.rxerr = bec.rxerr;

@@ -204,16 +204,16 @@ enum m_can_reg {
 
 /* Interrupts for version 3.0.x */
 #define IR_ERR_LEC_30X	(IR_STE | IR_FOE | IR_ACKE | IR_BE | IR_CRCE)
-#define IR_ERR_BUS_30X	(IR_ERR_LEC_30X | IR_WDI | IR_ELO | IR_BEU | \
-			 IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \
-			 IR_RF1L | IR_RF0L)
+#define IR_ERR_BUS_30X	(IR_ERR_LEC_30X | IR_WDI | IR_BEU | IR_BEC | \
+			 IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \
+			 IR_RF0L)
 #define IR_ERR_ALL_30X	(IR_ERR_STATE | IR_ERR_BUS_30X)
 
 /* Interrupts for version >= 3.1.x */
 #define IR_ERR_LEC_31X	(IR_PED | IR_PEA)
-#define IR_ERR_BUS_31X	(IR_ERR_LEC_31X | IR_WDI | IR_ELO | IR_BEU | \
-			 IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \
-			 IR_RF1L | IR_RF0L)
+#define IR_ERR_BUS_31X	(IR_ERR_LEC_31X | IR_WDI | IR_BEU | IR_BEC | \
+			 IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \
+			 IR_RF0L)
 #define IR_ERR_ALL_31X	(IR_ERR_STATE | IR_ERR_BUS_31X)
 
 /* Interrupt Line Select (ILS) */

@@ -517,7 +517,7 @@ static int m_can_read_fifo(struct net_device *dev, u32 rxfs)
 		err = m_can_fifo_read(cdev, fgi, M_CAN_FIFO_DATA,
 				      cf->data, DIV_ROUND_UP(cf->len, 4));
 		if (err)
-			goto out_fail;
+			goto out_free_skb;
 	}
 
 	/* acknowledge rx fifo 0 */

@@ -532,6 +532,8 @@ static int m_can_read_fifo(struct net_device *dev, u32 rxfs)
 
 	return 0;
 
+out_free_skb:
+	kfree_skb(skb);
 out_fail:
 	netdev_err(dev, "FIFO read returned %d\n", err);
 	return err;

@@ -810,8 +812,6 @@ static void m_can_handle_other_err(struct net_device *dev, u32 irqstatus)
 {
 	if (irqstatus & IR_WDI)
 		netdev_err(dev, "Message RAM Watchdog event due to missing READY\n");
-	if (irqstatus & IR_ELO)
-		netdev_err(dev, "Error Logging Overflow\n");
 	if (irqstatus & IR_BEU)
 		netdev_err(dev, "Bit Error Uncorrected\n");
 	if (irqstatus & IR_BEC)

@@ -1494,20 +1494,32 @@ static int m_can_dev_setup(struct m_can_classdev *cdev)
 	case 30:
 		/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.0.x */
 		can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
-		cdev->can.bittiming_const = &m_can_bittiming_const_30X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_30X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_30X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_30X;
 		break;
 	case 31:
 		/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.1.x */
 		can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
-		cdev->can.bittiming_const = &m_can_bittiming_const_31X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_31X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_31X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_31X;
 		break;
 	case 32:
 	case 33:
 		/* Support both MCAN version v3.2.x and v3.3.0 */
-		cdev->can.bittiming_const = &m_can_bittiming_const_31X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_31X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_31X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_31X;
 
 		cdev->can.ctrlmode_supported |=
 			(m_can_niso_supported(cdev) ?

@@ -85,6 +85,9 @@ struct m_can_classdev {
 	struct sk_buff *tx_skb;
 	struct phy *transceiver;
 
+	const struct can_bittiming_const *bit_timing;
+	const struct can_bittiming_const *data_timing;
+
 	struct m_can_ops *ops;
 	int version;
 

@@ -18,9 +18,14 @@
 
 #define M_CAN_PCI_MMIO_BAR		0
 
-#define M_CAN_CLOCK_FREQ_EHL		100000000
-
 #define CTL_CSR_INT_CTL_OFFSET		0x508
 
+struct m_can_pci_config {
+	const struct can_bittiming_const *bit_timing;
+	const struct can_bittiming_const *data_timing;
+	unsigned int clock_freq;
+};
+
 struct m_can_pci_priv {
 	struct m_can_classdev cdev;

@@ -42,8 +47,13 @@ static u32 iomap_read_reg(struct m_can_classdev *cdev, int reg)
 static int iomap_read_fifo(struct m_can_classdev *cdev, int offset, void *val, size_t val_count)
 {
 	struct m_can_pci_priv *priv = cdev_to_priv(cdev);
+	void __iomem *src = priv->base + offset;
 
-	ioread32_rep(priv->base + offset, val, val_count);
+	while (val_count--) {
+		*(unsigned int *)val = ioread32(src);
+		val += 4;
+		src += 4;
+	}
 
 	return 0;
 }

@@ -61,8 +71,13 @@ static int iomap_write_fifo(struct m_can_classdev *cdev, int offset,
 				const void *val, size_t val_count)
 {
 	struct m_can_pci_priv *priv = cdev_to_priv(cdev);
+	void __iomem *dst = priv->base + offset;
 
-	iowrite32_rep(priv->base + offset, val, val_count);
+	while (val_count--) {
+		iowrite32(*(unsigned int *)val, dst);
+		val += 4;
+		dst += 4;
+	}
 
 	return 0;
 }

@@ -74,9 +89,40 @@ static struct m_can_ops m_can_pci_ops = {
 	.read_fifo = iomap_read_fifo,
 };
 
+static const struct can_bittiming_const m_can_bittiming_const_ehl = {
+	.name = KBUILD_MODNAME,
+	.tseg1_min = 2,		/* Time segment 1 = prop_seg + phase_seg1 */
+	.tseg1_max = 64,
+	.tseg2_min = 1,		/* Time segment 2 = phase_seg2 */
+	.tseg2_max = 128,
+	.sjw_max = 128,
+	.brp_min = 1,
+	.brp_max = 512,
+	.brp_inc = 1,
+};
+
+static const struct can_bittiming_const m_can_data_bittiming_const_ehl = {
+	.name = KBUILD_MODNAME,
+	.tseg1_min = 2,		/* Time segment 1 = prop_seg + phase_seg1 */
+	.tseg1_max = 16,
+	.tseg2_min = 1,		/* Time segment 2 = phase_seg2 */
+	.tseg2_max = 8,
+	.sjw_max = 4,
+	.brp_min = 1,
+	.brp_max = 32,
+	.brp_inc = 1,
+};
+
+static const struct m_can_pci_config m_can_pci_ehl = {
+	.bit_timing = &m_can_bittiming_const_ehl,
+	.data_timing = &m_can_data_bittiming_const_ehl,
+	.clock_freq = 200000000,
+};
+
 static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 {
 	struct device *dev = &pci->dev;
+	const struct m_can_pci_config *cfg;
 	struct m_can_classdev *mcan_class;
 	struct m_can_pci_priv *priv;
 	void __iomem *base;

@@ -104,6 +150,8 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 
 	if (!mcan_class)
 		return -ENOMEM;
 
+	cfg = (const struct m_can_pci_config *)id->driver_data;
+
 	priv = cdev_to_priv(mcan_class);
 	priv->base = base;

@@ -115,7 +163,9 @@
 	mcan_class->dev = &pci->dev;
 	mcan_class->net->irq = pci_irq_vector(pci, 0);
 	mcan_class->pm_clock_support = 1;
-	mcan_class->can.clock.freq = id->driver_data;
+	mcan_class->bit_timing = cfg->bit_timing;
+	mcan_class->data_timing = cfg->data_timing;
+	mcan_class->can.clock.freq = cfg->clock_freq;
 	mcan_class->ops = &m_can_pci_ops;
 
 	pci_set_drvdata(pci, mcan_class);

@@ -168,8 +218,8 @@ static SIMPLE_DEV_PM_OPS(m_can_pci_pm_ops,
 			 m_can_pci_suspend, m_can_pci_resume);
 
 static const struct pci_device_id m_can_pci_id_table[] = {
-	{ PCI_VDEVICE(INTEL, 0x4bc1), M_CAN_CLOCK_FREQ_EHL, },
-	{ PCI_VDEVICE(INTEL, 0x4bc2), M_CAN_CLOCK_FREQ_EHL, },
+	{ PCI_VDEVICE(INTEL, 0x4bc1), (kernel_ulong_t)&m_can_pci_ehl, },
+	{ PCI_VDEVICE(INTEL, 0x4bc2), (kernel_ulong_t)&m_can_pci_ehl, },
 	{  }	/* Terminating Entry */
 };
 MODULE_DEVICE_TABLE(pci, m_can_pci_id_table);
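The id_table change above stops storing a raw clock frequency in driver_data
and instead stores a pointer to a per-device config struct. A minimal sketch
of this common PCI idiom, with hypothetical names (my_cfg, my_probe) standing
in for the driver's:

  #include <linux/pci.h>

  struct my_cfg {                         /* hypothetical per-device config */
          unsigned int clock_freq;
  };

  static const struct my_cfg cfg_ehl = { .clock_freq = 200000000 };

  static const struct pci_device_id my_id_table[] = {
          /* a pointer fits in driver_data via a kernel_ulong_t cast */
          { PCI_VDEVICE(INTEL, 0x4bc1), (kernel_ulong_t)&cfg_ehl, },
          { }
  };

  static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
  {
          const struct my_cfg *cfg = (const struct my_cfg *)id->driver_data;

          dev_info(&pdev->dev, "clock %u Hz\n", cfg->clock_freq);
          return 0;
  }

The pointer form scales to any number of per-device parameters (here, bit
timings plus the clock) without overloading a single scalar.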

@@ -692,11 +692,11 @@ static int pch_can_rx_normal(struct net_device *ndev, u32 obj_num, int quota)
 				cf->data[i + 1] = data_reg >> 8;
 			}
 
-			netif_receive_skb(skb);
 			rcv_pkts++;
 			stats->rx_packets++;
 			quota--;
 			stats->rx_bytes += cf->len;
+			netif_receive_skb(skb);
 
 			pch_fifo_thresh(priv, obj_num);
 			obj_num++;

@@ -234,7 +234,12 @@ static int ems_pcmcia_add_card(struct pcmcia_device *pdev, unsigned long base)
 			free_sja1000dev(dev);
 	}
 
-	err = request_irq(dev->irq, &ems_pcmcia_interrupt, IRQF_SHARED,
+	if (!card->channels) {
+		err = -ENODEV;
+		goto failure_cleanup;
+	}
+
+	err = request_irq(pdev->irq, &ems_pcmcia_interrupt, IRQF_SHARED,
 			  DRV_NAME, card);
 	if (!err)
 		return 0;

@@ -28,10 +28,6 @@
 
 #include "kvaser_usb.h"
 
-/* Forward declaration */
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg;
-
-#define CAN_USB_CLOCK			8000000
 #define MAX_USBCAN_NET_DEVICES		2
 
 /* Command header size */

@@ -80,6 +76,12 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg;
 
 #define CMD_LEAF_LOG_MESSAGE		106
 
+/* Leaf frequency options */
+#define KVASER_USB_LEAF_SWOPTION_FREQ_MASK	0x60
+#define KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK 0
+#define KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK BIT(5)
+#define KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK BIT(6)
+
 /* error factors */
 #define M16C_EF_ACKE		BIT(0)
 #define M16C_EF_CRCE		BIT(1)

@@ -340,6 +342,50 @@ struct kvaser_usb_err_summary {
 	};
 };
 
+static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
+	.name = "kvaser_usb",
+	.tseg1_min = KVASER_USB_TSEG1_MIN,
+	.tseg1_max = KVASER_USB_TSEG1_MAX,
+	.tseg2_min = KVASER_USB_TSEG2_MIN,
+	.tseg2_max = KVASER_USB_TSEG2_MAX,
+	.sjw_max = KVASER_USB_SJW_MAX,
+	.brp_min = KVASER_USB_BRP_MIN,
+	.brp_max = KVASER_USB_BRP_MAX,
+	.brp_inc = KVASER_USB_BRP_INC,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = {
+	.clock = {
+		.freq = 8000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = {
+	.clock = {
+		.freq = 16000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = {
+	.clock = {
+		.freq = 24000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = {
+	.clock = {
+		.freq = 32000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
 static void *
 kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
 			     const struct sk_buff *skb, int *frame_len,

@@ -471,6 +517,27 @@ static int kvaser_usb_leaf_send_simple_cmd(const struct kvaser_usb *dev,
 	return rc;
 }
 
+static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
+						   const struct leaf_cmd_softinfo *softinfo)
+{
+	u32 sw_options = le32_to_cpu(softinfo->sw_options);
+
+	dev->fw_version = le32_to_cpu(softinfo->fw_version);
+	dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
+
+	switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) {
+	case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz;
+		break;
+	case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz;
+		break;
+	case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz;
+		break;
+	}
+}
+
 static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
 {
 	struct kvaser_cmd cmd;

@@ -486,14 +553,13 @@
 
 	switch (dev->card_data.leaf.family) {
 	case KVASER_LEAF:
-		dev->fw_version = le32_to_cpu(cmd.u.leaf.softinfo.fw_version);
-		dev->max_tx_urbs =
-			le16_to_cpu(cmd.u.leaf.softinfo.max_outstanding_tx);
+		kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo);
 		break;
 	case KVASER_USBCAN:
 		dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version);
 		dev->max_tx_urbs =
			le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx);
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz;
 		break;
 	}
 

@@ -1225,24 +1291,11 @@ static int kvaser_usb_leaf_init_card(struct kvaser_usb *dev)
 {
 	struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
 
-	dev->cfg = &kvaser_usb_leaf_dev_cfg;
 	card_data->ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;
 
 	return 0;
 }
 
-static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
-	.name = "kvaser_usb",
-	.tseg1_min = KVASER_USB_TSEG1_MIN,
-	.tseg1_max = KVASER_USB_TSEG1_MAX,
-	.tseg2_min = KVASER_USB_TSEG2_MIN,
-	.tseg2_max = KVASER_USB_TSEG2_MAX,
-	.sjw_max = KVASER_USB_SJW_MAX,
-	.brp_min = KVASER_USB_BRP_MIN,
-	.brp_max = KVASER_USB_BRP_MAX,
-	.brp_inc = KVASER_USB_BRP_INC,
-};
-
 static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
 {
 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);

@@ -1348,11 +1401,3 @@ const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops = {
 	.dev_read_bulk_callback = kvaser_usb_leaf_read_bulk_callback,
 	.dev_frame_to_cmd = kvaser_usb_leaf_frame_to_cmd,
 };
-
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg = {
-	.clock = {
-		.freq = CAN_USB_CLOCK,
-	},
-	.timestamp_freq = 1,
-	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
-};

@@ -471,6 +471,12 @@ static int mv88e6xxx_port_ppu_updates(struct mv88e6xxx_chip *chip, int port)
 	u16 reg;
 	int err;
 
+	/* The 88e6250 family does not have the PHY detect bit. Instead,
+	 * report whether the port is internal.
+	 */
+	if (chip->info->family == MV88E6XXX_FAMILY_6250)
+		return port < chip->info->num_internal_phys;
+
 	err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg);
 	if (err) {
 		dev_err(chip->dev,

@@ -692,44 +698,48 @@ static void mv88e6xxx_mac_config(struct dsa_switch *ds, int port,
 {
 	struct mv88e6xxx_chip *chip = ds->priv;
 	struct mv88e6xxx_port *p;
-	int err;
+	int err = 0;
 
 	p = &chip->ports[port];
 
-	/* FIXME: is this the correct test? If we're in fixed mode on an
-	 * internal port, why should we process this any different from
-	 * PHY mode? On the other hand, the port may be automedia between
-	 * an internal PHY and the serdes...
-	 */
-	if ((mode == MLO_AN_PHY) && mv88e6xxx_phy_is_internal(ds, port))
-		return;
-
 	mv88e6xxx_reg_lock(chip);
-	/* In inband mode, the link may come up at any time while the link
-	 * is not forced down. Force the link down while we reconfigure the
-	 * interface mode.
-	 */
-	if (mode == MLO_AN_INBAND && p->interface != state->interface &&
-	    chip->info->ops->port_set_link)
-		chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN);
 
-	err = mv88e6xxx_port_config_interface(chip, port, state->interface);
-	if (err && err != -EOPNOTSUPP)
-		goto err_unlock;
+	if (mode != MLO_AN_PHY || !mv88e6xxx_phy_is_internal(ds, port)) {
+		/* In inband mode, the link may come up at any time while the
+		 * link is not forced down. Force the link down while we
+		 * reconfigure the interface mode.
+		 */
+		if (mode == MLO_AN_INBAND &&
+		    p->interface != state->interface &&
+		    chip->info->ops->port_set_link)
+			chip->info->ops->port_set_link(chip, port,
+						       LINK_FORCED_DOWN);
 
-	err = mv88e6xxx_serdes_pcs_config(chip, port, mode, state->interface,
-					  state->advertising);
-	/* FIXME: we should restart negotiation if something changed - which
-	 * is something we get if we convert to using phylinks PCS operations.
-	 */
-	if (err > 0)
-		err = 0;
+		err = mv88e6xxx_port_config_interface(chip, port,
						      state->interface);
+		if (err && err != -EOPNOTSUPP)
+			goto err_unlock;
+
+		err = mv88e6xxx_serdes_pcs_config(chip, port, mode,
+						  state->interface,
+						  state->advertising);
+		/* FIXME: we should restart negotiation if something changed -
+		 * which is something we get if we convert to using phylinks
+		 * PCS operations.
+		 */
+		if (err > 0)
+			err = 0;
+	}
 
 	/* Undo the forced down state above after completing configuration
-	 * irrespective of its state on entry, which allows the link to come up.
+	 * irrespective of its state on entry, which allows the link to come
+	 * up in the in-band case where there is no separate SERDES. Also
+	 * ensure that the link can come up if the PPU is in use and we are
+	 * in PHY mode (we treat the PPU as an effective in-band mechanism.)
 	 */
-	if (mode == MLO_AN_INBAND && p->interface != state->interface &&
-	    chip->info->ops->port_set_link)
+	if (chip->info->ops->port_set_link &&
+	    ((mode == MLO_AN_INBAND && p->interface != state->interface) ||
+	     (mode == MLO_AN_PHY && mv88e6xxx_port_ppu_updates(chip, port))))
 		chip->info->ops->port_set_link(chip, port, LINK_UNFORCED);
 
 	p->interface = state->interface;

@@ -752,11 +762,10 @@ static void mv88e6xxx_mac_link_down(struct dsa_switch *ds, int port,
 	ops = chip->info->ops;
 
 	mv88e6xxx_reg_lock(chip);
-	/* Internal PHYs propagate their configuration directly to the MAC.
-	 * External PHYs depend on whether the PPU is enabled for this port.
+	/* Force the link down if we know the port may not be automatically
+	 * updated by the switch or if we are using fixed-link mode.
 	 */
-	if (((!mv88e6xxx_phy_is_internal(ds, port) &&
-	      !mv88e6xxx_port_ppu_updates(chip, port)) ||
+	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
 	     mode == MLO_AN_FIXED) && ops->port_sync_link)
 		err = ops->port_sync_link(chip, port, mode, false);
 	mv88e6xxx_reg_unlock(chip);

@@ -779,11 +788,11 @@ static void mv88e6xxx_mac_link_up(struct dsa_switch *ds, int port,
 	ops = chip->info->ops;
 
 	mv88e6xxx_reg_lock(chip);
-	/* Internal PHYs propagate their configuration directly to the MAC.
-	 * External PHYs depend on whether the PPU is enabled for this port.
+	/* Configure and force the link up if we know that the port may not
+	 * automatically updated by the switch or if we are using fixed-link
+	 * mode.
 	 */
-	if ((!mv88e6xxx_phy_is_internal(ds, port) &&
-	     !mv88e6xxx_port_ppu_updates(chip, port)) ||
+	if (!mv88e6xxx_port_ppu_updates(chip, port) ||
 	    mode == MLO_AN_FIXED) {
 		/* FIXME: for an automedia port, should we force the link
 		 * down here - what if the link comes up due to "other" media

@@ -830,7 +830,7 @@ int mv88e6390_serdes_power(struct mv88e6xxx_chip *chip, int port, int lane,
 			   bool up)
 {
 	u8 cmode = chip->ports[port].cmode;
-	int err = 0;
+	int err;
 
 	switch (cmode) {
 	case MV88E6XXX_PORT_STS_CMODE_SGMII:

@@ -842,6 +842,9 @@ int mv88e6390_serdes_power(struct mv88e6xxx_chip *chip, int port, int lane,
 	case MV88E6XXX_PORT_STS_CMODE_RXAUI:
 		err = mv88e6390_serdes_power_10g(chip, lane, up);
 		break;
+	default:
+		err = -EINVAL;
+		break;
 	}
 
 	if (!err && up)

@@ -1541,6 +1544,9 @@ int mv88e6393x_serdes_power(struct mv88e6xxx_chip *chip, int port, int lane,
 	case MV88E6393X_PORT_STS_CMODE_10GBASER:
 		err = mv88e6390_serdes_power_10g(chip, lane, on);
 		break;
+	default:
+		err = -EINVAL;
+		break;
 	}
 
 	if (err)

@@ -290,8 +290,11 @@ static int felix_setup_mmio_filtering(struct felix *felix)
 		}
 	}
 
-	if (cpu < 0)
+	if (cpu < 0) {
+		kfree(tagging_rule);
+		kfree(redirect_rule);
 		return -EINVAL;
+	}
 
 	tagging_rule->key_type = OCELOT_VCAP_KEY_ETYPE;
 	*(__be16 *)tagging_rule->key.etype.etype.value = htons(ETH_P_1588);

@@ -1430,16 +1430,19 @@ static int altera_tse_probe(struct platform_device *pdev)
 		priv->rxdescmem_busaddr = dma_res->start;
 
 	} else {
+		ret = -ENODEV;
 		goto err_free_netdev;
 	}
 
-	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask)))
+	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask))) {
 		dma_set_coherent_mask(priv->device,
 				      DMA_BIT_MASK(priv->dmaops->dmamask));
-	else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32)))
+	} else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32))) {
 		dma_set_coherent_mask(priv->device, DMA_BIT_MASK(32));
-	else
+	} else {
+		ret = -EIO;
 		goto err_free_netdev;
+	}
 
 	/* MAC address space */
 	ret = request_and_map(pdev, "control_port", &control_port,

@@ -708,7 +708,9 @@ static int bcm4908_enet_probe(struct platform_device *pdev)
 
 	enet->irq_tx = platform_get_irq_byname(pdev, "tx");
 
-	dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+	err = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+	if (err)
+		return err;
 
 	err = bcm4908_enet_dma_alloc(enet);
 	if (err)

@@ -377,6 +377,9 @@ struct bufdesc_ex {
 #define FEC_ENET_WAKEUP	((uint)0x00020000)	/* Wakeup request */
 #define FEC_ENET_TXF	(FEC_ENET_TXF_0 | FEC_ENET_TXF_1 | FEC_ENET_TXF_2)
 #define FEC_ENET_RXF	(FEC_ENET_RXF_0 | FEC_ENET_RXF_1 | FEC_ENET_RXF_2)
+#define FEC_ENET_RXF_GET(X)	(((X) == 0) ? FEC_ENET_RXF_0 :	\
+				 (((X) == 1) ? FEC_ENET_RXF_1 :	\
+				  FEC_ENET_RXF_2))
 #define FEC_ENET_TS_AVAIL	((uint)0x00010000)
 #define FEC_ENET_TS_TIMER	((uint)0x00008000)
 

@@ -1480,7 +1480,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
 			break;
 		pkt_received++;
 
-		writel(FEC_ENET_RXF, fep->hwp + FEC_IEVENT);
+		writel(FEC_ENET_RXF_GET(queue_id), fep->hwp + FEC_IEVENT);
 
 		/* Check for errors. */
 		status ^= BD_ENET_RX_LAST;

@@ -68,6 +68,9 @@ struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi,
 		set_protocol = ctx->curr_frag_cnt == ctx->expected_frag_cnt - 1;
 	} else {
 		skb = napi_alloc_skb(napi, len);
+		if (unlikely(!skb))
+			return NULL;
+
 		set_protocol = true;
 	}
 	__skb_put(skb, len);

@@ -8,6 +8,7 @@
 #include <linux/interrupt.h>
 #include <linux/etherdevice.h>
 #include <linux/netdevice.h>
+#include <linux/module.h>
 
 #include "hinic_hw_dev.h"
 #include "hinic_dev.h"

@@ -553,6 +553,14 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
 		dev_info(&pf->pdev->dev, "vsi %d not found\n", vsi_seid);
 		return;
 	}
+	if (vsi->type != I40E_VSI_MAIN &&
+	    vsi->type != I40E_VSI_FDIR &&
+	    vsi->type != I40E_VSI_VMDQ2) {
+		dev_info(&pf->pdev->dev,
+			 "vsi %d type %d descriptor rings not available\n",
+			 vsi_seid, vsi->type);
+		return;
+	}
 	if (type == RING_TYPE_XDP && !i40e_enabled_xdp_vsi(vsi)) {
 		dev_info(&pf->pdev->dev, "XDP not enabled on VSI %d\n", vsi_seid);
 		return;

@@ -1948,6 +1948,32 @@ static int i40e_vc_send_resp_to_vf(struct i40e_vf *vf,
 	return i40e_vc_send_msg_to_vf(vf, opcode, retval, NULL, 0);
 }
 
+/**
+ * i40e_sync_vf_state
+ * @vf: pointer to the VF info
+ * @state: VF state
+ *
+ * Called from a VF message to synchronize the service with a potential
+ * VF reset state
+ **/
+static bool i40e_sync_vf_state(struct i40e_vf *vf, enum i40e_vf_states state)
+{
+	int i;
+
+	/* When handling some messages, it needs VF state to be set.
+	 * It is possible that this flag is cleared during VF reset,
+	 * so there is a need to wait until the end of the reset to
+	 * handle the request message correctly.
+	 */
+	for (i = 0; i < I40E_VF_STATE_WAIT_COUNT; i++) {
+		if (test_bit(state, &vf->vf_states))
+			return true;
+		usleep_range(10000, 20000);
+	}
+
+	return test_bit(state, &vf->vf_states);
+}
+
 /**
  * i40e_vc_get_version_msg
  * @vf: pointer to the VF info

@@ -2008,7 +2034,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
 	size_t len = 0;
 	int ret;
 
-	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -2131,7 +2157,7 @@ static int i40e_vc_config_promiscuous_mode_msg(struct i40e_vf *vf, u8 *msg)
 	bool allmulti = false;
 	bool alluni = false;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err_out;
 	}

@@ -2219,7 +2245,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
 	struct i40e_vsi *vsi;
 	u16 num_qps_all = 0;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}

@@ -2368,7 +2394,7 @@ static int i40e_vc_config_irq_map_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int i;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}

@@ -2540,7 +2566,7 @@ static int i40e_vc_disable_queues_msg(struct i40e_vf *vf, u8 *msg)
 	struct i40e_pf *pf = vf->pf;
 	i40e_status aq_ret = 0;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}

@@ -2590,7 +2616,7 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
 	u8 cur_pairs = vf->num_queue_pairs;
 	struct i40e_pf *pf = vf->pf;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states))
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE))
 		return -EINVAL;
 
 	if (req_pairs > I40E_MAX_VF_QUEUES) {

@@ -2635,7 +2661,7 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg)
 
 	memset(&stats, 0, sizeof(struct i40e_eth_stats));
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}

@@ -2752,7 +2778,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status ret = 0;
 	int i;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
 		ret = I40E_ERR_PARAM;
 		goto error_param;

@@ -2824,7 +2850,7 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status ret = 0;
 	int i;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
 		ret = I40E_ERR_PARAM;
 		goto error_param;

@@ -2968,7 +2994,7 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int i;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vfl->vsi_id)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;

@@ -3088,9 +3114,9 @@ static int i40e_vc_config_rss_key(struct i40e_vf *vf, u8 *msg)
 	struct i40e_vsi *vsi = NULL;
 	i40e_status aq_ret = 0;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) ||
-	    (vrk->key_len != I40E_HKEY_ARRAY_SIZE)) {
+	    vrk->key_len != I40E_HKEY_ARRAY_SIZE) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -3119,9 +3145,9 @@ static int i40e_vc_config_rss_lut(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	u16 i;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vrl->vsi_id) ||
-	    (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) {
+	    vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -3154,7 +3180,7 @@ static int i40e_vc_get_rss_hena(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int len = 0;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -3190,7 +3216,7 @@ static int i40e_vc_set_rss_hena(struct i40e_vf *vf, u8 *msg)
 	struct i40e_hw *hw = &pf->hw;
 	i40e_status aq_ret = 0;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -3215,7 +3241,7 @@ static int i40e_vc_enable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	struct i40e_vsi *vsi;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -3241,7 +3267,7 @@ static int i40e_vc_disable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	struct i40e_vsi *vsi;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -3468,7 +3494,7 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int i, ret;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -3599,7 +3625,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int i, ret;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err_out;
 	}

@@ -3708,7 +3734,7 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	u64 speed = 0;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -3797,11 +3823,6 @@
 	/* set this flag only after making sure all inputs are sane */
 	vf->adq_enabled = true;
 
-	/* num_req_queues is set when user changes number of queues via ethtool
-	 * and this causes issue for default VSI(which depends on this variable)
-	 * when ADq is enabled, hence reset it.
-	 */
-	vf->num_req_queues = 0;
-
 	/* reset the VF in order to allocate resources */
 	i40e_vc_reset_vf(vf, true);

@@ -3824,7 +3845,7 @@
 	struct i40e_pf *pf = vf->pf;
 	i40e_status aq_ret = 0;
 
-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
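The new i40e_sync_vf_state() above is a bounded poll: instead of failing a VF
message the instant a state bit is clear (which can happen transiently during
a VF reset), it retries the test a fixed number of times with a sleep between
attempts. The shape of the pattern, reduced to its essentials with
illustrative names (not the driver's code):

  #include <linux/bitops.h>
  #include <linux/delay.h>

  #define WAIT_COUNT 20                   /* bounded: never waits forever */

  static bool wait_for_state(unsigned long *flags, int state_bit)
  {
          int i;

          for (i = 0; i < WAIT_COUNT; i++) {
                  if (test_bit(state_bit, flags))
                          return true;
                  usleep_range(10000, 20000);     /* 10-20 ms per retry */
          }

          return test_bit(state_bit, flags);      /* final check */
  }

Because usleep_range() sleeps, this only works from process context, which is
where VF mailbox messages are handled.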

@@ -18,6 +18,8 @@
 
 #define I40E_MAX_VF_PROMISC_FLAGS	3
 
+#define I40E_VF_STATE_WAIT_COUNT	20
+
 /* Various queue ctrls */
 enum i40e_queue_ctrl {
 	I40E_QUEUE_CTRL_UNKNOWN = 0,

@@ -615,23 +615,44 @@ static int iavf_set_ringparam(struct net_device *netdev,
 	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
 		return -EINVAL;
 
-	new_tx_count = clamp_t(u32, ring->tx_pending,
-			       IAVF_MIN_TXD,
-			       IAVF_MAX_TXD);
-	new_tx_count = ALIGN(new_tx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (ring->tx_pending > IAVF_MAX_TXD ||
+	    ring->tx_pending < IAVF_MIN_TXD ||
+	    ring->rx_pending > IAVF_MAX_RXD ||
+	    ring->rx_pending < IAVF_MIN_RXD) {
+		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
+			   ring->tx_pending, ring->rx_pending, IAVF_MIN_TXD,
+			   IAVF_MAX_RXD, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+		return -EINVAL;
+	}
 
-	new_rx_count = clamp_t(u32, ring->rx_pending,
-			       IAVF_MIN_RXD,
-			       IAVF_MAX_RXD);
-	new_rx_count = ALIGN(new_rx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	new_tx_count = ALIGN(ring->tx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (new_tx_count != ring->tx_pending)
+		netdev_info(netdev, "Requested Tx descriptor count rounded up to %d\n",
+			    new_tx_count);
+
+	new_rx_count = ALIGN(ring->rx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (new_rx_count != ring->rx_pending)
+		netdev_info(netdev, "Requested Rx descriptor count rounded up to %d\n",
+			    new_rx_count);
 
 	/* if nothing to do return success */
 	if ((new_tx_count == adapter->tx_desc_count) &&
-	    (new_rx_count == adapter->rx_desc_count))
+	    (new_rx_count == adapter->rx_desc_count)) {
+		netdev_dbg(netdev, "Nothing to change, descriptor count is same as requested\n");
 		return 0;
+	}
 
-	adapter->tx_desc_count = new_tx_count;
-	adapter->rx_desc_count = new_rx_count;
+	if (new_tx_count != adapter->tx_desc_count) {
+		netdev_dbg(netdev, "Changing Tx descriptor count from %d to %d\n",
+			   adapter->tx_desc_count, new_tx_count);
+		adapter->tx_desc_count = new_tx_count;
+	}
+
+	if (new_rx_count != adapter->rx_desc_count) {
+		netdev_dbg(netdev, "Changing Rx descriptor count from %d to %d\n",
+			   adapter->rx_desc_count, new_rx_count);
+		adapter->rx_desc_count = new_rx_count;
+	}
 
 	if (netif_running(netdev)) {
 		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
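The rewritten iavf_set_ringparam() above rejects out-of-range requests and
rounds the rest up with ALIGN(), which for a power-of-two multiple is a
single add-and-mask. A self-contained illustration of that arithmetic
(user-space C mirroring the kernel macro; 32 is only an example multiple, not
necessarily the driver's IAVF_REQ_DESCRIPTOR_MULTIPLE value):

  #include <stdio.h>

  /* Round x up to the next multiple of a; a must be a power of two. */
  #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

  int main(void)
  {
          printf("%d\n", ALIGN(500, 32));     /* 512: rounded up */
          printf("%d\n", ALIGN(512, 32));     /* 512: already aligned */
          return 0;
  }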

@@ -2248,6 +2248,7 @@ static void iavf_reset_task(struct work_struct *work)
 	}
 
 	pci_set_master(adapter->pdev);
+	pci_restore_msi_state(adapter->pdev);
 
 	if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) {
 		dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",

@@ -97,6 +97,9 @@ static int ice_dcbnl_setets(struct net_device *netdev, struct ieee_ets *ets)
 
 	new_cfg->etscfg.maxtcs = pf->hw.func_caps.common_cap.maxtc;
 
+	if (!bwcfg)
+		new_cfg->etscfg.tcbwtable[0] = 100;
+
 	if (!bwrec)
 		new_cfg->etsrec.tcbwtable[0] = 100;
 

@@ -167,15 +170,18 @@ static u8 ice_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
 	if (mode == pf->dcbx_cap)
 		return ICE_DCB_NO_HW_CHG;
 
-	pf->dcbx_cap = mode;
 	qos_cfg = &pf->hw.port_info->qos_cfg;
-	if (mode & DCB_CAP_DCBX_VER_CEE) {
-		if (qos_cfg->local_dcbx_cfg.pfc_mode == ICE_QOS_MODE_DSCP)
-			return ICE_DCB_NO_HW_CHG;
+
+	/* DSCP configuration is not DCBx negotiated */
+	if (qos_cfg->local_dcbx_cfg.pfc_mode == ICE_QOS_MODE_DSCP)
+		return ICE_DCB_NO_HW_CHG;
+
+	pf->dcbx_cap = mode;
+
+	if (mode & DCB_CAP_DCBX_VER_CEE)
 		qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_CEE;
-	} else {
+	else
 		qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_IEEE;
-	}
 
 	dev_info(ice_pf_to_dev(pf), "DCBx mode = 0x%x\n", mode);
 	return ICE_DCB_HW_CHG_RST;

@@ -1268,7 +1268,7 @@ ice_fdir_write_all_fltr(struct ice_pf *pf, struct ice_fdir_fltr *input,
 		bool is_tun = tun == ICE_FD_HW_SEG_TUN;
 		int err;
 
-		if (is_tun && !ice_get_open_tunnel_port(&pf->hw, &port_num))
+		if (is_tun && !ice_get_open_tunnel_port(&pf->hw, &port_num, TNL_ALL))
 			continue;
 		err = ice_fdir_write_fltr(pf, input, add, is_tun);
 		if (err)

@@ -1652,7 +1652,7 @@ int ice_add_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 	}
 
 	/* return error if not an update and no available filters */
-	fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port) ? 2 : 1;
+	fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port, TNL_ALL) ? 2 : 1;
 	if (!ice_fdir_find_fltr_by_idx(hw, fsp->location) &&
 	    ice_fdir_num_avail_fltr(hw, pf->vsi[vsi->idx]) < fltrs_needed) {
 		dev_err(dev, "Failed to add filter. The maximum number of flow director filters has been reached.\n");

@@ -924,7 +924,7 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
 		memcpy(pkt, ice_fdir_pkt[idx].pkt, ice_fdir_pkt[idx].pkt_len);
 		loc = pkt;
 	} else {
-		if (!ice_get_open_tunnel_port(hw, &tnl_port))
+		if (!ice_get_open_tunnel_port(hw, &tnl_port, TNL_ALL))
 			return ICE_ERR_DOES_NOT_EXIST;
 		if (!ice_fdir_pkt[idx].tun_pkt)
 			return ICE_ERR_PARAM;

@@ -1899,9 +1899,11 @@ static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
  * ice_get_open_tunnel_port - retrieve an open tunnel port
  * @hw: pointer to the HW structure
  * @port: returns open port
+ * @type: type of tunnel, can be TNL_LAST if it doesn't matter
  */
 bool
-ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port)
+ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port,
+			 enum ice_tunnel_type type)
 {
 	bool res = false;
 	u16 i;

@@ -1909,7 +1911,8 @@
 	mutex_lock(&hw->tnl_lock);
 
 	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
-		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].port) {
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].port &&
+		    (type == TNL_LAST || type == hw->tnl.tbl[i].type)) {
 			*port = hw->tnl.tbl[i].port;
 			res = true;
 			break;

@@ -33,7 +33,8 @@ enum ice_status
 ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
 		   unsigned long *bm, struct list_head *fv_list);
 bool
-ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port);
+ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port,
+			 enum ice_tunnel_type type);
 int ice_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
 			    unsigned int idx, struct udp_tunnel_info *ti);
 int ice_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table,

@@ -5881,6 +5881,9 @@ static int ice_up_complete(struct ice_vsi *vsi)
 			netif_carrier_on(vsi->netdev);
 	}

+	/* clear this now, and the first stats read will be used as baseline */
+	vsi->stat_offsets_loaded = false;
+
 	ice_service_task_schedule(pf);

 	return 0;
@@ -5927,14 +5930,15 @@ ice_fetch_u64_stats_per_ring(struct u64_stats_sync *syncp, struct ice_q_stats st
 /**
  * ice_update_vsi_tx_ring_stats - Update VSI Tx ring stats counters
  * @vsi: the VSI to be updated
+ * @vsi_stats: the stats struct to be updated
  * @rings: rings to work on
  * @count: number of rings
  */
 static void
-ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, struct ice_tx_ring **rings,
-			     u16 count)
+ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi,
+			     struct rtnl_link_stats64 *vsi_stats,
+			     struct ice_tx_ring **rings, u16 count)
 {
-	struct rtnl_link_stats64 *vsi_stats = &vsi->net_stats;
 	u16 i;

 	for (i = 0; i < count; i++) {
@@ -5958,15 +5962,13 @@ ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, struct ice_tx_ring **rings,
  */
 static void ice_update_vsi_ring_stats(struct ice_vsi *vsi)
 {
-	struct rtnl_link_stats64 *vsi_stats = &vsi->net_stats;
+	struct rtnl_link_stats64 *vsi_stats;
 	u64 pkts, bytes;
 	int i;

-	/* reset netdev stats */
-	vsi_stats->tx_packets = 0;
-	vsi_stats->tx_bytes = 0;
-	vsi_stats->rx_packets = 0;
-	vsi_stats->rx_bytes = 0;
+	vsi_stats = kzalloc(sizeof(*vsi_stats), GFP_ATOMIC);
+	if (!vsi_stats)
+		return;

 	/* reset non-netdev (extended) stats */
 	vsi->tx_restart = 0;
@@ -5978,7 +5980,8 @@ static void ice_update_vsi_ring_stats(struct ice_vsi *vsi)
 	rcu_read_lock();

 	/* update Tx rings counters */
-	ice_update_vsi_tx_ring_stats(vsi, vsi->tx_rings, vsi->num_txq);
+	ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->tx_rings,
+				     vsi->num_txq);

 	/* update Rx rings counters */
 	ice_for_each_rxq(vsi, i) {
@@ -5993,10 +5996,17 @@ static void ice_update_vsi_ring_stats(struct ice_vsi *vsi)
 	/* update XDP Tx rings counters */
 	if (ice_is_xdp_ena_vsi(vsi))
-		ice_update_vsi_tx_ring_stats(vsi, vsi->xdp_rings,
+		ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->xdp_rings,
 					     vsi->num_xdp_txq);

 	rcu_read_unlock();
+
+	vsi->net_stats.tx_packets = vsi_stats->tx_packets;
+	vsi->net_stats.tx_bytes = vsi_stats->tx_bytes;
+	vsi->net_stats.rx_packets = vsi_stats->rx_packets;
+	vsi->net_stats.rx_bytes = vsi_stats->rx_bytes;
+
+	kfree(vsi_stats);
 }

 /**

@@ -3796,10 +3796,13 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
  * ice_find_recp - find a recipe
  * @hw: pointer to the hardware structure
  * @lkup_exts: extension sequence to match
+ * @tun_type: type of recipe tunnel
  *
  * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found.
  */
-static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
+static u16
+ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
+	      enum ice_sw_tunnel_type tun_type)
 {
 	bool refresh_required = true;
 	struct ice_sw_recipe *recp;
@@ -3860,8 +3863,9 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
 		}
 		/* If for "i"th recipe the found was never set to false
 		 * then it means we found our match
+		 * Also tun type of recipe needs to be checked
 		 */
-		if (found)
+		if (found && recp[i].tun_type == tun_type)
 			return i; /* Return the recipe ID */
 	}
 }
@@ -4651,11 +4655,12 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	}

 	/* Look for a recipe which matches our requested fv / mask list */
-	*rid = ice_find_recp(hw, lkup_exts);
+	*rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type);
 	if (*rid < ICE_MAX_NUM_RECIPES)
 		/* Success if found a recipe that match the existing criteria */
 		goto err_unroll;

+	rm->tun_type = rinfo->tun_type;
 	/* Recipe we need does not exist, add a recipe */
 	status = ice_add_sw_recipe(hw, rm, profiles);
 	if (status)
@@ -4958,11 +4963,13 @@ ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type,
 	switch (tun_type) {
 	case ICE_SW_TUN_VXLAN:
-	case ICE_SW_TUN_GENEVE:
-		if (!ice_get_open_tunnel_port(hw, &open_port))
+		if (!ice_get_open_tunnel_port(hw, &open_port, TNL_VXLAN))
+			return ICE_ERR_CFG;
+		break;
+	case ICE_SW_TUN_GENEVE:
+		if (!ice_get_open_tunnel_port(hw, &open_port, TNL_GENEVE))
 			return ICE_ERR_CFG;
 		break;
 	default:
 		/* Nothing needs to be done for this tunnel type */
 		return 0;
@@ -5555,7 +5562,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	if (status)
 		return status;

-	rid = ice_find_recp(hw, &lkup_exts);
+	rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type);
 	/* If did not find a recipe that match the existing criteria */
 	if (rid == ICE_MAX_NUM_RECIPES)
 		return ICE_ERR_PARAM;

@@ -74,21 +74,13 @@ static enum ice_protocol_type ice_proto_type_from_ipv6(bool inner)
 	return inner ? ICE_IPV6_IL : ICE_IPV6_OFOS;
 }

-static enum ice_protocol_type
-ice_proto_type_from_l4_port(bool inner, u16 ip_proto)
+static enum ice_protocol_type ice_proto_type_from_l4_port(u16 ip_proto)
 {
-	if (inner) {
-		switch (ip_proto) {
-		case IPPROTO_UDP:
-			return ICE_UDP_ILOS;
-		}
-	} else {
-		switch (ip_proto) {
-		case IPPROTO_TCP:
-			return ICE_TCP_IL;
-		case IPPROTO_UDP:
-			return ICE_UDP_OF;
-		}
+	switch (ip_proto) {
+	case IPPROTO_TCP:
+		return ICE_TCP_IL;
+	case IPPROTO_UDP:
+		return ICE_UDP_ILOS;
 	}

 	return 0;
@@ -191,8 +183,9 @@ ice_tc_fill_tunnel_outer(u32 flags, struct ice_tc_flower_fltr *fltr,
 		i++;
 	}

-	if (flags & ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT) {
-		list[i].type = ice_proto_type_from_l4_port(false, hdr->l3_key.ip_proto);
+	if ((flags & ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT) &&
+	    hdr->l3_key.ip_proto == IPPROTO_UDP) {
+		list[i].type = ICE_UDP_OF;
 		list[i].h_u.l4_hdr.dst_port = hdr->l4_key.dst_port;
 		list[i].m_u.l4_hdr.dst_port = hdr->l4_mask.dst_port;
 		i++;
@@ -317,7 +310,7 @@ ice_tc_fill_rules(struct ice_hw *hw, u32 flags,
 			ICE_TC_FLWR_FIELD_SRC_L4_PORT)) {
 		struct ice_tc_l4_hdr *l4_key, *l4_mask;

-		list[i].type = ice_proto_type_from_l4_port(inner, headers->l3_key.ip_proto);
+		list[i].type = ice_proto_type_from_l4_port(headers->l3_key.ip_proto);
 		l4_key = &headers->l4_key;
 		l4_mask = &headers->l4_mask;
@@ -802,7 +795,8 @@ ice_parse_tunnel_attr(struct net_device *dev, struct flow_rule *rule,
 		headers->l3_mask.ttl = match.mask->ttl;
 	}

-	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS) &&
+	    fltr->tunnel_type != TNL_VXLAN && fltr->tunnel_type != TNL_GENEVE) {
 		struct flow_match_ports match;

 		flow_rule_match_enc_ports(rule, &match);

@@ -1617,6 +1617,7 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
 		ice_vc_set_default_allowlist(vf);

 		ice_vf_fdir_exit(vf);
+		ice_vf_fdir_init(vf);
 		/* clean VF control VSI when resetting VFs since it should be
 		 * setup only when VF creates its first FDIR rule.
 		 */
@@ -1747,6 +1748,7 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
 	}

 	ice_vf_fdir_exit(vf);
+	ice_vf_fdir_init(vf);
 	/* clean VF control VSI when resetting VF since it should be setup
 	 * only when VF creates its first FDIR rule.
 	 */
@@ -2021,6 +2023,10 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
 	if (ret)
 		goto err_unroll_sriov;

+	/* rearm global interrupts */
+	if (test_and_clear_bit(ICE_OICR_INTR_DIS, pf->state))
+		ice_irq_dynamic_ena(hw, NULL, NULL);
+
 	return 0;

 err_unroll_sriov:

@@ -2960,11 +2960,11 @@ static int mvpp2_rxq_init(struct mvpp2_port *port,
 	mvpp2_rxq_status_update(port, rxq->id, 0, rxq->size);

 	if (priv->percpu_pools) {
-		err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->id, 0);
+		err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->logic_rxq, 0);
 		if (err < 0)
 			goto err_free_dma;

-		err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->id, 0);
+		err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->logic_rxq, 0);
 		if (err < 0)
 			goto err_unregister_rxq_short;

@@ -5,6 +5,8 @@
  *
  */

+#include <linux/module.h>
+
 #include "otx2_common.h"
 #include "otx2_ptp.h"

@@ -480,16 +480,16 @@ static int mana_hwc_create_wq(struct hw_channel_context *hwc,
 	if (err)
 		goto out;

-	err = mana_hwc_alloc_dma_buf(hwc, q_depth, max_msg_size,
-				     &hwc_wq->msg_buf);
-	if (err)
-		goto out;
-
 	hwc_wq->hwc = hwc;
 	hwc_wq->gdma_wq = queue;
 	hwc_wq->queue_depth = q_depth;
 	hwc_wq->hwc_cq = hwc_cq;

+	err = mana_hwc_alloc_dma_buf(hwc, q_depth, max_msg_size,
+				     &hwc_wq->msg_buf);
+	if (err)
+		goto out;
+
 	*hwc_wq_ptr = hwc_wq;

 	return 0;
 out:

@@ -803,8 +803,10 @@ int nfp_cpp_area_cache_add(struct nfp_cpp *cpp, size_t size)
 		return -ENOMEM;

 	cache = kzalloc(sizeof(*cache), GFP_KERNEL);
-	if (!cache)
+	if (!cache) {
+		nfp_cpp_area_free(area);
 		return -ENOMEM;
+	}

 	cache->id = 0;
 	cache->addr = 0;

@@ -1643,6 +1643,13 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 			data_split = true;
 		}
 	} else {
+		if (unlikely(skb->len > ETH_TX_MAX_NON_LSO_PKT_LEN)) {
+			DP_ERR(edev, "Unexpected non LSO skb length = 0x%x\n", skb->len);
+			qede_free_failed_tx_pkt(txq, first_bd, 0, false);
+			qede_update_tx_producer(txq);
+			return NETDEV_TX_OK;
+		}
+
 		val |= ((skb->len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK) <<
 			 ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT);
 	}

@@ -3480,20 +3480,19 @@ static int ql_adapter_up(struct ql3_adapter *qdev)

 	spin_lock_irqsave(&qdev->hw_lock, hw_flags);

-	err = ql_wait_for_drvr_lock(qdev);
-	if (err) {
-		err = ql_adapter_initialize(qdev);
-		if (err) {
-			netdev_err(ndev, "Unable to initialize adapter\n");
-			goto err_init;
-		}
-		netdev_err(ndev, "Releasing driver lock\n");
-		ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);
-	} else {
+	if (!ql_wait_for_drvr_lock(qdev)) {
 		netdev_err(ndev, "Could not acquire driver lock\n");
+		err = -ENODEV;
 		goto err_lock;
 	}

+	err = ql_adapter_initialize(qdev);
+	if (err) {
+		netdev_err(ndev, "Unable to initialize adapter\n");
+		goto err_init;
+	}
+	ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);
+
 	spin_unlock_irqrestore(&qdev->hw_lock, hw_flags);

 	set_bit(QL_ADAPTER_UP, &qdev->flags);

@@ -1388,6 +1388,7 @@ EXPORT_SYMBOL_GPL(phylink_stop);
 * @mac_wol: true if the MAC needs to receive packets for Wake-on-Lan
 *
 * Handle a network device suspend event. There are several cases:
+*
 * - If Wake-on-Lan is not active, we can bring down the link between
 *   the MAC and PHY by calling phylink_stop().
 * - If Wake-on-Lan is active, and being handled only by the PHY, we

@@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
 	min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
 	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
+	if (max == 0)
+		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */

 	/* some devices set dwNtbOutMaxSize too low for the above default */
 	min = min(min, max);

@@ -3261,7 +3261,7 @@ vmxnet3_alloc_intr_resources(struct vmxnet3_adapter *adapter)
 #ifdef CONFIG_PCI_MSI
 	if (adapter->intr.type == VMXNET3_IT_MSIX) {
-		int i, nvec;
+		int i, nvec, nvec_allocated;

 		nvec = adapter->share_intr == VMXNET3_INTR_TXSHARE ?
 			1 : adapter->num_tx_queues;
@@ -3274,14 +3274,15 @@ vmxnet3_alloc_intr_resources(struct vmxnet3_adapter *adapter)
 		for (i = 0; i < nvec; i++)
 			adapter->intr.msix_entries[i].entry = i;

-		nvec = vmxnet3_acquire_msix_vectors(adapter, nvec);
-		if (nvec < 0)
+		nvec_allocated = vmxnet3_acquire_msix_vectors(adapter, nvec);
+		if (nvec_allocated < 0)
 			goto msix_err;

 		/* If we cannot allocate one MSIx vector per queue
 		 * then limit the number of rx queues to 1
 		 */
-		if (nvec == VMXNET3_LINUX_MIN_MSIX_VECT) {
+		if (nvec_allocated == VMXNET3_LINUX_MIN_MSIX_VECT &&
+		    nvec != VMXNET3_LINUX_MIN_MSIX_VECT) {
 			if (adapter->share_intr != VMXNET3_INTR_BUDDYSHARE
 			    || adapter->num_rx_queues != 1) {
 				adapter->share_intr = VMXNET3_INTR_TXSHARE;
@@ -3291,14 +3292,14 @@ vmxnet3_alloc_intr_resources(struct vmxnet3_adapter *adapter)
 			}
 		}

-		adapter->intr.num_intrs = nvec;
+		adapter->intr.num_intrs = nvec_allocated;
 		return;

 msix_err:
 		/* If we cannot allocate MSIx vectors use only one rx queue */
 		dev_info(&adapter->pdev->dev,
 			 "Failed to enable MSI-X, error %d. "
-			 "Limiting #rx queues to 1, try MSI.\n", nvec);
+			 "Limiting #rx queues to 1, try MSI.\n", nvec_allocated);

 		adapter->intr.type = VMXNET3_IT_MSI;
 	}

@@ -770,8 +770,6 @@ static struct sk_buff *vrf_ip6_out_direct(struct net_device *vrf_dev,

 	skb->dev = vrf_dev;

-	vrf_nf_set_untracked(skb);
-
 	err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk,
 		      skb, NULL, vrf_dev, vrf_ip6_out_direct_finish);
@@ -792,6 +790,8 @@ static struct sk_buff *vrf_ip6_out(struct net_device *vrf_dev,
 	if (rt6_need_strict(&ipv6_hdr(skb)->daddr))
 		return skb;

+	vrf_nf_set_untracked(skb);
+
 	if (qdisc_tx_is_default(vrf_dev) ||
 	    IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED)
 		return vrf_ip6_out_direct(vrf_dev, sk, skb);
@@ -1000,8 +1000,6 @@ static struct sk_buff *vrf_ip_out_direct(struct net_device *vrf_dev,

 	skb->dev = vrf_dev;

-	vrf_nf_set_untracked(skb);
-
 	err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
 		      skb, NULL, vrf_dev, vrf_ip_out_direct_finish);
@@ -1023,6 +1021,8 @@ static struct sk_buff *vrf_ip_out(struct net_device *vrf_dev,
 	    ipv4_is_lbcast(ip_hdr(skb)->daddr))
 		return skb;

+	vrf_nf_set_untracked(skb);
+
 	if (qdisc_tx_is_default(vrf_dev) ||
 	    IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
 		return vrf_ip_out_direct(vrf_dev, sk, skb);

@@ -181,9 +181,9 @@ void ipc_imem_hrtimer_stop(struct hrtimer *hr_timer)
 bool ipc_imem_ul_write_td(struct iosm_imem *ipc_imem)
 {
 	struct ipc_mem_channel *channel;
+	bool hpda_ctrl_pending = false;
 	struct sk_buff_head *ul_list;
 	bool hpda_pending = false;
-	bool forced_hpdu = false;
 	struct ipc_pipe *pipe;
 	int i;
@@ -200,15 +200,19 @@ bool ipc_imem_ul_write_td(struct iosm_imem *ipc_imem)
 			ul_list = &channel->ul_list;

 			/* Fill the transfer descriptor with the uplink buffer info. */
-			hpda_pending |= ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
+			if (!ipc_imem_check_wwan_ips(channel)) {
+				hpda_ctrl_pending |=
+					ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
 								pipe, ul_list);
-
-			/* forced HP update needed for non data channels */
-			if (hpda_pending && !ipc_imem_check_wwan_ips(channel))
-				forced_hpdu = true;
+			} else {
+				hpda_pending |=
+					ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
+								pipe, ul_list);
+			}
 		}
 	}

-	if (forced_hpdu) {
+	/* forced HP update needed for non data channels */
+	if (hpda_ctrl_pending) {
 		hpda_pending = false;
 		ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
 					      IPC_HP_UL_WRITE_TD);
@@ -527,6 +531,9 @@ static void ipc_imem_run_state_worker(struct work_struct *instance)
 		return;
 	}

+	if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag))
+		ipc_devlink_deinit(ipc_imem->ipc_devlink);
+
 	if (!ipc_imem_setup_cp_mux_cap_init(ipc_imem, &mux_cfg))
 		ipc_imem->mux = ipc_mux_init(&mux_cfg, ipc_imem);
@@ -1167,7 +1174,7 @@ void ipc_imem_cleanup(struct iosm_imem *ipc_imem)
 		ipc_port_deinit(ipc_imem->ipc_port);
 	}

-	if (ipc_imem->ipc_devlink)
+	if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag))
 		ipc_devlink_deinit(ipc_imem->ipc_devlink);

 	ipc_imem_device_ipc_uninit(ipc_imem);
@@ -1263,7 +1270,6 @@ struct iosm_imem *ipc_imem_init(struct iosm_pcie *pcie, unsigned int device_id,
 	ipc_imem->pci_device_id = device_id;

-	ipc_imem->ev_cdev_write_pending = false;
 	ipc_imem->cp_version = 0;
 	ipc_imem->device_sleep = IPC_HOST_SLEEP_ENTER_SLEEP;
@@ -1331,6 +1337,8 @@ struct iosm_imem *ipc_imem_init(struct iosm_pcie *pcie, unsigned int device_id,
 		if (ipc_flash_link_establish(ipc_imem))
 			goto devlink_channel_fail;
+
+		set_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag);
 	}
 	return ipc_imem;
 devlink_channel_fail:

@@ -101,6 +101,7 @@ struct ipc_chnl_cfg;
 #define IOSM_CHIP_INFO_SIZE_MAX 100

 #define FULLY_FUNCTIONAL 0
+#define IOSM_DEVLINK_INIT 1

 /* List of the supported UL/DL pipes. */
 enum ipc_mem_pipes {
@@ -335,8 +336,6 @@ enum ipc_phase {
 *				process the irq actions.
 * @flag:			Flag to monitor the state of driver
 * @td_update_timer_suspended:	if true then td update timer suspend
-* @ev_cdev_write_pending:	0 means inform the IPC tasklet to pass
-*				the accumulated uplink buffers to CP.
 * @ev_mux_net_transmit_pending:0 means inform the IPC tasklet to pass
 * @reset_det_n:			Reset detect flag
 * @pcie_wake_n:			Pcie wake flag
@@ -374,7 +373,6 @@ struct iosm_imem {
 	u8 ev_irq_pending[IPC_IRQ_VECTORS];
 	unsigned long flag;
 	u8 td_update_timer_suspended:1,
-	   ev_cdev_write_pending:1,
 	   ev_mux_net_transmit_pending:1,
 	   reset_det_n:1,
 	   pcie_wake_n:1;

@@ -41,7 +41,6 @@ void ipc_imem_sys_wwan_close(struct iosm_imem *ipc_imem, int if_id,
 static int ipc_imem_tq_cdev_write(struct iosm_imem *ipc_imem, int arg,
 				  void *msg, size_t size)
 {
-	ipc_imem->ev_cdev_write_pending = false;
 	ipc_imem_ul_send(ipc_imem);

 	return 0;
@@ -50,11 +49,6 @@ static int ipc_imem_tq_cdev_write(struct iosm_imem *ipc_imem, int arg,
 /* Through tasklet to do sio write. */
 static int ipc_imem_call_cdev_write(struct iosm_imem *ipc_imem)
 {
-	if (ipc_imem->ev_cdev_write_pending)
-		return -1;
-
-	ipc_imem->ev_cdev_write_pending = true;
-
 	return ipc_task_queue_send_task(ipc_imem, ipc_imem_tq_cdev_write, 0,
 					NULL, 0, false);
 }
@@ -450,6 +444,7 @@ void ipc_imem_sys_devlink_close(struct iosm_devlink *ipc_devlink)
 	/* Release the pipe resources */
 	ipc_imem_pipe_cleanup(ipc_imem, &channel->ul_pipe);
 	ipc_imem_pipe_cleanup(ipc_imem, &channel->dl_pipe);
+	ipc_imem->nr_of_channels--;
 }

 void ipc_imem_sys_devlink_notify_rx(struct iosm_devlink *ipc_devlink,

@@ -19,6 +19,7 @@
 #include <linux/platform_device.h>
 #include <linux/phy/phy.h>
 #include <linux/regulator/consumer.h>
+#include <linux/module.h>

 #include "pcie-designware.h"

@@ -18,6 +18,7 @@
 #include <linux/pm_domain.h>
 #include <linux/regmap.h>
 #include <linux/reset.h>
+#include <linux/module.h>

 #include "pcie-designware.h"

@@ -10,6 +10,7 @@
 */

 #include <linux/platform_device.h>
+#include <linux/slab.h>

 #include "core.h"
 #include "drd.h"
 #include "host-export.h"

@@ -732,6 +732,7 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
 struct bpf_trampoline *bpf_trampoline_get(u64 key,
 					  struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
+int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs);
 #define BPF_DISPATCHER_INIT(_name) {				\
 	.mutex = __MUTEX_INITIALIZER(_name.mutex),		\
 	.func = &_name##_func,					\
@@ -1352,28 +1353,16 @@ extern struct mutex bpf_stats_enabled_mutex;
 * kprobes, tracepoints) to prevent deadlocks on map operations as any of
 * these events can happen inside a region which holds a map bucket lock
 * and can deadlock on it.
- *
- * Use the preemption safe inc/dec variants on RT because migrate disable
- * is preemptible on RT and preemption in the middle of the RMW operation
- * might lead to inconsistent state. Use the raw variants for non RT
- * kernels as migrate_disable() maps to preempt_disable() so the slightly
- * more expensive save operation can be avoided.
 */
 static inline void bpf_disable_instrumentation(void)
 {
 	migrate_disable();
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_inc(bpf_prog_active);
-	else
-		__this_cpu_inc(bpf_prog_active);
+	this_cpu_inc(bpf_prog_active);
 }

 static inline void bpf_enable_instrumentation(void)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_dec(bpf_prog_active);
-	else
-		__this_cpu_dec(bpf_prog_active);
+	this_cpu_dec(bpf_prog_active);
 	migrate_enable();
 }

@@ -245,7 +245,10 @@ struct kfunc_btf_id_set {
 	struct module *owner;
 };

-struct kfunc_btf_id_list;
+struct kfunc_btf_id_list {
+	struct list_head list;
+	struct mutex mutex;
+};

 #ifdef CONFIG_DEBUG_INFO_BTF_MODULES
 void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
@@ -254,6 +257,9 @@ void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
 			       struct kfunc_btf_id_set *s);
 bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist, u32 kfunc_id,
 			      struct module *owner);
+
+extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list;
+extern struct kfunc_btf_id_list prog_test_kfunc_list;
 #else
 static inline void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
 					     struct kfunc_btf_id_set *s)
@@ -268,13 +274,13 @@ static inline bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist,
 {
 	return false;
 }
+
+static struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list __maybe_unused;
+static struct kfunc_btf_id_list prog_test_kfunc_list __maybe_unused;
 #endif

 #define DEFINE_KFUNC_BTF_ID_SET(set, name)                                     \
 	struct kfunc_btf_id_set name = { LIST_HEAD_INIT(name.list), (set),     \
 					 THIS_MODULE }

-extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list;
-extern struct kfunc_btf_id_list prog_test_kfunc_list;
-
 #endif

@@ -3,7 +3,6 @@
 #define _LINUX_CACHEINFO_H

 #include <linux/bitops.h>
-#include <linux/cpu.h>
 #include <linux/cpumask.h>
 #include <linux/smp.h>

@@ -18,6 +18,7 @@
 #include <linux/klist.h>
 #include <linux/pm.h>
 #include <linux/device/bus.h>
+#include <linux/module.h>

 /**
  * enum probe_type - device driver probe type to try

@@ -6,6 +6,7 @@
 #define __LINUX_FILTER_H__

 #include <linux/atomic.h>
+#include <linux/bpf.h>
 #include <linux/refcount.h>
 #include <linux/compat.h>
 #include <linux/skbuff.h>
@@ -26,7 +27,6 @@
 #include <asm/byteorder.h>

 #include <uapi/linux/filter.h>
-#include <uapi/linux/bpf.h>

 struct sk_buff;
 struct sock;
@@ -640,9 +640,6 @@ static __always_inline u32 bpf_prog_run(const struct bpf_prog *prog, const void
 * This uses migrate_disable/enable() explicitly to document that the
 * invocation of a BPF program does not require reentrancy protection
 * against a BPF program which is invoked from a preempting task.
- *
- * For non RT enabled kernels migrate_disable/enable() maps to
- * preempt_disable/enable(), i.e. it disables also preemption.
 */
 static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
 					  const void *ctx)

@@ -538,11 +538,12 @@ struct macsec_ops;
 * @mac_managed_pm: Set true if MAC driver takes of suspending/resuming PHY
 * @state: State of the PHY for management purposes
 * @dev_flags: Device-specific flags used by the PHY driver.
-*		Bits [15:0] are free to use by the PHY driver to communicate
-*		driver specific behavior.
-*		Bits [23:16] are currently reserved for future use.
-*		Bits [31:24] are reserved for defining generic
-*		PHY driver behavior.
+*
+*      - Bits [15:0] are free to use by the PHY driver to communicate
+*        driver specific behavior.
+*      - Bits [23:16] are currently reserved for future use.
+*      - Bits [31:24] are reserved for defining generic
+*        PHY driver behavior.
 * @irq: IRQ number of the PHY's interrupt (-1 if none)
 * @phy_timer: The timer for handling the state machine
 * @phylink: Pointer to phylink instance for this PHY

@@ -126,7 +126,7 @@ struct tlb_slave_info {
 struct alb_bond_info {
 	struct tlb_client_info	*tx_hashtbl; /* Dynamically allocated */
 	u32			unbalanced_load;
-	int			tx_rebalance_counter;
+	atomic_t		tx_rebalance_counter;
 	int			lp_counter;
 	/* -------- rlb parameters -------- */
 	int rlb_enabled;

@@ -136,6 +136,19 @@ static inline void sk_mark_napi_id(struct sock *sk, const struct sk_buff *skb)
 	sk_rx_queue_update(sk, skb);
 }

+/* Variant of sk_mark_napi_id() for passive flow setup,
+ * as sk->sk_napi_id and sk->sk_rx_queue_mapping content
+ * needs to be set.
+ */
+static inline void sk_mark_napi_id_set(struct sock *sk,
+				       const struct sk_buff *skb)
+{
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	WRITE_ONCE(sk->sk_napi_id, skb->napi_id);
+#endif
+	sk_rx_queue_set(sk, skb);
+}
+
 static inline void __sk_mark_napi_id_once(struct sock *sk, unsigned int napi_id)
 {
 #ifdef CONFIG_NET_RX_BUSY_POLL

@@ -276,14 +276,14 @@ static inline bool nf_is_loopback_packet(const struct sk_buff *skb)
 /* jiffies until ct expires, 0 if already expired */
 static inline unsigned long nf_ct_expires(const struct nf_conn *ct)
 {
-	s32 timeout = ct->timeout - nfct_time_stamp;
+	s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp;

 	return timeout > 0 ? timeout : 0;
 }

 static inline bool nf_ct_is_expired(const struct nf_conn *ct)
 {
-	return (__s32)(ct->timeout - nfct_time_stamp) <= 0;
+	return (__s32)(READ_ONCE(ct->timeout) - nfct_time_stamp) <= 0;
 }

 /* use after obtaining a reference count */
@@ -302,7 +302,7 @@ static inline bool nf_ct_should_gc(const struct nf_conn *ct)
 static inline void nf_ct_offload_timeout(struct nf_conn *ct)
 {
 	if (nf_ct_expires(ct) < NF_CT_DAY / 2)
-		ct->timeout = nfct_time_stamp + NF_CT_DAY;
+		WRITE_ONCE(ct->timeout, nfct_time_stamp + NF_CT_DAY);
 }

 struct kernel_param;

@@ -6346,11 +6346,6 @@ BTF_ID_LIST_GLOBAL_SINGLE(btf_task_struct_ids, struct, task_struct)

 /* BTF ID set registration API for modules */

-struct kfunc_btf_id_list {
-	struct list_head list;
-	struct mutex mutex;
-};
-
 #ifdef CONFIG_DEBUG_INFO_BTF_MODULES

 void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
@@ -6376,8 +6371,6 @@ bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist, u32 kfunc_id,
 {
 	struct kfunc_btf_id_set *s;

-	if (!owner)
-		return false;
 	mutex_lock(&klist->mutex);
 	list_for_each_entry(s, &klist->list, list) {
 		if (s->owner == owner && btf_id_set_contains(s->set, kfunc_id)) {
@@ -6389,8 +6382,6 @@ bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist, u32 kfunc_id,
 	return false;
 }

-#endif
-
 #define DEFINE_KFUNC_BTF_ID_LIST(name)                                         \
 	struct kfunc_btf_id_list name = { LIST_HEAD_INIT(name.list),           \
 					  __MUTEX_INITIALIZER(name.mutex) };   \
@@ -6398,3 +6389,5 @@ bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist, u32 kfunc_id,
 DEFINE_KFUNC_BTF_ID_LIST(bpf_tcp_ca_kfunc_list);
 DEFINE_KFUNC_BTF_ID_LIST(prog_test_kfunc_list);
+
+#endif

@@ -8422,7 +8422,7 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,

 	new_range = dst_reg->off;
 	if (range_right_open)
-		new_range--;
+		new_range++;

 	/* Examples for register markings:
 	 *

@@ -316,6 +316,7 @@ config DEBUG_INFO_BTF
 	bool "Generate BTF typeinfo"
 	depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED
 	depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST
+	depends on BPF_SYSCALL
 	help
 	  Generate deduplicated BTF type information from DWARF debug info.
 	  Turning this on expects presence of pahole tool, which will convert

@@ -13,6 +13,7 @@
 #include <linux/mmu_notifier.h>
 #include <linux/page_idle.h>
 #include <linux/pagewalk.h>
+#include <linux/sched/mm.h>

 #include "prmtv-common.h"

@@ -35,6 +35,7 @@
 #include <linux/memblock.h>
 #include <linux/compaction.h>
 #include <linux/rmap.h>
+#include <linux/module.h>

 #include <asm/tlbflush.h>

@@ -30,6 +30,7 @@
 #include <linux/swap_slots.h>
 #include <linux/cpu.h>
 #include <linux/cpumask.h>
+#include <linux/slab.h>
 #include <linux/vmalloc.h>
 #include <linux/mutex.h>
 #include <linux/mm.h>

@@ -4110,14 +4110,6 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
 		return err;
 	}

-	if (info->attrs[DEVLINK_ATTR_NETNS_PID] ||
-	    info->attrs[DEVLINK_ATTR_NETNS_FD] ||
-	    info->attrs[DEVLINK_ATTR_NETNS_ID]) {
-		dest_net = devlink_netns_get(skb, info);
-		if (IS_ERR(dest_net))
-			return PTR_ERR(dest_net);
-	}
-
 	if (info->attrs[DEVLINK_ATTR_RELOAD_ACTION])
 		action = nla_get_u8(info->attrs[DEVLINK_ATTR_RELOAD_ACTION]);
 	else
@@ -4160,6 +4152,14 @@ static int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
 			return -EINVAL;
 		}
 	}
+
+	if (info->attrs[DEVLINK_ATTR_NETNS_PID] ||
+	    info->attrs[DEVLINK_ATTR_NETNS_FD] ||
+	    info->attrs[DEVLINK_ATTR_NETNS_ID]) {
+		dest_net = devlink_netns_get(skb, info);
+		if (IS_ERR(dest_net))
+			return PTR_ERR(dest_net);
+	}
+
 	err = devlink_reload(devlink, dest_net, action, limit, &actions_performed, info->extack);

 	if (dest_net)

@@ -763,11 +763,10 @@ struct pneigh_entry * pneigh_lookup(struct neigh_table *tbl,

 	ASSERT_RTNL();

-	n = kmalloc(sizeof(*n) + key_len, GFP_KERNEL);
+	n = kzalloc(sizeof(*n) + key_len, GFP_KERNEL);
 	if (!n)
 		goto out;

-	n->protocol = 0;
 	write_pnet(&n->net, net);
 	memcpy(n->key, pkey, key_len);
 	n->dev = dev;

@@ -1124,6 +1124,8 @@ void sk_psock_start_strp(struct sock *sk, struct sk_psock *psock)

 void sk_psock_stop_strp(struct sock *sk, struct sk_psock *psock)
 {
+	psock_set_prog(&psock->progs.stream_parser, NULL);
+
 	if (!psock->saved_data_ready)
 		return;
@@ -1212,6 +1214,9 @@ void sk_psock_start_verdict(struct sock *sk, struct sk_psock *psock)

 void sk_psock_stop_verdict(struct sock *sk, struct sk_psock *psock)
 {
+	psock_set_prog(&psock->progs.stream_verdict, NULL);
+	psock_set_prog(&psock->progs.skb_verdict, NULL);
+
 	if (!psock->saved_data_ready)
 		return;

@@ -167,8 +167,11 @@ static void sock_map_del_link(struct sock *sk,
 		write_lock_bh(&sk->sk_callback_lock);
 		if (strp_stop)
 			sk_psock_stop_strp(sk, psock);
-		else
+		if (verdict_stop)
 			sk_psock_stop_verdict(sk, psock);
+
+		if (psock->psock_update_sk_prot)
+			psock->psock_update_sk_prot(sk, psock, false);
 		write_unlock_bh(&sk->sk_callback_lock);
 	}
 }
@@ -282,6 +285,12 @@ static int sock_map_link(struct bpf_map *map, struct sock *sk)

 	if (msg_parser)
 		psock_set_prog(&psock->progs.msg_parser, msg_parser);
+	if (stream_parser)
+		psock_set_prog(&psock->progs.stream_parser, stream_parser);
+	if (stream_verdict)
+		psock_set_prog(&psock->progs.stream_verdict, stream_verdict);
+	if (skb_verdict)
+		psock_set_prog(&psock->progs.skb_verdict, skb_verdict);

 	ret = sock_map_init_proto(sk, psock);
 	if (ret < 0)
@@ -292,14 +301,10 @@ static int sock_map_link(struct bpf_map *map, struct sock *sk)
 		ret = sk_psock_init_strp(sk, psock);
 		if (ret)
 			goto out_unlock_drop;
-		psock_set_prog(&psock->progs.stream_verdict, stream_verdict);
-		psock_set_prog(&psock->progs.stream_parser, stream_parser);
 		sk_psock_start_strp(sk, psock);
 	} else if (!stream_parser && stream_verdict && !psock->saved_data_ready) {
-		psock_set_prog(&psock->progs.stream_verdict, stream_verdict);
 		sk_psock_start_verdict(sk,psock);
 	} else if (!stream_verdict && skb_verdict && !psock->saved_data_ready) {
-		psock_set_prog(&psock->progs.skb_verdict, skb_verdict);
 		sk_psock_start_verdict(sk, psock);
 	}
 	write_unlock_bh(&sk->sk_callback_lock);

@@ -40,7 +40,8 @@ int ethnl_ops_begin(struct net_device *dev)
 	if (dev->dev.parent)
 		pm_runtime_get_sync(dev->dev.parent);

-	if (!netif_device_present(dev)) {
+	if (!netif_device_present(dev) ||
+	    dev->reg_state == NETREG_UNREGISTERING) {
 		ret = -ENODEV;
 		goto err;
 	}

@@ -721,7 +721,7 @@ static struct request_sock *inet_reqsk_clone(struct request_sock *req,

 	sk_node_init(&nreq_sk->sk_node);
 	nreq_sk->sk_tx_queue_mapping = req_sk->sk_tx_queue_mapping;
-#ifdef CONFIG_XPS
+#ifdef CONFIG_SOCK_RX_QUEUE_MAPPING
 	nreq_sk->sk_rx_queue_mapping = req_sk->sk_rx_queue_mapping;
 #endif
 	nreq_sk->sk_incoming_cpu = req_sk->sk_incoming_cpu;

@@ -829,8 +829,8 @@ int tcp_child_process(struct sock *parent, struct sock *child,
 	int ret = 0;
 	int state = child->sk_state;

-	/* record NAPI ID of child */
-	sk_mark_napi_id(child, skb);
+	/* record sk_napi_id and sk_rx_queue_mapping of child. */
+	sk_mark_napi_id_set(child, skb);

 	tcp_segs_in(tcp_sk(child), skb);
 	if (!sock_owned_by_user(child)) {

@@ -916,7 +916,7 @@ static int udp_send_skb(struct sk_buff *skb, struct flowi4 *fl4,
 			kfree_skb(skb);
 			return -EINVAL;
 		}
-		if (skb->len > cork->gso_size * UDP_MAX_SEGMENTS) {
+		if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
 			kfree_skb(skb);
 			return -EINVAL;
 		}

@@ -161,6 +161,14 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
 		hdr->hop_limit = ip6_dst_hoplimit(skb_dst(skb));

 		memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+
+		/* the control block has been erased, so we have to set the
+		 * iif once again.
+		 * We read the receiving interface index directly from the
+		 * skb->skb_iif as it is done in the IPv4 receiving path (i.e.:
+		 * ip_rcv_core(...)).
+		 */
+		IP6CB(skb)->iif = skb->skb_iif;
 	}

 	hdr->nexthdr = NEXTHDR_ROUTING;

@@ -684,7 +684,7 @@ bool nf_ct_delete(struct nf_conn *ct, u32 portid, int report)

 	tstamp = nf_conn_tstamp_find(ct);
 	if (tstamp) {
-		s32 timeout = ct->timeout - nfct_time_stamp;
+		s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp;

 		tstamp->stop = ktime_get_real_ns();
 		if (timeout < 0)
@@ -1036,7 +1036,7 @@ static int nf_ct_resolve_clash_harder(struct sk_buff *skb, u32 repl_idx)
 	}

 	/* We want the clashing entry to go away real soon: 1 second timeout. */
-	loser_ct->timeout = nfct_time_stamp + HZ;
+	WRITE_ONCE(loser_ct->timeout, nfct_time_stamp + HZ);

 	/* IPS_NAT_CLASH removes the entry automatically on the first
 	 * reply. Also prevents UDP tracker from moving the entry to
@@ -1560,7 +1560,7 @@ __nf_conntrack_alloc(struct net *net,
 	/* save hash for reusing when confirming */
 	*(unsigned long *)(&ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev) = hash;
 	ct->status = 0;
-	ct->timeout = 0;
+	WRITE_ONCE(ct->timeout, 0);
 	write_pnet(&ct->ct_net, net);
 	memset(&ct->__nfct_init_offset, 0,
 	       offsetof(struct nf_conn, proto) -

@@ -1998,7 +1998,7 @@ static int ctnetlink_change_timeout(struct nf_conn *ct,
 	if (timeout > INT_MAX)
 		timeout = INT_MAX;
-	ct->timeout = nfct_time_stamp + (u32)timeout;
+	WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout);

 	if (test_bit(IPS_DYING_BIT, &ct->status))
 		return -ETIME;

@@ -201,8 +201,8 @@ static void flow_offload_fixup_ct_timeout(struct nf_conn *ct)
 	if (timeout < 0)
 		timeout = 0;

-	if (nf_flow_timeout_delta(ct->timeout) > (__s32)timeout)
-		ct->timeout = nfct_time_stamp + timeout;
+	if (nf_flow_timeout_delta(READ_ONCE(ct->timeout)) > (__s32)timeout)
+		WRITE_ONCE(ct->timeout, nfct_time_stamp + timeout);
 }

 static void flow_offload_fixup_ct_state(struct nf_conn *ct)

@@ -387,7 +387,7 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
 	struct net_device *indev;
 	struct net_device *outdev;
 	struct nf_conn *ct = NULL;
-	enum ip_conntrack_info ctinfo;
+	enum ip_conntrack_info ctinfo = 0;
 	struct nfnl_ct_hook *nfnl_ct;
 	bool csum_verify;
 	char *secdata = NULL;

@@ -236,7 +236,7 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr,

 	tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff, &tcphdr_len);
 	if (!tcph)
-		return;
+		goto err;

 	opt = (u8 *)tcph;
 	for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) {
@@ -251,16 +251,16 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr,
 			continue;

 		if (i + optl > tcphdr_len || priv->len + priv->offset > optl)
-			return;
+			goto err;

 		if (skb_ensure_writable(pkt->skb,
 					nft_thoff(pkt) + i + priv->len))
-			return;
+			goto err;

 		tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff,
 					      &tcphdr_len);
 		if (!tcph)
-			return;
+			goto err;

 		offset = i + priv->offset;
@@ -303,6 +303,9 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr,
 		return;
 	}
+	return;
+err:
+	regs->verdict.code = NFT_BREAK;
 }

 static void nft_exthdr_sctp_eval(const struct nft_expr *expr,

@@ -886,7 +886,7 @@ static int nft_pipapo_avx2_lookup_8b_6(unsigned long *map, unsigned long *fill,
 			NFT_PIPAPO_AVX2_BUCKET_LOAD8(4, lt, 4, pkt[4], bsize);
 			NFT_PIPAPO_AVX2_AND(5, 0, 1);
-			NFT_PIPAPO_AVX2_BUCKET_LOAD8(6, lt, 6, pkt[5], bsize);
+			NFT_PIPAPO_AVX2_BUCKET_LOAD8(6, lt, 5, pkt[5], bsize);
 			NFT_PIPAPO_AVX2_AND(7, 2, 3);
 			/* Stall */

@@ -636,8 +636,10 @@ static int nfc_genl_dump_devices_done(struct netlink_callback *cb)
 {
 	struct class_dev_iter *iter = (struct class_dev_iter *) cb->args[0];

-	nfc_device_iter_exit(iter);
-	kfree(iter);
+	if (iter) {
+		nfc_device_iter_exit(iter);
+		kfree(iter);
+	}

 	return 0;
 }
@@ -1392,8 +1394,10 @@ static int nfc_genl_dump_ses_done(struct netlink_callback *cb)
 {
 	struct class_dev_iter *iter = (struct class_dev_iter *) cb->args[0];

-	nfc_device_iter_exit(iter);
-	kfree(iter);
+	if (iter) {
+		nfc_device_iter_exit(iter);
+		kfree(iter);
+	}

 	return 0;
 }

@@ -531,6 +531,7 @@ static void fq_pie_destroy(struct Qdisc *sch)
 	struct fq_pie_sched_data *q = qdisc_priv(sch);

 	tcf_block_put(q->block);
+	q->p_params.tupdate = 0;
 	del_timer_sync(&q->adapt_timer);
 	kvfree(q->flows);
 }

@@ -83,6 +83,7 @@ struct btf_id {
 		int	 cnt;
 	};
 	int		 addr_cnt;
+	bool		 is_set;
 	Elf64_Addr	 addr[ADDR_CNT];
 };
@@ -451,8 +452,10 @@ static int symbols_collect(struct object *obj)
 			 * in symbol's size, together with 'cnt' field hence
 			 * that - 1.
 			 */
-			if (id)
+			if (id) {
 				id->cnt = sym.st_size / sizeof(int) - 1;
+				id->is_set = true;
+			}
 		} else {
 			pr_err("FAILED unsupported prefix %s\n", prefix);
 			return -1;
@@ -568,9 +571,8 @@ static int id_patch(struct object *obj, struct btf_id *id)
 	int *ptr = data->d_buf;
 	int i;

-	if (!id->id) {
+	if (!id->id && !id->is_set)
 		pr_err("WARN: resolve_btfids: unresolved symbol %s\n", id->name);
-	}

 	for (i = 0; i < id->addr_cnt; i++) {
 		unsigned long addr = id->addr[i];

@ -35,7 +35,7 @@
.prog_type = BPF_PROG_TYPE_XDP, .prog_type = BPF_PROG_TYPE_XDP,
}, },
{ {
"XDP pkt read, pkt_data' > pkt_end, good access", "XDP pkt read, pkt_data' > pkt_end, corner case, good access",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
@ -87,6 +87,41 @@
.prog_type = BPF_PROG_TYPE_XDP, .prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
}, },
{
"XDP pkt read, pkt_data' > pkt_end, corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' > pkt_end, corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 offset is outside of the packet",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{ {
"XDP pkt read, pkt_end > pkt_data', good access", "XDP pkt read, pkt_end > pkt_data', good access",
.insns = { .insns = {
@ -106,16 +141,16 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
}, },
{ {
"XDP pkt read, pkt_end > pkt_data', bad access 1", "XDP pkt read, pkt_end > pkt_data', corner case -1, bad access",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)), offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1), BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
BPF_MOV64_IMM(BPF_REG_0, 0), BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
@ -142,6 +177,42 @@
.prog_type = BPF_PROG_TYPE_XDP, .prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
}, },
{
"XDP pkt read, pkt_end > pkt_data', corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end > pkt_data', corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' < pkt_end, good access",
.insns = {
@@ -161,16 +232,16 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' < pkt_end, corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
@@ -198,7 +269,43 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' < pkt_end, corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' < pkt_end, corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end < pkt_data', corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
@@ -250,6 +357,41 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end < pkt_data', corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end < pkt_data', corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 offset is outside of the packet",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' >= pkt_end, good access",
.insns = {
@@ -268,15 +410,15 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' >= pkt_end, corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
@@ -304,7 +446,41 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' >= pkt_end, corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' >= pkt_end, corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end >= pkt_data', corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
@@ -359,7 +535,44 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end >= pkt_data', corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end >= pkt_data', corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 offset is outside of the packet",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' <= pkt_end, corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
@@ -413,6 +626,43 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' <= pkt_end, corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data' <= pkt_end, corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 offset is outside of the packet",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end <= pkt_data', good access",
.insns = {
@@ -431,15 +681,15 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end <= pkt_data', corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
@@ -467,7 +717,41 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end <= pkt_data', corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_end <= pkt_data', corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
offsetof(struct xdp_md, data_end)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' > pkt_data, corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
@@ -519,6 +803,41 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' > pkt_data, corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' > pkt_data, corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 offset is outside of the packet",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data > pkt_meta', good access",
.insns = {
@@ -538,16 +857,16 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data > pkt_meta', corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
@@ -574,6 +893,42 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data > pkt_meta', corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data > pkt_meta', corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' < pkt_data, good access",
.insns = {
@@ -593,16 +948,16 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' < pkt_data, corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
@@ -630,7 +985,43 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' < pkt_data, corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' < pkt_data, corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data < pkt_meta', corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
@@ -682,6 +1073,41 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data < pkt_meta', corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data < pkt_meta', corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 offset is outside of the packet",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' >= pkt_data, good access",
.insns = {
@@ -700,15 +1126,15 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' >= pkt_data, corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
@@ -736,7 +1162,41 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' >= pkt_data, corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' >= pkt_data, corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data >= pkt_meta', corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
@@ -791,7 +1251,44 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data >= pkt_meta', corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data >= pkt_meta', corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 offset is outside of the packet",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' <= pkt_data, corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
@@ -845,6 +1342,43 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' <= pkt_data, corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_meta' <= pkt_data, corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 offset is outside of the packet",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data <= pkt_meta', good access",
.insns = {
@@ -863,15 +1397,15 @@
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data <= pkt_meta', corner case -1, bad access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
@@ -898,3 +1432,37 @@
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data <= pkt_meta', corner case, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"XDP pkt read, pkt_data <= pkt_meta', corner case +1, good access",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct xdp_md, data_meta)),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
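
The new "corner case" tests pin down the exact boundary the off-by-two fix restores: for an 8-byte load through a pointer at pkt_data + 7, the check "pkt_end > pkt_data + 7" guarantees exactly 8 readable bytes, while + 6 leaves one byte short and must be rejected. As an illustrative sketch only (the program and section names are invented, not part of this patch), the same bound written as XDP C looks like:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_corner_case(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	/* data_end > data + 7 implies at least 8 readable bytes,
	 * so the 8-byte load below is provably in bounds. */
	if (data_end > data + 7)
		return *(__u64 *)data ? XDP_DROP : XDP_PASS;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";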


@@ -4077,3 +4077,11 @@ cleanup 2>/dev/null
printf "\nTests passed: %3d\n" ${nsuccess}
printf "Tests failed: %3d\n" ${nfail}
if [ $nfail -ne 0 ]; then
exit 1 # KSFT_FAIL
elif [ $nsuccess -eq 0 ]; then
exit $ksft_skip
fi
exit 0 # KSFT_PASS
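
The hunk above makes the script report the standard kselftest codes instead of always exiting 0. A minimal sketch of the same convention (the constants match kselftest.h: 0 pass, 1 fail, 4 skip; the tallying function itself is hypothetical):

#define KSFT_FAIL 1
#define KSFT_SKIP 4

/* Mirror of the shell logic: any failure wins, all-skipped reports skip. */
static int ksft_exit_code(int nsuccess, int nfail)
{
	if (nfail != 0)
		return KSFT_FAIL;
	if (nsuccess == 0)
		return KSFT_SKIP;
	return 0; /* KSFT_PASS */
}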


@@ -444,24 +444,63 @@ fib_rp_filter_test()
setup
set -e
ip netns add ns2
ip netns set ns2 auto
ip -netns ns2 link set dev lo up
$IP link add name veth1 type veth peer name veth2
$IP link set dev veth2 netns ns2
$IP address add 192.0.2.1/24 dev veth1
ip -netns ns2 address add 192.0.2.1/24 dev veth2
$IP link set dev veth1 up
ip -netns ns2 link set dev veth2 up
$IP link set dev lo address 52:54:00:6a:c7:5e
$IP link set dev veth1 address 52:54:00:6a:c7:5e
ip -netns ns2 link set dev lo address 52:54:00:6a:c7:5e
ip -netns ns2 link set dev veth2 address 52:54:00:6a:c7:5e
# 1. (ns2) redirect lo's egress to veth2's egress
ip netns exec ns2 tc qdisc add dev lo parent root handle 1: fq_codel
ip netns exec ns2 tc filter add dev lo parent 1: protocol arp basic \
action mirred egress redirect dev veth2
ip netns exec ns2 tc filter add dev lo parent 1: protocol ip basic \
action mirred egress redirect dev veth2
# 2. (ns1) redirect veth1's ingress to lo's ingress
$NS_EXEC tc qdisc add dev veth1 ingress
$NS_EXEC tc filter add dev veth1 ingress protocol arp basic \
action mirred ingress redirect dev lo
$NS_EXEC tc filter add dev veth1 ingress protocol ip basic \
action mirred ingress redirect dev lo
# 3. (ns1) redirect lo's egress to veth1's egress
$NS_EXEC tc qdisc add dev lo parent root handle 1: fq_codel
$NS_EXEC tc filter add dev lo parent 1: protocol arp basic \
action mirred egress redirect dev veth1
$NS_EXEC tc filter add dev lo parent 1: protocol ip basic \
action mirred egress redirect dev veth1
# 4. (ns2) redirect veth2's ingress to lo's ingress
ip netns exec ns2 tc qdisc add dev veth2 ingress
ip netns exec ns2 tc filter add dev veth2 ingress protocol arp basic \
action mirred ingress redirect dev lo
ip netns exec ns2 tc filter add dev veth2 ingress protocol ip basic \
action mirred ingress redirect dev lo
$NS_EXEC sysctl -qw net.ipv4.conf.all.rp_filter=1
$NS_EXEC sysctl -qw net.ipv4.conf.all.accept_local=1
$NS_EXEC sysctl -qw net.ipv4.conf.all.route_localnet=1
ip netns exec ns2 sysctl -qw net.ipv4.conf.all.rp_filter=1
ip netns exec ns2 sysctl -qw net.ipv4.conf.all.accept_local=1
ip netns exec ns2 sysctl -qw net.ipv4.conf.all.route_localnet=1
set +e
run_cmd "ip netns exec ns2 ping -w1 -c1 192.0.2.1"
log_test $? 0 "rp_filter passes local packets"
run_cmd "ip netns exec ns2 ping -w1 -c1 127.0.0.1"
log_test $? 0 "rp_filter passes loopback packets"
cleanup


@@ -31,6 +31,8 @@ struct tls_crypto_info_keys {
struct tls12_crypto_info_chacha20_poly1305 chacha20;
struct tls12_crypto_info_sm4_gcm sm4gcm;
struct tls12_crypto_info_sm4_ccm sm4ccm;
struct tls12_crypto_info_aes_ccm_128 aesccm128;
struct tls12_crypto_info_aes_gcm_256 aesgcm256;
};
size_t len;
};
@@ -61,6 +63,16 @@ static void tls_crypto_info_init(uint16_t tls_version, uint16_t cipher_type,
tls12->sm4ccm.info.version = tls_version;
tls12->sm4ccm.info.cipher_type = cipher_type;
break;
case TLS_CIPHER_AES_CCM_128:
tls12->len = sizeof(struct tls12_crypto_info_aes_ccm_128);
tls12->aesccm128.info.version = tls_version;
tls12->aesccm128.info.cipher_type = cipher_type;
break;
case TLS_CIPHER_AES_GCM_256:
tls12->len = sizeof(struct tls12_crypto_info_aes_gcm_256);
tls12->aesgcm256.info.version = tls_version;
tls12->aesgcm256.info.cipher_type = cipher_type;
break;
default:
break;
}
@@ -261,6 +273,30 @@ FIXTURE_VARIANT_ADD(tls, 13_sm4_ccm)
.cipher_type = TLS_CIPHER_SM4_CCM,
};
FIXTURE_VARIANT_ADD(tls, 12_aes_ccm)
{
.tls_version = TLS_1_2_VERSION,
.cipher_type = TLS_CIPHER_AES_CCM_128,
};
FIXTURE_VARIANT_ADD(tls, 13_aes_ccm)
{
.tls_version = TLS_1_3_VERSION,
.cipher_type = TLS_CIPHER_AES_CCM_128,
};
FIXTURE_VARIANT_ADD(tls, 12_aes_gcm_256)
{
.tls_version = TLS_1_2_VERSION,
.cipher_type = TLS_CIPHER_AES_GCM_256,
};
FIXTURE_VARIANT_ADD(tls, 13_aes_gcm_256)
{
.tls_version = TLS_1_3_VERSION,
.cipher_type = TLS_CIPHER_AES_GCM_256,
};
FIXTURE_SETUP(tls)
{
struct tls_crypto_info_keys tls12;

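With the two new union members in place, the fixture can size and fill crypto parameters for AES-CCM-128 and AES-GCM-256 the same way as the existing ciphers. For orientation only, a hedged sketch of how such a filled structure is typically handed to the kernel TLS ULP outside this harness (helper name invented, error handling trimmed, zeroed key material purely illustrative; assumes a libc exposing TCP_ULP):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <linux/tls.h>

static int enable_tls_tx_aes_gcm_256(int sock)
{
	struct tls12_crypto_info_aes_gcm_256 crypto;

	memset(&crypto, 0, sizeof(crypto));
	crypto.info.version = TLS_1_3_VERSION;
	crypto.info.cipher_type = TLS_CIPHER_AES_GCM_256;

	/* Attach the TLS upper-layer protocol, then install TX keys. */
	if (setsockopt(sock, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")))
		return -1;
	return setsockopt(sock, SOL_TLS, TLS_TX, &crypto, sizeof(crypto));
}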

@@ -150,11 +150,27 @@ EOF
# oifname is the vrf device.
test_masquerade_vrf()
{
local qdisc=$1
if [ "$qdisc" != "default" ]; then
tc -net $ns0 qdisc add dev tvrf root $qdisc
fi
ip netns exec $ns0 conntrack -F 2>/dev/null
ip netns exec $ns0 nft -f - <<EOF
flush ruleset
table ip nat {
chain rawout {
type filter hook output priority raw;
oif tvrf ct state untracked counter
}
chain postrouting2 {
type filter hook postrouting priority mangle;
oif tvrf ct state untracked counter
}
chain postrouting {
type nat hook postrouting priority 0;
# NB: masquerade should always be combined with 'oif(name) bla',
@@ -171,13 +187,18 @@ EOF
fi
# must also check that nat table was evaluated on second (lower device) iteration.
ip netns exec $ns0 nft list table ip nat |grep -q 'counter packets 2' &&
ip netns exec $ns0 nft list table ip nat |grep -q 'untracked counter packets [1-9]'
if [ $? -eq 0 ]; then
echo "PASS: iperf3 connect with masquerade + sport rewrite on vrf device ($qdisc qdisc)"
else
echo "FAIL: vrf rules have unexpected counter value"
ret=1
fi
if [ "$qdisc" != "default" ]; then
tc -net $ns0 qdisc del dev tvrf root
fi
}
# add masq rule that gets evaluated w. outif set to veth device.
@@ -213,7 +234,8 @@ EOF
}
test_ct_zone_in
test_masquerade_vrf "default"
test_masquerade_vrf "pfifo"
test_masquerade_veth
exit $ret


@@ -23,8 +23,8 @@ TESTS="reported_issues correctness concurrency timeout"
# Set types, defined by TYPE_ variables below
TYPES="net_port port_net net6_port port_proto net6_port_mac net6_port_mac_proto
net_port_net net_mac mac_net net_mac_icmp net6_mac_icmp
net6_port_net6_port net_port_mac_proto_net"
# Reported bugs, also described by TYPE_ variables below
BUGS="flush_remove_add"
@@ -277,6 +277,23 @@ perf_entries 1000
perf_proto ipv4
"
TYPE_mac_net="
display mac,net
type_spec ether_addr . ipv4_addr
chain_spec ether saddr . ip saddr
dst
src mac addr4
start 1
count 5
src_delta 2000
tools sendip nc bash
proto udp
race_repeat 0
perf_duration 0
"
TYPE_net_mac_icmp="
display net,mac - ICMP
type_spec ipv4_addr . ether_addr
@@ -984,7 +1001,8 @@ format()
fi
done
for f in ${src}; do
[ "${__expr}" != "{ " ] && __expr="${__expr} . "
__start="$(eval format_"${f}" "${srcstart}")"
__end="$(eval format_"${f}" "${srcend}")"


@@ -18,11 +18,17 @@ cleanup()
ip netns del $ns
}
checktool (){
if ! $1 > /dev/null 2>&1; then
echo "SKIP: Could not $2"
exit $ksft_skip
fi
}
checktool "nft --version" "run test without nft tool"
checktool "ip -Version" "run test without ip tool"
checktool "socat -V" "run test without socat tool"
checktool "ip netns add $ns" "create net namespace"
trap cleanup EXIT
@@ -71,7 +77,8 @@ EOF
local start=$(date +%s%3N)
i=$((i + 10000))
j=$((j + 1))
# nft rule in output places each packet in a different zone.
dd if=/dev/zero of=/dev/stdout bs=8k count=10000 2>/dev/null | ip netns exec "$ns" socat STDIN UDP:127.0.0.1:12345,sourceport=12345
if [ $? -ne 0 ] ;then
ret=1
break


@@ -60,6 +60,8 @@ CONFIG_NET_IFE_SKBTCINDEX=m
CONFIG_NET_SCH_FIFO=y
CONFIG_NET_SCH_ETS=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_FQ_PIE=m
CONFIG_NETDEVSIM=m
#
## Network testing


@@ -716,6 +716,7 @@ def set_operation_mode(pm, parser, args, remaining):
list_test_cases(alltests)
exit(0)
exit_code = 0 # KSFT_PASS
if len(alltests):
req_plugins = pm.get_required_plugins(alltests)
try:
@@ -724,6 +725,8 @@
print('The following plugins were not found:')
print('{}'.format(pde.missing_pg))
catresults = test_runner(pm, args, alltests)
if catresults.count_failures() != 0:
exit_code = 1 # KSFT_FAIL
if args.format == 'none':
print('Test results output suppression requested\n')
else:
@@ -748,6 +751,8 @@
gid=int(os.getenv('SUDO_GID')))
else:
print('No tests found\n')
exit_code = 4 # KSFT_SKIP
exit(exit_code)
def main():
"""
@@ -767,8 +772,5 @@
set_operation_mode(pm, parser, args, remaining)
if __name__ == "__main__":
main()


@@ -1,5 +1,6 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
modprobe netdevsim
./tdc.py -c actions --nobuildebpf ./tdc.py -c actions --nobuildebpf
./tdc.py -c qdisc ./tdc.py -c qdisc