ICYMI summary of Day 1 MWC26 Barcelona CUBE Coverage

Episode 1
Mar 5, 2026 · 11 minutes

Summary

Today we pull together Day 1 CUBE coverage from MWC26 in Barcelona — conversations with Jeetu Patel, Stephen Rose, Eoin Coughlan, Jeff Aaron, Cole Crawford and others. The through‑line: how networking, telco CapEx, edge infrastructure, NVIDIA partnerships and RF sovereignty are shaping distributed AI — from chip and NIC co‑design to edge on‑ramps and AI RAN monetization.

Transcript

**Kore**: Today we pull together Day 1 CUBE coverage from MWC26 in Barcelona — conversations with Jeetu Patel, Stephen Rose, Eoin Coughlan, Jeff Aaron, Cole Crawford and others. The through‑line: how networking, telco CapEx, edge infrastructure, NVIDIA partnerships and RF sovereignty are shaping distributed AI — from chip and NIC co‑design to edge on‑ramps and AI RAN monetization.


**Kore**: Let’s kick off with networking as the connective tissue. Jeetu Patel lays out why the network — not just compute — must be redesigned for distributed AI.


**Achird**: On the show floor, Jeetu argued that true distributed AI requires chip‑to‑network co‑design: NICs and ASICs with deep buffering, RDMA‑like coherency across sites, and carrier‑grade control planes so multiple data centers behave like a single ultra‑cluster. He was blunt that loose interconnects will break the model. Here’s the excerpt where he makes that case and explains the technical priorities we need to focus on next.

> ## Networking as Connective Tissue/OS for AI Factories: Scale-Up, Scale-Out, Scale-Across


> Okay. Apply that networking concept to MWC, telecom infrastructure, carriers, enterprises. 30% of attendees and exhibitors are enterprise here. Okay, that's the convergence of enterprise and telecom. Networking is the lifeblood. What's your vision on that product-wise? Because you have to have coherency, you mentioned the data centers. The networking is going to be the connective tissue.


> [Jeetu Patel] >> In the absence of having the GPUs coherently networked, you will not be able to go out and do with AI what you need to have done. Now, the big area of upside for the telcos and for networking in general, it's not just about the interconnect. The interconnect actually denotes a very loose connection where two people can have lightweight data that can go back and forth. What we're talking about is ultra clusters that get built out in a scale-across mode. And you literally have to start from the chip architecture. The chips and network ASICs we build for scale-across have completely different technology like deep buffering that allows you to make sure that you can have very large volumes of data go from one data center to the other, and it looks virtually like a single data center.


> [John Furrier] >> So, to inference, because it's the killer app.

**Achird**: That clip sets a technical baseline: if you want a seamless multi‑site AI fabric, the network has to be built into the hardware and control plane from day one.
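
To make the deep-buffering point concrete, here is a minimal sketch of why buffer depth matters when bursty GPU traffic crosses a fixed-rate inter-data-center link: a shallow buffer overflows and drops data, which is exactly what stalls RDMA-style transfers across sites. The burst sizes, link rate and buffer depths below are illustrative assumptions, not figures from the interview.

```python
# Illustrative sketch only: burst sizes, link rate and buffer depths are
# made-up numbers, not anything quoted in the segment.

def run_link(bursts_gb, drain_gb_per_tick, buffer_gb):
    """Feed bursty traffic into a fixed-rate inter-DC link and count drops."""
    queued, dropped = 0.0, 0.0
    for burst in bursts_gb:
        queued += burst
        if queued > buffer_gb:                 # buffer overflow -> loss, which
            dropped += queued - buffer_gb      # stalls lossless RDMA-style flows
            queued = buffer_gb
        queued = max(0.0, queued - drain_gb_per_tick)  # link drains each tick
    return dropped

bursts = [8, 0, 9, 1, 10, 0, 7, 0]             # e.g. gradient/checkpoint bursts (GB)
for depth in (2, 8, 32):                       # shallow vs. deep buffering
    print(f"buffer={depth:>2} GB  dropped={run_link(bursts, 5, depth):.1f} GB")
```

The deeper the buffer, the more of the burst survives the transit, which is the property that lets two sites look "virtually like a single data center."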


**Kore**: Building on that technical baseline, John Furrier and Stephen Rose shift the conversation to the economics and operational reality — the CapEx and logistical impact of that redesign.


**Achird**: Furrier and Rose warned of a telecom CapEx surge driven by massive edge refreshes and disaggregated, data‑centric architectures. They stressed that supply‑chain strain, coordination with power and utilities, and faster boardroom decision cycles will all be required to match AI deployment velocity. Listen to their exchange on what this actually means for operators’ balance sheets and timelines.

> ## AI Demand Sparks Telecom CapEx Surge: Edge Refresh, Data-Centric Networks, Disaggregated Architectures and Strained Supply Chains




> [John Furrier] >> So our thesis is that we're going to see a massive CapEx build out in edge, which is the telecom infrastructure and the carriers. They got the networks, they got the wireless, they got the wireline, they got the facilities. Kind of old school, kind of voice optimized, going to data centric architectures. So it kind of points to the central factories being built in AI tokens to the edge. We see a tsunami of like, we got to refresh, upgrade our infrastructure.


> [Stephen Rose] >> Yeah.


> [John Furrier] >> What's your thoughts on that? How do you see that?


> [Stephen Rose] >> Well, I mean, if you just think about some of the incredible statistics that are out there right now. I mean, obviously, telecom is going to spend billions over the next few years. They've been thinking about atomizing the architecture over a number of years now and thinking about disaggregating that architecture, and all of that needs supplying and feeding somehow. And on top of that, of course, you've got the AI and the data center demand that's going to be actually using that disaggregated architecture. But when I also think about it, I think the velocity of the decisions being made in the boardroom and the private equity firms on those data and AI centers is actually going to require telecom, actually all infrastructure, whether it be telecom or whether it be the grid systems or whether it be water, all of it needs to be working in tandem. And at the moment, the decisions around data centers and AI, the velocity around those is way quicker than the infrastructure industries are able to respond.

**Achird**: The takeaway there is practical: the technical vision demands real investment and cross‑enterprise coordination — it’s not just a software problem.


**Kore**: Next, we look at telcos as hosts of the edge opportunity. Eoin Coughlan explains why carriers are in a unique position to capture value.


**Achird**: Coughlan, joined by Fran Heeran in the session, argued carriers have a physical advantage — towers, central units, buildings and distributed sites that can host private edge AI workloads with low latency and enterprise‑grade reliability. He called out two revenue paths: internal AI to optimize operations and external AI RAN/platform services to monetize edge compute. Here’s the clip where he unpacks the platform play and converged edge/AI factory concept.

> ## Telcos as Edge AI Hosts: Local Infrastructure and AI RAN for Private, Optimized Wireless Workloads



> [Eoin Coughlan] >> When I look at the edge opportunity, I think the telcos are perfectly positioned. I think they have the assets, they have the buildings, the infrastructure, they have the network. And in those locations, right across the country, where people might want to run these more private AI workloads that are associated with their own enterprise. And I think that gives the telcos a great advantage to take that on. And when we look at their 5G networks and we look at where they have their CUs based, et cetera, they tend to be dotted all around the country. So they have the infrastructure, they have the technology capability, they just need to grasp the opportunities with the enterprises that need this.


> [John Furrier] >> Eoin, you bring up a good point. The telecom we've been following over multiple decades, they've always been great at technology. They got the trillions of build out, but now that they start to talk about monetization, networking and data is what's key at the edge in these telecoms. With the converged edge and AI factories coming soon, high performance AI workloads, they got to be tied in a distributed manner. That's distributed computing.


> [Fran Heeran] >> Well, I think of it as a platform company, what we're seeing. And if you look at AI at the very highest level, we're seeing, I think, the vast majority of use cases is in around cost savings, optimization. So internal use, how do I use AI to optimize my network, my operations? We're now starting to see the conversations about how do I monetize AI? So when I've built the infrastructure... And to your point, they do have this very unique piece of real estate, which is your radio network, the far edge. Putting AI in there as part of AI RAN, so AI for radio, is key. Our mission in Red Hat obviously is to make that as efficient as possible with the platform.

**Achird**: In short: telcos aren’t just connectivity providers anymore — they can be the platform and marketplace for edge AI, if they move beyond legacy ops.
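
As a rough illustration of that hosting play, the sketch below picks the nearest carrier site (tower, CU or central office) that satisfies a private workload's latency, capacity and data-residency constraints. The site names, numbers and fields are hypothetical, not any operator's actual placement logic.

```python
# Hypothetical example: choose an edge site for a private AI workload.
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    latency_ms: float      # round-trip latency from the enterprise premises
    free_gpus: int
    region: str

def place_workload(sites, max_latency_ms, gpus_needed, required_region):
    candidates = [
        s for s in sites
        if s.latency_ms <= max_latency_ms
        and s.free_gpus >= gpus_needed
        and s.region == required_region        # keep data in-country/in-region
    ]
    return min(candidates, key=lambda s: s.latency_ms, default=None)

sites = [
    EdgeSite("central-cloud", 45.0, 512, "EU"),
    EdgeSite("metro-CU-barcelona", 6.0, 16, "EU"),
    EdgeSite("tower-edge-22", 2.5, 4, "EU"),
]
print(place_workload(sites, max_latency_ms=10, gpus_needed=8, required_region="EU"))
# -> metro-CU-barcelona: the far-edge tower is closer but lacks capacity
```

The point of the toy example is Coughlan's: carriers already own a mesh of candidate sites "dotted all around the country," so the placement problem is theirs to win.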


**Kore**: From platform plays to vendor alignment — Jeff Aaron maps the networking use cases that vendors need to solve to make these visions practical.


**Achird**: Aaron laid out four AI networking use cases — scale‑out, scale‑up (where solutions like NVIDIA Spectrum are relevant), scale‑across for ultra‑buffered routing, and edge on‑ramps through inference routers. He also announced an NVIDIA partnership aimed at aligning switches, fabrics and routers across AI factories, grids and D‑RAN. Listen to how he frames the engineering and partnership work required to stitch these layers together.

> ## NVIDIA Partnership Enables Scale-Out, Scale-Up, Scale-Across and Edge On‑Ramps for AI Factory Interconnect and Routing



> [Jeff Aaron] >> That's a great question. So, it comes back to what I mentioned earlier is that the way we view it, there's four networking use cases for AI workloads, right? There's scale out where switches talk to each other. There's scale up where within the switch it talks to each other. And that's where NVIDIA Spectrum really plays, right? That's our primary market and that's where they're primarily going out. But in addition to that, there's scale across where the AI factories talk to each other, data science talks to each other, which is traditional routing, big routings, very high buffers, very low loss, big, big iron there. And there's the edge on-ramp, how do you actually get it into the cloud? Which is more of our edge routers, inference routers. And so, that's where we started to partner with NVIDIA going back to our HV Discover in December and there's more announcements that will be coming with these guys. But again, how do you take that partnership for different use cases, whether it's AI factories, AI grid, to focus on more D-RAN and those environments, but it's a nice synergy there.

**Achird**: That’s the vendor playbook right there: different networking primitives for different scales, and ecosystem alignment to make them interoperable.
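
A toy decision function helps show how those primitives map to different request profiles: small, latency-sensitive inference stays local; mid-size jobs go to a regional factory over the scale-out fabric; large, latency-tolerant jobs ride the scale-across backbone. The tiers and thresholds below are assumptions for illustration, not any vendor's actual edge on-ramp logic.

```python
# Toy routing policy: size/latency thresholds and tier names are illustrative
# assumptions, not a real product's routing behavior.

def route_inference(model_gb: float, latency_budget_ms: float) -> str:
    if model_gb <= 8 and latency_budget_ms < 20:
        return "edge site (scale-up: fits within one local node/switch domain)"
    if model_gb <= 80 and latency_budget_ms < 200:
        return "regional AI factory (scale-out fabric across racks)"
    if latency_budget_ms >= 200:
        return "peer AI factory over the scale-across backbone (deep-buffered routers)"
    return "no tier meets both the size and latency constraints"

for model_gb, budget_ms in [(4, 10), (40, 50), (400, 500), (400, 50)]:
    print((model_gb, budget_ms), "->", route_inference(model_gb, budget_ms))
```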


**Kore**: To close the set, Cole Crawford takes us to the far edge — where radio, sovereignty and AI meet.


**Achird**: Crawford painted a picture of an AI‑native far edge: hyper‑converged wireline, wireless and sensor fusion that needs mesh management, heterogeneous SLAs and RF boundary protection. He argued sovereignty will hinge on RF control — keeping inference and training close to the radio to cut latency and secure data. Here’s his closing clip.

> ## AI-Native Far Edge: Real-Time ML and RF Sovereignty at the Wireless Boundary


> ...coming to the doorstep of the edge.


> [Cole Crawford] >> Yeah. And I think we're here chatting now about AI-native. So, it's not just cloud-native, it's AI-native as well. And you look at the convergence or the hyper-convergence of the network, the wireline side, the wireless side, you have a convergence of sensor fusion that is happening everywhere. So, you need to not just manage a network, you need to manage a mesh of networks to manage fleets of devices, all with different constraints, all with different profiles that have different security requirements, different SLA requirements. And frankly, we are, I think, at the beginning of a functional step in technology where the wireless industry, again, has an opportunity to get out in front because the radio access network... And this is what I kind of say, your sovereign edge AI factory is only as sovereign as the RF boundary. So, if you want a secure edge and you want that to be both wireline and wireless, you have to protect the RF and you have to inference and train next to or on top of that RF.

**Achird**: That clip underscores the point that geography and radio control aren’t peripheral concerns — they will shape where and how AI workloads run, especially for regulated or sovereign use cases.
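
One way to read that operationally: placement policy has to treat the RF boundary as a hard constraint, not a preference. The sketch below is a minimal, hypothetical admissibility check in that spirit; the fields and policy are assumptions, not Crawford's or any operator's actual implementation.

```python
# Hypothetical policy check: a sovereign workload may only land on sites that
# sit inside the operator-controlled RF boundary and meet its latency SLA.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    inside_rf_boundary: bool   # radio access controlled by the operator
    latency_ms: float
    certified_sovereign: bool  # e.g. in-country, audited stack

def admissible(site: Site, sovereign: bool, max_latency_ms: float) -> bool:
    if site.latency_ms > max_latency_ms:
        return False
    if sovereign and not (site.inside_rf_boundary and site.certified_sovereign):
        return False                       # data must not leave the RF boundary
    return True

sites = [
    Site("far-edge-ran", True, 3.0, True),
    Site("public-cloud-region", False, 40.0, False),
]
print([s.name for s in sites if admissible(s, sovereign=True, max_latency_ms=10)])
# -> ['far-edge-ran']
```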


**Kore**: Quick recap of what emerged across these conversations: networking needs a silicon‑to‑software rethink; operators face a CapEx tidal wave and must sync grids, power and supply chains; telcos can become edge AI platform providers and monetize AI RAN; vendor partnerships are trying to stitch scale‑up, scale‑out and scale‑across together; and RF boundaries will determine sovereign, low‑latency deployments.


**Achird**: If you take away one thing from Day 1 at MWC26: AI at scale isn’t just more compute — it’s the orchestration of chips, networks, sites and policy. Thanks for joining our highlights. Tune in for more ICYMI coverage — we’ll keep following these themes and the announcements that follow.