ICYMI: MWC26 Day 3 and 4 Coverage


Episode 3
Mar 9, 2026 · 17 minutes

Summary

Highlights from theCUBE's Day 3 and Day 4 coverage of MWC26.

Transcript

**Kore**: The show floor at MWC26 felt electric: booths humming, demos lined up back to back. Today we stitch together on‑the‑ground conversations with Wind River (whose deployments span Verizon, Vodafone, and Telus), a Vodafone and Intel panel, AT&T's Mark Austin, and theCUBE's Dave Vellante at the IBM booth. The biggest threads: production vRAN/OpenRAN scale, the struggle to monetize 5G, telco‑specific LLM needs, edge AI demos like vehicle use cases, and the early imperative for quantum‑safe planning. We'll play each clip and unpack what it means.


**Kore**: First up: production vRAN and emerging OpenRAN deployments. At a Day 3 session, Wind River described five years of moving vRAN onto COTS servers, deployments at scale with Verizon’s tens of thousands of nodes, and how OpenRAN’s open interfaces enable multi‑vendor stacks globally. That sets the technical baseline — here’s a short excerpt from that session.

> ## Production-scale vRAN/OpenRAN deployments with Verizon, Vodafone, Telus, Japan



> \>\> Yes, that's correct. So we've been in deployment for the better part of five years now. And in fact, our first and largest customer, Verizon, has tens of thousands of nodes running virtual RAN. And there's really two technologies there. One is vRAN, where you have the cloud-based disaggregation of the functions. You move from proprietary hardware to conventional servers with cloud technology with applications. And that was vRAN, and that's been deployed and proven for years now. And then OpenRAN emerged as a byproduct of that. It's kind of on the way to the AI RAN that's happening at this show. And when OpenRAN came out, that was really about open interfaces between the vendors so that you could interoperate different components that classically didn't used to interoperate. Again, running on an infrastructure like Wind River provides. And so now, we are globally deployed in North America. We're doing the first O-RAN deployment with Telus in Canada, deployed in Europe with Vodafone and in Japan and many other countries. So it's really taken off and it is in full production scale now.

**Achird**: That scale really lands — tens of thousands of nodes means this is no longer experimental. Which brings us straight to the hard question: are operators turning that technical progress into new revenue?


**Achird**: On March 5 a panel with Vodafone and Intel dug into monetization as 5G matures and 5G‑Advanced/6G loom. They pointed to network slicing and advanced SLAs as technical enablers, but warned commercial models are lagging — stadium and event premium services, for example, still struggle to find enough paying customers. Listen to that exchange now.

> ## Telco monetization challenges: 5G struggled, network slicing and premium services hard to sell


> So, really, just I think three or four key themes that we've been seeing. Obviously, monetization. We're starting to get towards the end of the 5G rollout, 6G's on the horizon, 5G Advanced. So, as we get into those later stages, really fascinating conversation with Vodafone yesterday. We picked up on some of that with Intel this morning, basically, around how are these telcos planning to monetize their huge investment, not only to buy the spectrum, but then to build out the infrastructure. So, still struggling with that monetization, network slicing, some of the advanced SLA features. We've seen a lot of stadium and event use cases.


> Can you offer premium services? Still really hard to get end users to pay, get a stadium to pay for enhanced 5G coverage through network slicing, that's proving to be difficult. So, I think they're still struggling with 5G monetization.

**Kore**: That clip underscores the gap between capability and commercial demand — the technology can do more than the market is ready to buy. That tension feeds directly into why operators are investing in AI: to wring operational value and new services from existing networks. Next is a concrete AT&T example showing how AI runs into telco complexity.


**Kore**: At an Ask AT&T session on March 5, Mark Austin recounted a demo where a colleague used Grok to untangle a complex RAN‑to‑core issue. AT&T runs Ask AT&T at roughly 27 billion tokens per day and found that frontier LLMs “don’t speak Telco” out of the box — which pushed them to open‑source thirty specialized models tailored to telecom problems. Here’s that moment.

> ## How a Grok demonstration exposed the need for Telco-specific LLMs at AT&T



> [Mark Austin] >> So I've been at this thing for years as well. So I remember, I always tell a story: 12 years ago, I was working on self-optimizing networks. So that was some of the first intelligence we were kind of introducing there. It was phenomenal. We took out 40% of the drop calls at the time. But I'll tell you a story of like how we got started on these 30 models that we open sourced here. A guy walked in my office, and I run Ask AT&T at AT&T. It's 27 billion tokens a day that are processed through that. It's all across the company. So HR, finance, network, you name it, we're using Ask AT&T. And I have all the models, and I thought I had all the models. I have OpenAI, I have Claude, I have all the open source, Llama, Mistral, Gemma, you name it. And he walks in and he goes, "Mark, I need another model." I go, "What do you mean another model?" "I need Grok." I didn't have Grok yet. "So why do you need Grok?" "'Cause it can't answer this Telco question, and I don't want to pay to give this to the vendor to kind of... So I want to solve it ourselves, 'cause every time we don't solve, we have to give it to somebody else." He goes, "But Grok knows the answer." I go, "Show me that. How does Grok know the answer to that?" And he showed it to me. It was pretty complex of how the RAN is operating with the core. And sure enough, it knew the answer. And I was saying, "You know what? I've heard that frontier language models are not great at Telco, and GSMA has been talking that for some time. And if you grade them, they have all sorts of these-


> [John Furrier] >> They don't speak Telco.


> [Mark Austin] >> They don't speak Telco. So they're like 60, 70% they get the answer right. And sure enough, this was an example of that.

**Achird**: When a general‑purpose model actually solves a telecom problem, it exposes how domain‑specific the work really is. That’s a clear signal: operators need bespoke models trained on telco data and terminology — not just bigger, generic LLMs.
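That 27-billion-tokens-a-day figure is worth a quick back-of-envelope check to appreciate the scale. A minimal Python sketch; the per-instance serving rate is a purely assumed number for illustration, not something stated in the clip:

```python
# Back-of-envelope on AT&T's stated Ask AT&T volume.
TOKENS_PER_DAY = 27_000_000_000          # figure quoted in the clip
SECONDS_PER_DAY = 24 * 60 * 60           # 86,400

tokens_per_second = TOKENS_PER_DAY / SECONDS_PER_DAY
print(f"sustained load: {tokens_per_second:,.0f} tokens/s")  # 312,500 tokens/s

# Hypothetical per-instance serving rate (assumption, not from the clip):
ASSUMED_TOKENS_PER_INSTANCE = 2_500      # tokens/s
instances = tokens_per_second / ASSUMED_TOKENS_PER_INSTANCE
print(f"instances needed at that rate: {instances:.0f}")     # 125
```

Even this crude math shows why routing every query to a frontier model is costly, and why smaller domain-tuned models are attractive at that volume.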


**Achird**: To loop back to economics, Wind River’s March 5 interview with Dave Vellante explored how OpenRAN and vRAN break vendor lock‑in and let operators run COTS servers from Dell, HPE and others — lowering TCO through multi‑vendor competition. But Wind River also argued vendors must differentiate on performance optimization, management tools, and edge AI use cases — they highlighted Cellular V2X as a low‑latency, real‑world example. Here’s that clip.

> ## Open interfaces reshaping RAN economics, increasing vendor competition and lowering operator TCO




> [Dave Vellante] >> Yeah. So the reliability is proven. And explain the real benefits that you're bringing to this industry, because for years, it was proprietary systems, very closed, really hard to change. O-RAN changes that, especially when you bring in Kubernetes and all the development capabilities on top of that. But can you explain that?

> \>\> Yeah, that's absolutely correct. There's kind of two pieces to that. The first piece is really commercial, where in the legacy approach, you'd have competition at the service provider for their business at the beginning of the network build. And then a classical vendor, a telecom equipment manufacturer, typically would get selected and build the network. And for the next 10 or 15 years, that service provider is tied to that vendor. Now, you have a massive change in the business model. With OpenRAN and vRAN, now you've got hardware servers from multiple vendors, vendors like Dell and HPE. And if one of those vendors disappoints, the customer can switch to the other three, four years into the deployment. If I disappoint, they can switch to my competitor. If the application disappoints, they can push a button and re-orchestrate a new application. So now, we've moved to a business model where this competition for the life of the network, that drives TCO down for the operator. It's a massive business transformation.


> The second piece of it, at least for us, is then, all right, and now what I've just described is a highly competitive landscape. How do I differentiate myself? And we've done a lot of things to optimize the performance of the system and make it manageable and operable for the service provider. And that's where Wind River has come from a relative unknown to be the number one provider of that technology.


> [Dave Vellante] >> Interesting. So it's like open systems comes to the telco industry, but the risk there is obviously you get no differentiation, but you're bringing in value on top of that.

> \>\> That's right. That's right.


> [Dave Vellante] >> Okay. I want to ask you something. Let's kind of move to the shift from data center AI to edge. Something I was reading: edge AI represents a fundamental shift from data center AI. It's not simply cloud AI that's closer to the user, but it's a different class of system altogether. What does that actually mean?

> \>\> Yeah, there's a couple of things. The first is you want to think of generative AI and digital AI. When you'd normally, and many of us have used things like ChatGPT where you type in a question, it gives you an answer, that's off a static model. As you move more towards the edge, you're moving more towards systems that interact with the physical world. And when you interact with the physical world, edge AI and physical AI start to emerge. Instead of training large language model based functions, it's now inference. It's the execution of an AI model in a way that interacts with a human being. So for example, our parent company is an industry leader in self-driving cars. They use AI with radar and camera processing to make decisions about where objects are and drive the car. We work with industrial manufacturing and robotics, which is using AI camera recognition to move things on assembly lines. So as you get into the physical world where you're sensing something in the physical world and then taking an action in the physical world, you're now in the land of edge AI.


> [Dave Vellante] >> So you guys have this kind of cool vehicle down here.

> \>\> Yes.


> [Dave Vellante] >> It's got an active license plate, and of course Wind River is the software layer on top of that, connecting all these sensors and devices. One of the first autonomous vehicle interviews I ever did, I texted a friend of mine who's an expert in the field, and I knew nothing about it. And he said, "Ask him about Byzantine fault tolerance." Which is this concept of, in a military context, if the general says, "This is what we're doing," and you're in the fog of war and things change, it's very hard to communicate. But in an autonomous vehicle situation, you have to have that ability to communicate across these disparate vehicles. Is that something that you guys can actually enable to drive safety from a technical standpoint?

> \>\> Actually, that's exactly what the demonstration's about. So you think about it: over time, the automobile has moved into now, as we sit here in 2026, a software-defined vehicle. They're self-driving. They have radar and camera image processing. They have all kinds of software for infotainment in the vehicle. There's a huge amount of software in the vehicle. It's become a computer on wheels. And so naturally, you think once it's a computer on wheels, these computers interact with each other. And historically up to date, these self-driving capabilities and safety functions have really been thought of with the vehicle by itself. But what if we could bring awareness of the environment from other vehicles to that vehicle? And that's what we're showing here today. It's called Cellular V2X, or cellular vehicle-to-anything. We have one car that's sitting there sensing the environment around it, all the people walking by in the show, sending that data up to the Verizon 5G network through a Verizon ETX service, and then sending that down to a destination vehicle.


> That destination vehicle, it may be behind a building or behind a wall, and now it can see the person walking on the other side because it's getting that information from the first vehicle. So effectively, we're sharing sensor data, and that destination car sees that data coming in real-time, because of the performance of the 5G network, as if it were just new sensors that it had. So it can see beyond its horizon now. And now in the demonstration we give, instead of hitting a pedestrian, it'll actually brake before it even sees the pedestrian, because it's getting the information from another car. So safety, comfort, and convenience: there's hundreds of applications that leverage this technology, and we're at the forefront of it.


> [Dave Vellante] >> This is exciting, because the promise of autonomous vehicles is it will be much, much safer than human drivers. I know, for instance, with a lot of drivers, sometimes you see red lights up ahead and people don't slow down. They keep going. What you just described is the system would sense that even maybe before it's visual. So that's very powerful. From a technologist's perspective, what are the constraints that you have to deal with at the edge, whether it's connectivity or latency or determinism? Can you take us through that?

> \>\> The primary one is latency, right? Because obviously, if you think about that safety-critical application we just gave an example of, you can't take three seconds for the traffic to go up to a public cloud and back. So you have to deploy the application on the edge of the network. So in the case of that Verizon example I just gave, in their multi-access edge compute, their edge cloud, we deploy the application so that it has a low-latency connection to the vehicle in tens of milliseconds. This allows the data to transit the network and enter the second vehicle in time to affect its decisions. If you can't achieve that latency and performance, you can't build those types of applications. So from a technology perspective and safety-critical applications, low latency is very important.

**Kore**: That ties the technical and commercial threads together — open interfaces drive competition and cost savings, but meaningful vendor value comes from performance, manageability, and edge applications that actually require those upgrades.
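To make the latency point concrete, here is a minimal sketch of the budget arithmetic behind that V2X path. Every number is an illustrative assumption; the clip only contrasts "tens of milliseconds" via the edge with roughly three seconds via a distant public cloud:

```python
def distance_traveled_m(speed_kmh: float, latency_ms: float) -> float:
    """Metres a vehicle covers while the shared sensor alert is in flight."""
    speed_mps = speed_kmh * 1000 / 3600
    return speed_mps * latency_ms / 1000

# Assumed split of an edge path: uplink radio + MEC processing + downlink radio.
edge_budget_ms = 10.0 + 5.0 + 10.0   # 25 ms total (illustrative)
cloud_latency_ms = 3000.0            # the "three seconds" round trip from the clip

for label, latency in [("edge (MEC)", edge_budget_ms),
                       ("public cloud", cloud_latency_ms)]:
    d = distance_traveled_m(speed_kmh=50, latency_ms=latency)
    print(f"{label}: {latency:.0f} ms -> car moves {d:.2f} m before the alert lands")
```

At 50 km/h the edge path costs the car well under half a metre of travel, while the cloud round trip costs over forty metres, which is the whole safety case for deploying the application in the operator's edge cloud.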


**Kore**: Finally, a shift to security horizons. At IBM's MWC booth on March 5, Dave Vellante summarized a roundtable with 23 executives on Quantum, AI, and sovereignty. The core message: start the quantum‑safe journey now. Discover where encryption hides in silicon and storage, use tooling to migrate to quantum‑safe cryptography, and expect early quantum use cases in life and materials sciences before financial services. Listen.

> ## TheCUBE’s Dave Vellante on why enterprises should begin the quantum-safe journey now



> [Dave Vellante] >> Hi, everybody, I'm Dave Vellante. We're here at the IBM booth at MWC, Mobile World Congress 2026. And behind me is the quantum chandelier, this beautiful, golden chandelier of quantum. This is actually the cooling mechanism. The chip is actually quite small down below. But I hosted a round table of about 23 executives on Monday that was organized by IBM, and the theme was around bringing together Quantum, AI, and Sovereign into a new era. Now, there was no aha moment that came out of that discussion, but I will say this. What was very clear is a couple of things. One is the sequencing of Quantum and how it's going to eventually fit in with AI. And what I mean by that is right now, organizations are, I would say, frankly, somewhat overwhelmed with implementing AI. And so they really don't have a lot of time to think about quantum. But the one thing they do think about is cryptography and quantum safe. In other words, when quantum computing actually hits the mainstream, let's call it 2029, 2030, those systems will be able to break existing cryptography. So organizations need to now start thinking about how to become quantum safe. And to do that, IBM has actually done a couple of things. One is they have tools to allow you to discover where encryption takes place within your organization. And remember, a lot of this encryption can be hidden down in the hardware, in the silicon, in the storage. And so you've got to discover where that is. And the second thing is IBM has developed tooling, four sets of tools, to be able to both discover and understand and then apply protection against potential quantum threats. And so it's a journey.
>
> People need to start thinking about that journey now. And then ultimately how it fits into AI, not only as a protection against quantum threats, but also as a cybersecurity defense that's much more sophisticated than anything they have today. And then eventually use cases. It'll start with life sciences and other material sciences, eventually go into financial services. And so other algorithmic-based computing will emerge in the 2030s. But right now you want to be thinking about how to make your infrastructure quantum safe. All right, that's it from here. This is Dave Vellante. Thanks for watching theCUBE.

**Achird**: That’s a strong call to action — quantum timelines may feel distant, but you need to inventory where encryption and keys live today, because hardware and storage can hide risks you don’t see until it’s too late.
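One common way to frame "start now" is Mosca's inequality: if the time your data must stay secret plus the time a migration takes exceeds the time until a cryptographically relevant quantum computer arrives, traffic harvested today is already at risk. A sketch, with illustrative numbers only:

```python
def at_risk(shelf_life_years: float, migration_years: float,
            years_to_crqc: float) -> bool:
    """Mosca's inequality: X + Y > Z means data encrypted today is exposed.

    X = how long the data must stay confidential (shelf life)
    Y = how long migrating to quantum-safe cryptography takes
    Z = years until a cryptographically relevant quantum computer (CRQC)
    """
    return shelf_life_years + migration_years > years_to_crqc

# Illustrative: records must stay confidential 10 years, migration takes 5,
# and the clip's "mainstream by 2029-2030" puts a CRQC roughly 4 years out.
print(at_risk(shelf_life_years=10, migration_years=5, years_to_crqc=4))   # True
print(at_risk(shelf_life_years=1, migration_years=1, years_to_crqc=10))   # False
```

The arithmetic is trivial; the hard part is the discovery step Vellante describes, since you cannot estimate migration time until you know where encryption actually lives.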


**Kore**: Quick wrap: 1) production vRAN/OpenRAN is real and global; 2) COTS and multi‑vendor competition are reshaping RAN economics; 3) monetization lags — network slicing and premium services still need viable commercial models; 4) telco‑specific LLMs matter — general models won’t cover domain nuance without tailored training; 5) edge AI demos like Cellular V2X show the low‑latency applications operators are chasing; and 6) start planning quantum‑safe migration now.


**Achird**: Treat security as a journey: discover hidden encryption, plan migrations to quantum‑safe crypto, and align those efforts with AI and sovereignty priorities. Thanks for joining this ICYMI tour of MWC26 Day 3 and 4 highlights.


**Kore**: If you found this useful, follow for more show‑floor recaps and deeper dives.


**Achird**: Thanks for listening — stay curious and stay secure. We’ll catch you in the next episode.