📡 EntreConnect @ GTC Recap: Scaling LLMs, Building AGI Infrastructure, and Rethinking Robotics from the Ground Up
EntreConnect, March 2025
The future of AI isn’t coming — it’s scaling. At our EntreConnect @ GTC 2025 gathering, founders, investors, and tech operators unpacked what it actually takes to build scalable, intelligent systems—across GenAI, infrastructure, robotics, and neuromorphic computing. From analog chips to agentic workflows, the insights spanned first principles to frontier forecasts.
🎤 Featured Speakers
Jonathan Siddharth, CEO & Co-founder, Turing (Series E, "Double Unicorn"); former CEO & Co-founder, Rover App (acquired by Revcontent); Stanford Master's with Distinction in Research, Computer Science.
Roman Chernin, Co-founder & CBO at Nebius (NASDAQ: NBIS); ex-CEO of Yandex Geo at Yandex
Howie Xu, Chief AI & Innovation Officer, Gen (NASDAQ: GEN); ex-SVP AI/ML at Palo Alto Networks, ex-VP AI/ML at Zscaler, ex-CEO / Co-founder at TrustPath, Stanford Guest Lecturer.
Lindon Gao, CEO & Co-founder at Stealth Robotics Startup; CEO & Co-founder of Caper (acquired by Instacart), ex-VP and GM at Instacart
Erik Norden, Chief AI Strategy Officer at Zyphra; ex-Lead AI Architect at Google
Ning Ge, CEO & Co-founder at TetraMem (AI Chip); ex-Master Technologist at HP; MBA, Ross School of Business
🔮 Keynote Highlights With Jonathan Siddharth: What’s Ahead for 2025 and Beyond
Jonathan Siddharth took the audience through AI’s rapid evolution and shared his predictions on what’s next. He highlighted the progression of AI across three major epochs:
Statistical ML (Pre-2012) — Simple models and classical algorithms built on structured data.
Deep Learning (2012–2022) — Neural networks unlocked unprecedented progress in computer vision, NLP, and more.
Generative AI (Post-GPT) — Models capable of generating content and reasoning across domains.
We are now entering the Agentic Phase, where AI is evolving from passive copilots to autonomous agents capable of executing complex workflows.
The Five Dimensions of AI Progress:
Intelligence: AI is improving in code generation, math, reasoning, and domain expertise.
Multimodality: Models now process and generate text, audio, video, images, and even navigate computer interfaces.
Multilinguality: AI supports numerous languages, making cross-border collaboration easier.
Capability Level: From basic tasks to superhuman abilities, AI is climbing the capability ladder.
Industry Impact: AI is already reshaping Retail, Healthcare, Life Sciences, Energy, and High Tech.
Scaling Laws and AGI Bottlenecks:
While scaling remains effective, Jonathan emphasized the challenges ahead:
Lack of Real-World Benchmarks: Models excel on controlled tests, but without benchmarks that reflect real-world complexity it is hard to know how they hold up in practice.
Reasoning Gaps: AI still falters in advanced reasoning tasks like coding, tool usage, and decision-making.
Future Focus Areas:
Jonathan identified three areas to watch:
Advanced Reasoning: AI will increasingly validate its outputs to ensure accuracy.
Agentic Workflows: Autonomous systems will decompose tasks, collaborate, and self-correct (a minimal sketch of this loop follows the list).
Full-Spectrum Multimodality: AI will master tasks across text, images, audio, and video, providing a seamless user experience.
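To make the agentic-workflow idea concrete, here is a minimal, illustrative sketch of the decompose → execute → validate loop Jonathan described. The `call_llm` helper is a hypothetical placeholder for any chat-completion API; the structure of the loop, not the model call, is the point.

```python
# Illustrative sketch of an agentic loop: decompose a task, execute each
# subtask, validate the result, and retry (self-correct) on failure.
# `call_llm` is a hypothetical stand-in for any chat-completion API.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., a chat-completions request)."""
    return f"[model output for: {prompt[:40]}...]"

def decompose(task: str) -> list[str]:
    plan = call_llm(f"Break this task into 3 short subtasks, one per line:\n{task}")
    return [line.strip() for line in plan.splitlines() if line.strip()]

def validate(subtask: str, result: str) -> bool:
    verdict = call_llm(f"Does this result complete the subtask? Answer YES or NO.\n"
                       f"Subtask: {subtask}\nResult: {result}")
    return "YES" in verdict.upper()

def run_agent(task: str, max_retries: int = 2) -> list[str]:
    results = []
    for subtask in decompose(task):
        for _attempt in range(max_retries + 1):
            result = call_llm(f"Complete this subtask: {subtask}")
            if validate(subtask, result):
                break  # self-check passed; move on
            # self-correct: fold the rejection back into the next attempt
            subtask = f"{subtask} (previous attempt was rejected, try again)"
        results.append(result)
    return results

if __name__ == "__main__":
    print(run_agent("Draft a one-page competitive analysis of GPU cloud providers"))
```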
“AGI won’t arrive all at once. It’s a phase shift—driven by compound learning, human-AI collaboration, and hard-earned reliability.”
💡 Keynote Highlights With Howie Xu: Predictions, Provocations & Pragmatic AI Advice
In a no-nonsense keynote, Howie Xu delivered sharp insights on the trajectory of AI and actionable advice for those building in this space. Here are the most striking takeaways:
Top Predictions:
AI-Written Code: 25% of startup code will soon be AI-generated, reshaping engineering workflows.
TQ (Token Quotient): Just like IQ and EQ, TQ — the ability to understand and work with AI tokens — will become a core skill.
Multipreneurship: With AI agents taking over operations, GTM (Go-to-Market), and growth tasks, founders will run multiple ventures simultaneously.
AGI’s Economic Impact: Expect AGI to have a measurable effect on GDP.
Citizen Programmer Boom: The number of programmers will skyrocket from 30 million to 3 billion, as AI-powered coding lowers the barrier to entry.
“Moore’s Law is alive—just not in chips. It’s scaling in abstraction.”
Operator Advice:
Use What’s Available: Waiting for GPT-5 or the next big breakthrough is a mistake. Start building with today’s AI tools — progress comes from iteration, not perfection.
Your Biggest AI Hurdle? Humans. Trust, user adoption, and feedback are the hardest challenges in implementing AI systems.
Embrace the Messy Loop: Successful AI products live in constant evolution. The cycle of demo → production → iteration is the only reliable path forward.
Howie left the audience with a sobering but empowering reminder:
“History repeats. Human nature rules.”
The core lesson? Despite AI’s rapid evolution, human decision-making, emotions, and behaviors will continue to shape the outcomes of this technological wave. Builders who stay grounded in these truths will navigate the AI era most effectively.
🧱 Infra & AGI Panel Discussion Highlights: Scaling Real-World AI, Infra & LLM
🧠 Unlocking AGI: Key Challenges and Breakthroughs Needed
The panel explored the milestones needed to achieve AGI (Artificial General Intelligence). While scaling remains a factor, the next leap will depend on innovations in model architecture, data quality, and system design.
Jonathan: Modular, context-aware systems are essential. Monolithic models won’t cut it. He advocated for real-world benchmarks to test resilience, reasoning, and task decomposition.
Erik: Co-designing across silicon, systems, algorithms, and data will drive compute efficiency. Multi-agent orchestration and improved inference-time reasoning will also be crucial.
Roman: Real-world feedback loops are key to closing the gap between lab models and production systems. Enhanced interconnectivity across compute clusters will be necessary to support agentic workloads.
Howie: Transformer models lit the spark, but adaptive architectures with long-term memory and deeper reasoning will define AGI’s future.
Key Takeaways:
AGI progress requires architectural innovation and robust feedback loops.
Real-world benchmarks will strengthen model resilience.
Modular systems are likely to surpass traditional monolithic models.
⚖️ Scaling vs. Innovation: The Two AGI Camps
The panel acknowledged the divide in the AI community between those relying on scaling laws and those pushing for fundamentally new architectures.
Erik: The debate is not binary. While scaling existing models has shown remarkable results, it has limitations in deployment costs and practical usability. Innovations in compound systems that integrate large and small models, memory, and routing are gaining traction.
Jonathan: Scaling alone is insufficient. The key to AGI is developing adaptive models capable of deep reasoning and contextual understanding.
Roman: Feedback loops will continue to play a critical role. Real-world data will drive model improvements, no matter which camp prevails.
Key Takeaways:
Scaling existing models and creating new ones are not mutually exclusive.
Compound systems are emerging as a promising middle ground.
🤔 Managing Inference Costs: The Future of AI Compute
While inference costs have dropped significantly, sustaining this trend will demand smarter models and greater efficiency.
Roman: Inference is cheaper, but we need continued progress to maintain affordability.
Erik: Advances in AI silicon, systems, and models will drive further cost reductions.
Jonathan: Expect unpredictable "DeepSeek moments" — breakthroughs that drastically reduce compute costs.
Tips for Founders:
Start small with lightweight models and refine their behavior with prompt engineering (see the sketch after these tips).
Experiment with in-context and few-shot learning before scaling up.
Stay informed on silicon and system innovations.
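As a concrete starting point for the first two tips, the sketch below shows a few-shot, in-context prompt against a small model, with no fine-tuning involved. It assumes the official openai Python client (v1.x) and uses a small model name purely as a placeholder; the same pattern works with any chat-completion API.

```python
# A few-shot, in-context classification prompt: no fine-tuning required.
# Assumes the openai Python client (v1.x) and an API key in OPENAI_API_KEY;
# the model name is a placeholder, swap in whatever lightweight model you use.
from openai import OpenAI

client = OpenAI()

FEW_SHOT = [
    {"role": "system", "content": "Classify the support ticket as BILLING, BUG, or OTHER."},
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "BILLING"},
    {"role": "user", "content": "The export button crashes the app."},
    {"role": "assistant", "content": "BUG"},
]

def classify(ticket: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: start with a small, cheap model
        messages=FEW_SHOT + [{"role": "user", "content": ticket}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(classify("My invoice shows the wrong company name."))
```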
🐋 DeepSeek's Impact and Staying Competitive
Jonathan: “If you work in AGI, you need to be comfortable with DeepSeek moments.” He recommended reading Anthropic CEO Dario Amodei’s essay Machines of Loving Grace to gain perspective on how AI breakthroughs can reshape the industry.
Erik: We are now in an era of model- and system-level innovation, which will drive further cost reductions; DeepSeek is a good example, and it opens up plenty of opportunities.
Strategies to Stay Ahead
💡 Embrace Compound Systems & Jevons’ Paradox: Modular architectures that integrate large and small models, memory, and routing will outperform monoliths. At the same time, Jevons’ Paradox suggests that as compute gets cheaper, overall demand for it will rise.
⏰ Commit to Continuous Learning: Dedicate at least 5 hours per week to AGI research. Passive learning won’t suffice. Stay engaged with papers, benchmarks, and tools.
📚 Use GPT Summaries, But Verify the Source: While GPT-generated summaries are efficient, always read the original research papers for critical insights and context.
🧩 Designing, Scaling, and Governing AI Systems
As AI agents become more autonomous, managing their interactions will become increasingly complex.
LLM-Based Agents: Reinforcement Learning from Human Feedback (RLHF), constraint systems, and regulatory compliance measures are essential to ensure safe, reliable AI.
Multi-Agent Orchestration: Expect new infrastructure layers for monitoring, coordination, and governance as agentic systems grow.
Infrastructure Bottlenecks
Compute costs remain a challenge, especially for startups.
Managing voice data poses legal and regulatory complexities.
Real-world data feedback is essential for model robustness.
Practical Advice for Startups
Start with lightweight models and optimize using prompt engineering.
Use in-context learning to minimize reliance on expensive supervised fine-tuning.
Build continuous feedback loops with real-world data to improve model performance (a minimal logging sketch follows).
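One lightweight way to start that feedback loop: log every production interaction together with a user rating, then periodically promote the best-rated real-world examples into your few-shot prompt. The sketch below is purely illustrative; the file format, rating scale, and thresholds are assumptions, not a prescribed stack.

```python
# Minimal feedback loop: log real-world interactions with a user rating,
# then promote the best-rated examples into the few-shot prompt.
# Purely illustrative; storage format and thresholds are assumptions.
import json
from pathlib import Path

LOG_FILE = Path("interactions.jsonl")

def log_interaction(user_input: str, model_output: str, rating: int) -> None:
    """Append one production interaction (rating: 1-5 from the user)."""
    record = {"input": user_input, "output": model_output, "rating": rating}
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")

def top_examples(min_rating: int = 4, limit: int = 5) -> list[dict]:
    """Pick the highest-rated real-world examples to reuse as few-shot demos."""
    records = [json.loads(line) for line in LOG_FILE.read_text().splitlines()]
    good = [r for r in records if r["rating"] >= min_rating]
    return sorted(good, key=lambda r: r["rating"], reverse=True)[:limit]

# Example usage:
log_interaction("Cancel my subscription", "Routed to BILLING queue", rating=5)
for ex in top_examples():
    print(ex["input"], "->", ex["output"])
```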
Addressing AI Data Pollution
The panel raised concerns about AI data pollution, where models pre-trained on heavily reused datasets, increasingly contaminated with AI-generated content, see diminishing returns.
Solutions:
Prioritize real-world user data for more accurate training.
Incorporate domain-specific feedback.
Be cautious with synthetic data generation to avoid compounding data pollution.
Use data curation tools.
🚀 Future Outlook: AGI in 2025 and Beyond
The panelists shared their predictions for AI advancements in the coming years:
Agentic Systems: AI agents will take on increasingly autonomous roles across industries.
Real-Time Video Generation: AI-generated video will become more realistic and instantaneous.
AI-Native Programming Paradigms: Expect AI-first development frameworks to redefine software engineering.
Automation: Repetitive tasks will be fully automated, driving significant productivity gains.
🤖 Robotics & Deep Tech Panel Discussion
🦾 What Is the Frontier for Robotics?
In the robotics panel, Lindon challenged the industry's fascination with humanoid generalists, calling it more of a fantasy than a practical future. While humanoid robots often capture public imagination, they frequently fall short in real-world deployment.
Key Challenges with Humanoid Robotics:
Speed and Reliability: Robots typically operate at 10-30% of human-level speed. Latency issues, limited dexterity, and frequent failures hinder their effectiveness at scale.
Lack of Generalization: Even minor environmental changes can significantly degrade performance, requiring costly retraining and adjustments.
High Costs and Fragility: Replicating human limbs and movement leads to excessive complexity, resulting in fragile systems with poor ROI.
Lindon argued that rather than pursuing humanoid robots, companies should focus on task-specific designs optimized for real-world needs. Robots built for specific industrial applications often deliver greater reliability, speed, and cost-efficiency.
Key Takeaways:
Robots designed for specialized tasks outperform humanoids in both reliability and ROI.
Simpler, more rugged systems are often preferable to complex human-mimicking robots.
Functionality and uptime are better indicators of success than aesthetics.
“Mastery of one beats mediocrity in many.”
“You know you’ve got product-market fit when your robot looks like crap but customers still love it.”
🕳️ The Future of AGI Hardware: Analog In-Memory Computing
The panel also examined advancements in AI hardware, with Dr. Glenn Ge from TetraMem offering insights into how innovations like Analog In-Memory Computing (AIMC) could transform AI infrastructure.
Why AIMC Matters
AIMC addresses a major challenge in AI systems — energy inefficiency. Unlike traditional hardware that constantly shuttles data between memory and processors, AIMC performs computations directly within memory. This reduces energy consumption and latency, unlocking new levels of efficiency.
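The intuition: a memristor crossbar computes a matrix-vector product in place. Weights are stored as conductances, inputs are applied as voltages, and Ohm's and Kirchhoff's laws sum the resulting currents along each output wire, so the weights never have to move. The NumPy sketch below only emulates that behavior to illustrate the mapping; the quantization and noise terms are rough assumptions, not TetraMem specifics.

```python
# Digital emulation of an analog in-memory (crossbar) matrix-vector multiply.
# Weights are stored as conductances G; input activations are applied as
# voltages V; output currents I = G @ V accumulate along each column wire,
# so the multiply-accumulate happens where the weights live.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(4, 8))   # a small layer's weight matrix
activations = rng.normal(size=8)    # one input vector

# Map weights to device conductances (identity mapping here for clarity;
# real chips use differential pairs to represent negative weights).
G = weights
V = activations

ideal = G @ V                       # what an ideal crossbar computes

# Real analog devices quantize conductances and add read noise; a rough model:
levels = 16                         # e.g., 4-bit conductance states (assumed)
G_quant = np.round(G * levels) / levels
noisy = G_quant @ V + rng.normal(scale=0.01, size=4)

print("ideal :", np.round(ideal, 3))
print("analog:", np.round(noisy, 3))
```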
Hardware Evolution: The Three Waves
Dr. Ge outlined three distinct waves shaping AI hardware evolution:
GPUs / TPUs: Today’s standard for AI workloads, though increasingly constrained by power and heat limits.
Analog In-Memory Computing: Emerging as a viable solution to AI’s energy demands, with significant improvements in speed and efficiency.
Neuromorphic Computing: A long-term vision inspired by the biological behavior of neurons, still in the experimental stage.
Accelerating AGI with AIMC
Dr. Ge projected that AIMC could make AGI economically viable by 2035 instead of 2045, lowering infrastructure costs and making large-scale AI systems more practical. The panel emphasized that breakthroughs in hardware will be as critical as advancements in AI models to achieve AGI.
The takeaway? While software innovation gets most of the spotlight, the path to AGI will also be paved by hardware capable of supporting its immense demands.
A big shoutout to our incredible partners like Plug and Play, Hat-Trick Capital, and Alibaba Global Initiatives for helping bring this event to life! And a huge thank you to everyone who attended — your energy and enthusiasm made it unforgettable. For those who couldn’t make it, we can’t wait to see you at our next event! Let’s keep learning, building, and pushing the boundaries of what’s possible. 🚀
👉 For more event photos and takeaways, catch the highlights here
💬 LinkedIn Challenge: Share, Learn, Connect
Thank you to everyone who participated in our LinkedIn Challenge! We're thrilled to feature the most engaging and inspiring post (link here), giving our community a chance to celebrate and learn from the experience. We also truly appreciate everyone who shared their best moments and insights with us!
🗓️ Upcoming Events
Follow us on LinkedIn and Luma to stay updated on future events!
🧳 Career Opportunities
Dyna Robotics | Redwood City, CA | AI Research Engineer | Staff Full Stack Engineer | Staff Machine Learning Infrastructure Engineer | Staff Robotics Software Engineer | Research Scientist
Arcadia, CA | Data Collection Technician | Founding Account Executive | Systems Support Engineer | Technical Sourcer
Palmstreet | Brand Ambassador | E-Commerce Campaign Manager | Account Success Manager | Product Designer | Product Manager (E-Commerce)
Turing AI | Sr Staff SWE LLM Products | Sr Director / Director - Solution Architecture (USA)
Tofu | Founding Engineer | Founding Growth Lead