🌀 Event Recap: “AI Inference x Context Engineering” with Uber, WisdomAI, EvenUp, and Datastrato
🗓️ Upcoming Events
Oct 07, San Francisco, CA | OpenAI DevDay Recap. Join us for a post-DevDay deep dive with AI trailblazers — Jeff Wang (Exa, Series B), Harry Qi (Motion, Series C), and Wen Sang (Genspark, Series A) — as they unpack key announcements, share insider insights, and explore what’s next for AI.
Oct 08, San Francisco, CA | Tech Week Side Event - Scaling Innovation in AI Infrastructure & Applications. Join us for a Tech Week featured side event with unicorn founders, pioneering researchers, and top executives from Connectly.ai (Series B+), Strava, Snap, Higgsfield AI, Memories.ai, and more. Proudly sponsored by Nebius, Frontier Tower, Protopia AI, and Influcio.
Oct 19, Mountain View, CA | Startup Pitch Salon & Investor Feedback + Audience Vote. Calling all founders! Join our monthly WeShine Startup Pitch — where top early-stage founders, seasoned investors, and an engaged audience come together for live pitches, real-time feedback, audience voting, and a $1,000 prize plus coaching from the WeShine Advisor Group (Stanford coaches + VCs).
Oct 29, San Francisco, CA | TechCrunch: The Future of eCommerce, Ads Tech, Marketplaces & Consumer Tech. Join us with leaders from Shopline ($208M+ raised), AliExpress (Alibaba Group), MAI Agents ($25M seed), and Spotlize as they unpack how AI, cross-border platforms, and short-form engagement are redefining how products are discovered, sold, and scaled.
Oct 10, San Francisco, CA | AI+ MultiModal Day. We’re excited to partner with AI+ for Multimodal Day, a full-day summit where leading founders, researchers, and builders explore how AI beyond text is reshaping how we work, create, and live — with bold ideas, sharp debates, and powerful connections.
Oct 11, San Francisco, CA | Agents Everywhere: From Workflow to Connectivity. We’re excited to partner with Composio and AI+ for an inside look at the latest in agent integrations, MCP servers, and the evolving AI agent ecosystem.
AI innovation no longer ends at generating text. The real breakthroughs are unfolding in how systems interpret, infer, and act on context — from powering smarter enterprise agents to enabling domain-specific reasoning.
At our Sep 29th event at AWS Loft San Francisco, founders, engineers, and investors gathered to explore how context engineering and inference systems are redefining what “intelligence” really means.
🤝 Shout out to our partners Datastrato and AWS for helping make this event possible!
Datastrato is the company behind Apache Gravitino™, helping organizations unify and manage data and AI assets across multi-cloud and hybrid environments. With a focus on metadata management, governance, and interoperability, Datastrato empowers developers and enterprises to build scalable and intelligent data platforms.
🌟 Featured Speakers
Jack Song – Head of Data Platforms & AI, Engineering Director at Uber – Leads scalable data infrastructure and enterprise AI systems powering global mobility.
Kapil Chhabra – Co-Founder & CPO, WisdomAI ($38M raised) – Building AI-driven analytics that democratize access to insights across enterprises.
Matt Chen – Head of ML, Document AI, EvenUp (AI Unicorn, ~$240M raised)
Shaofeng Shi – Apache Gravitino committer & VP of Engineering, Datastrato – Pioneer of open-source data catalogs and metadata-driven AI infrastructure.
Moderator: Oana, Founder, Motive Force Ventures – Former ML engineer turned investor focused on early-stage enterprise AI innovation.
⚡ TL;DR: Key Takeaways
Context is the new feature: The next generation of AI isn’t trained to respond — it’s built to reason.
Trust beats scale: 95% of AI agents fail not from lack of power, but from lack of user confidence.
Metadata is destiny: Governance and lineage are now core to AI success, not compliance checkboxes.
Interfaces will merge: The best AI tools blend natural conversation with intuitive design.
🧠 1. Setting the Stage: The New Frontier of Context-Aware AI
AI innovation has moved well beyond generating text. The real breakthroughs now lie in how models understand, infer, and act on context.
Moderator Oana opened with a pointed observation: the AI world is fixated on “context engineering,” yet few can define it. Why is it suddenly central to intelligent systems—and what makes it the next leap forward?
Her framing set the tone. This isn’t about larger models or clever prompts. It’s about how AI interprets ambiguity, integrates memory, and infers intent—much like humans do.
The panel quickly converged on one idea: context engineering is reshaping how intelligence itself is built. As Oana put it, “We’re entering an era where models don’t just respond—they reason.”
⚙️ 2. Context Engineering — What It Is and Why It Matters
If 2023 was the year of prompt engineering, 2025 marks the rise of context engineering — the discipline of giving models the right information, at the right time, in the right form.
Kapil opened with a clear definition:
“Context engineering means extracting the relevant pieces of knowledge an engineer would use—and applying that in the prompt so AI can produce high-quality output.”
In short, it’s how AI systems reason like experts by replicating the hidden steps of human thought.
Matt described it as the engineering labor behind scalable AI. Early retrieval systems were simple, he noted, but today’s architectures involve memory layers, tool orchestration, and structured outputs — “all critical to scaling products that think and act autonomously.”
Shaofeng added that context is more than information — it’s data selection and curation. “Too much data and the model gets confused; too little and it misses key context,” he said.
Jack bridged the concept to traditional ML:
“In machine learning, you select and validate features. In AI, you do the same with context — deciding what goes into the model’s window.”
The panel agreed: context engineering is no longer a buzzword but a foundational discipline that blends data science, retrieval, and reasoning design.
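To make that definition concrete, here is a minimal sketch of the pattern the panel described, not any speaker's actual system: score candidate knowledge, keep only what fits a token budget, and assemble it into the prompt. The Snippet type, the relevance scores, and the rough token count are all simplified placeholders.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    relevance: float  # assumed to come from an upstream retriever or reranker

def rough_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer; good enough for a budget check.
    return len(text.split())

def build_context(snippets: list[Snippet], budget_tokens: int = 2000) -> str:
    """Keep the most relevant snippets that fit the window; drop the rest."""
    selected, used = [], 0
    for s in sorted(snippets, key=lambda s: s.relevance, reverse=True):
        cost = rough_tokens(s.text)
        if used + cost > budget_tokens:  # too much context confuses the model
            continue
        selected.append(s.text)
        used += cost
    return "\n\n".join(selected)

def build_prompt(question: str, snippets: list[Snippet]) -> str:
    context = build_context(snippets)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

Using continue rather than break lets smaller, still-relevant snippets fill whatever room is left in the budget, one small example of the "right information, right form" discipline the panel was pointing at.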
🔬 3. From Feature Engineering to Context Engineering
“If you talk to an AI model, you’re really just feeding it features. They’re called context now, but mathematically, it’s the same idea.”
Jack explained context engineering as the natural evolution of feature engineering — the discipline that drove machine learning’s early breakthroughs. The process is strikingly familiar:
Feature selection becomes context selection — choosing what matters most for a model’s understanding.
Feature validation becomes context quality checking — ensuring accuracy and relevance before input.
Feature drift monitoring becomes context observability — tracking how changes affect performance over time.
This lens reframed context engineering not as an abstract new term, but as the continuation of data science fundamentals, scaled for an AI-first world. Jack’s clarity grounded the conversation, reminding the audience that amid all the buzzwords, the basics of data discipline still reign supreme.
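Read as code, the parallel is easy to see. The sketch below is a hypothetical illustration of the three mappings, not Uber's pipeline; the ContextItem fields, the top-k cutoff, and the relevance threshold are invented for the example.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("context")

@dataclass
class ContextItem:
    source: str
    text: str
    relevance: float

def select_context(items: list[ContextItem], top_k: int = 5) -> list[ContextItem]:
    """Feature selection -> context selection: keep only what matters most."""
    return sorted(items, key=lambda i: i.relevance, reverse=True)[:top_k]

def validate_context(items: list[ContextItem]) -> list[ContextItem]:
    """Feature validation -> context quality checking: drop empty or low-signal inputs."""
    return [i for i in items if i.text.strip() and i.relevance > 0.2]

def observe_context(items: list[ContextItem]) -> None:
    """Feature drift monitoring -> context observability: log what went into the
    window, so output changes can be traced back to context changes."""
    for i in items:
        log.info("context used: source=%s relevance=%.2f chars=%d",
                 i.source, i.relevance, len(i.text))
```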
🧭 4. Translating Human Ambiguity into Machine Understanding
“LLMs are like great drivers,” said Kapil with a grin, “but without context, they don’t know where to go.”
He compared AI reasoning to a GPS: the model is the car, the prompt is the destination, and context provides the map and rules. Too much context, and the model gets lost; too little, and it makes wrong turns. The real skill lies in giving just enough information for a clear path forward.
Matt added another metaphor:
“The foundational model is the soil; the context is the seed. What you plant, and where, determines what grows.”
His point underscored how domain-specific AI depends on structured, reliable knowledge to reason effectively.
Shaofeng then pulled the lens wider. In enterprise settings, he noted, questions often draw from dozens of data sources — structured, semi-structured, and unstructured. Without strong metadata and governance, even the best models struggle to connect the dots.
Together, their insights made one thing clear: AI doesn’t fail because it can’t think — it fails because it doesn’t understand the question, or it’s missing the context to answer it well.
🧩 5. Metadata, Governance & the Invisible Backbone of AI
For all the hype around generative models, enterprise AI still runs on something far less glamorous: metadata management.
Drawing on his experience in open-source infrastructure, Shaofeng emphasized that data lineage, governance, and security are the real foundations of scalable AI.
“Before you build any agent, your metadata — the rules, permissions, and lineage — must be unified and auditable. Otherwise, you’re building trust on sand.”
Matt reinforced the point, noting that lineage is the key to context quality. His team now treats citation validation and traceability as part of model evaluation, not an afterthought.
Together, they made a clear case: building enterprise AI isn’t just about clever retrieval or model tuning — it’s about compliance, auditability, and control.
Shaofeng explained that organizations must classify data as public, private, or personal before connecting it to an LLM. Without this groundwork, he warned, “you risk leaking sensitive information or generating unauthorized outputs.”
As Matt summed it up, clean context starts with clean governance.
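As a rough illustration of that classification step, with a made-up policy table rather than anything Datastrato or the panelists described shipping: data is tagged public, private, or personal, and anything a given agent is not cleared for is filtered out before it can reach a prompt.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    PRIVATE = "private"
    PERSONAL = "personal"

# Illustrative policy only: which data classes each agent is allowed to see.
AGENT_POLICY = {
    "external_chatbot": {Classification.PUBLIC},
    "internal_analyst": {Classification.PUBLIC, Classification.PRIVATE},
}

def filter_for_agent(records: list[dict], agent: str) -> list[dict]:
    """Drop anything the agent is not cleared to receive before prompt assembly."""
    allowed = AGENT_POLICY.get(agent, {Classification.PUBLIC})
    return [r for r in records if r["classification"] in allowed]

# Personal data never reaches the externally facing agent.
records = [
    {"text": "Q3 revenue summary", "classification": Classification.PUBLIC},
    {"text": "Customer home address", "classification": Classification.PERSONAL},
]
safe = filter_for_agent(records, "external_chatbot")  # only the public record survives
```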
🔒 6. Scaling Trust: Why 95% of AI Agents Fail
Few moments landed harder than Jack’s remark: “Ninety-five percent of AI agents fail to scale.”
The room went quiet. Why do so many promising prototypes collapse at scale? According to Jack, it comes down to one word: trust.
He argued that the hardest part of deploying AI agents isn’t technical — it’s psychological.
“You can build an agent that automates tasks beautifully, but if people don’t trust its output — if they don’t believe it’s safe, accurate, or aligned — it fails.”
Jack called for a human-in-the-loop approach that blends machine intelligence with human judgment. The best systems, he said, “treat AI as a partner, not a replacement.”
He also emphasized human–AI co-evolution — systems that learn not only from data, but from human feedback and correction. Those adaptive loops, he noted, are what turn fragile prototypes into resilient, self-improving tools.
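One hypothetical way to express that loop in code, a sketch under assumed names rather than anything Uber actually runs: routine actions execute on their own, uncertain ones go to a reviewer first, and every decision plus its feedback is kept so the system can be evaluated and improved.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    confidence: float  # the agent's own estimate, however it is produced

@dataclass
class Feedback:
    approved: bool
    note: str = ""

history: list[tuple[AgentAction, Feedback]] = []  # raw material for human-AI co-evolution

def run(action: AgentAction) -> None:
    print(f"executing: {action.description}")

def ask_human(action: AgentAction) -> Feedback:
    answer = input(f"Approve '{action.description}'? [y/n] ")
    return Feedback(approved=answer.strip().lower() == "y")

def execute_with_review(action: AgentAction, threshold: float = 0.9) -> None:
    """Auto-approve the easy cases; route low-confidence actions to a person."""
    feedback = Feedback(approved=True) if action.confidence >= threshold else ask_human(action)
    history.append((action, feedback))
    if feedback.approved:
        run(action)
```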
🧠 7. Memory Systems & Personalization — AI That Remembers You (But Not Too Much)
What does it mean for AI to “remember”? That simple question sparked one of the evening’s most engaging — and entertaining — debates.
Kapil opened by breaking memory into layers: institutional, team, and individual. In his view, the “context layer” helps models recall not just data, but how people think and work. “Some users like bar charts, others prefer pie charts,” he joked. “Our job is to remember that — but only just enough.”
Matt took the idea further, describing memory as personalization with purpose. His team designs agents that learn writing style, reasoning flow, and tone — essential for high-precision work like legal drafting. The next frontier, he said, is proactive memory:
“We’re exploring how systems can anticipate user needs based on events, not just prompts. That’s the next level of intelligence.”
Jack brought a note of humor and caution. Drawing on real-world AI interactions, he explained how memory can make systems feel personal — but also intrusive.
“When AI remembers your kids’ names from a past chat, it stops being friendly and starts being creepy.”
The line drew laughter, but the message stuck: personalization must respect boundaries. The audience nodded — a reminder that even as AI grows smarter, empathy and restraint remain part of good design.
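For the curious, here is a toy sketch of the layered memory Kapil described, assuming only the three scopes he named and nothing about how WisdomAI or EvenUp actually store memory: lookups fall back from individual to team to institutional, and writes are limited to preferences the user explicitly shared.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Three scopes, checked from most specific to most general."""
    individual: dict[str, str] = field(default_factory=dict)     # "this user prefers bar charts"
    team: dict[str, str] = field(default_factory=dict)           # "finance reports in fiscal quarters"
    institutional: dict[str, str] = field(default_factory=dict)  # "revenue is stated in USD"

    def recall(self, key: str) -> str | None:
        for layer in (self.individual, self.team, self.institutional):
            if key in layer:
                return layer[key]
        return None

    def remember(self, key: str, value: str, scope: str = "individual") -> None:
        # Restraint by design: store only preferences the user stated explicitly,
        # never details inferred from past conversations.
        getattr(self, scope)[key] = value

mem = MemoryStore()
mem.remember("chart_style", "bar")
mem.recall("chart_style")  # -> "bar"
```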
💬 8. The Future Interface — Chat vs GUI
By the end of the evening, the discussion shifted from systems to surfaces. If AI can understand and reason like humans, how should people actually interact with it?
A sharp question from the audience set things in motion: “Is natural language the future interface — or just another fad?”
Kapil offered a measured view:
“Conversation isn’t replacing interfaces — it’s reducing the learning curve. Where users once had to learn dashboards or code, now they can just ask. But chat and GUI should work together.”
He described how users can talk to their data naturally, then refine results through simple visual controls — “a conversation that ends in a click.”
Jack pointed to two areas where chat truly excels: customer service and creative exploration.
“When emotion or imagination drives the task, language feels natural. But when precision matters — like ordering a car or booking a flight — structure wins.”
It was a fitting close to the night: thoughtful, practical, and balanced. The future interface isn’t about replacing humans or screens — it’s about meeting users where they are.
🚀 A huge thank you
To all who joined us for AI Beyond the Prompt at the AWS Loft, to our incredible speakers, and to our partners AWS and Datastrato for powering this unforgettable night.
For those who couldn’t make it, we can’t wait to see you at our next event. Let’s keep learning, building, and pushing the boundaries of what’s possible.
🔗 LinkedIn: EntreConnect
📅 Luma: Join Future Events
💬 LinkedIn Challenge: Share, Learn, Connect
Thank you to everyone who participated in our LinkedIn Challenge! We’re thrilled to feature the most engaging and inspiring post (link here), giving our community a chance to celebrate and learn from the experience. We also truly appreciate everyone who shared their best moments and insights with us!