Billions.Network launches universally accessible verification platform for humans and AI

Billions Network, intended to be a universal human and AI network, is launching to transform digital identity verification.
More than 9,000 projects across Web2 and Web3 are already using Billions’ zero-knowledge verification technology, known as Circom. Zero-knowledge proofs are a way to verify the veracity of someone’s identification or trustworthiness without having to see their private information at every encounter. It’s a bit like TSA PreCheck at the airport: once you’ve been verified into the secure area, you’re trusted at the gates and checkpoints without having to show your fingerprints again.
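To make that intuition concrete, here is a minimal sketch of a Schnorr-style identification round in Python, one of the classic zero-knowledge constructions that circuit languages like Circom generalize. The parameters are toys chosen for readability, and this is an illustration of the principle, not Billions’ actual protocol:

```python
import random

# Toy Schnorr identification: prove knowledge of a secret x without sending it.
# Parameters are deliberately tiny for readability; real systems use large
# primes or elliptic curves, and Billions' circuits are far more elaborate.
p, q, g = 23, 11, 2          # g generates a subgroup of prime order q mod p

x = 7                        # prover's secret (think: a private identity key)
y = pow(g, x, p)             # public key, published once at enrollment

# One round of the interactive proof:
r = random.randrange(1, q)   # prover's fresh randomness
t = pow(g, r, p)             # commitment sent to the verifier
c = random.randrange(1, q)   # verifier's random challenge
s = (r + c * x) % q          # prover's response

# The verifier checks the relation and learns nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("identity verified without revealing the secret")
```

The same check can be repeated at every encounter without the secret ever leaving the prover’s device, which is the property the TSA PreCheck analogy gestures at.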
Billions Network projects include work for global platforms like TikTok, Top Doctors and WorldCoin, a crypto verification project cofounded by OpenAI CEO Sam Altman.
Billions Network says it can deliver a verification system that is scalable, private and secure. Billions is launching as a direct response to consumer demand and growing regulatory pressure for more accessible, secure and mobile-first verification systems.
Deutsche Bank and HSBC have already tested Billions’ institutional-grade verification system, demonstrating its capability and viability at scale.
“The fundamental mission of Billions Network is to enable individuals to prove their humanity, access a plethora of benefits, and provide legal, safe proof-of-uniqueness anytime, anywhere, and for everyone,” said Evin McMullen, CEO of Billions, in a statement. “Billions is creating a world where each human can be uniquely identified and unlock personalized benefits in the age of AI with verification solutions that are universally accessible and scalable.”
In a world where AI advancements occur almost daily, with the recent reveal and meteoric rise of DeepSeek as a prime example, it is becoming increasingly challenging to verify the authenticity of people’s digital identities.
Mounting global issues such as AI deepfakes, pig butchering scams, botting, Sybil attacks, unfair airdrop distributions and others repeatedly highlight the pressing need for a secure and frictionless digital identity and proof-of-uniqueness solution that preserves users’ privacy and personal information.
Billions addresses this by providing attestations that verify human identities, while also validating AI agents by cryptographically proving their training models and data findings — critical information for establishing trust in AI interactions.
Many of the existing and forthcoming decentralized identity solutions still face significant challenges, including concerns over intrusiveness due to the collection of detailed biometric data, as well as limitations in terms of geography and technology. Additionally, issues such as a lack of accessibility, privacy, and scalability remain present in the space, along with broader concerns around the dystopian nature of extensive data aggregation.
The Billions ecosystem was designed to tackle each of these challenges from the ground up, resulting in a global and universally accessible network that doesn’t require proprietary hardware of any kind. Billions aims to provide a highly interoperable digital verification service that respects people’s right to privacy, allows people complete control over their data, and rewards them for engagement via token airdrops and varied loyalty programs.
From the team behind Privado ID, Billions builds on the firm’s proven track record in institutional verification, including several joint proofs-of-concept with major financial institutions like Deutsche Bank and HSBC, focusing on advanced KYC solutions that offer significant cost savings and efficient user onboarding.
Privado ID works alongside Billions to onboard organizations and governmental partners, and runs on any EVM-compatible chain. The project grew based on market demand for identity verification solutions, including age confirmations, content authenticity, protection against Sybil and bot attacks, and the ability to counteract the rapidly growing volume of AI-generated content.
Following its initial development as Polygon ID, a core Polygon solution, and subsequently branching out as Privado ID to serve the broader ecosystem, the team quickly identified the need for a community-first, universally accessible verification network.
The project leverages the team’s deep expertise in zero-knowledge proofs and identity solutions, building on its development of Circom as part of the Iden3 protocol — technology that has become the de facto standard for zero-knowledge programming.
Leveraging Privado ID’s years of experience, research and development in the digital identity sphere, Billions will not only offer frictionless credential interoperability and reusability across the world but will also take into account local laws, regulations and standards. The platform is working with the Government of India to make Aadhaar — India’s official government identification system — secure, universally accessible and optimized for AI-based systems.
OpenAI releases ‘largest, most knowledgeable’ model GPT-4.5 with reduced hallucinations and high API price

It’s here: OpenAI has announced the release of GPT-4.5, a research preview of its latest and most powerful large language model (LLM) for chat applications. Unfortunately, it’s also far and away OpenAI’s most expensive model (more on that below).
It’s also not a “reasoning model,” the new class of models offered by OpenAI, DeepSeek, Anthropic and many others that produce “chains of thought” (CoT), or stream-of-consciousness-like text blocks in which they reflect on their own assumptions and conclusions to try and catch errors before serving up responses to users. It’s still more of a classical LLM.
Nonetheless, according to a post by OpenAI co-founder and CEO Sam Altman on the social network X, GPT-4.5 is: “The first model that feels like talking to a thoughtful person to me. I have had several moments where I’ve sat back in my chair and been astonished at getting actually good advice from an AI.”
However, he cautioned that the company is bumping up against the upper end of its supply of graphics processing units (GPUs) and has had to limit access as a result:
“Bad news: It is a giant, expensive model. We really wanted to launch it to plus and pro at the same time, but we’ve been growing a lot and are out of GPUs. We will add tens of thousands of GPUs next week and roll it out to the plus tier then. (Hundreds of thousands coming soon, and I’m pretty sure y’all will use every one we can rack up.) This isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.”
GPT-4.5 is able to access search and OpenAI’s ChatGPT Canvas mode, and users can upload files and images to it, but it doesn’t have other multimodal functions like voice mode, video and screen sharing — yet.
GPT-4.5 represents a step forward in AI training, particularly in unsupervised learning, which enhances the model’s ability to recognize patterns, draw connections and generate creative insights.
During a livestream demonstration, OpenAI researchers noted that the model was trained on data generated by smaller models and that this improved its “world model.” They also mentioned it was pre-trained across multiple data centers concurrently, suggesting a decentralized approach similar to that of rival lab Nous Research.
The model builds on OpenAI’s previous work in AI scaling, reinforcing the idea that increasing data and compute power leads to more effective AI performance.
Compared to its predecessors and contemporaries, GPT-4.5 is expected to produce far fewer hallucinations than GPT-4o, making it more reliable across a broad range of topics.
GPT-4.5 is designed to create warm, intuitive and naturally flowing conversations. It has a stronger grasp of nuance and context, enabling more human-like interactions and a greater ability to collaborate effectively with users.
Additionally, the model’s expanded knowledge base and improved ability to interpret subtle cues allow it to excel in various applications, including:
Writing assistance: Refining content, improving clarity and generating creative ideas.
Programming support: Debugging, suggesting code improvements and automating workflows.
Problem-solving: Providing detailed explanations and assisting in practical decision-making.
GPT-4.5 also incorporates new alignment techniques that enhance its ability to understand human preferences and intent, further improving the user experience.
ChatGPT Pro users can select GPT-4.5 in the model picker on web, mobile and desktop. Next week, OpenAI will begin rolling it out to Plus and Team users.
For developers, GPT-4.5 is available through OpenAI’s API, including the chat completions API, assistants API and batch API. It supports key features like function calling, structured outputs, streaming, system messages and image inputs, making it a versatile tool for various AI-driven applications. However, it currently does not support multimodal capabilities such as voice mode, video or screen sharing.
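As a rough illustration, calling the model through the chat completions API with OpenAI’s official Python SDK would look something like the sketch below. The model identifier "gpt-4.5-preview" is our assumption based on OpenAI’s research-preview naming, so confirm it against the current models list before relying on it:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "gpt-4.5-preview" is an assumed identifier for the research preview.
response = client.chat.completions.create(
    model="gpt-4.5-preview",
    messages=[
        {"role": "system", "content": "You are a concise writing assistant."},
        {"role": "user", "content": "Tighten this sentence: 'The report that was written by the team is finished.'"},
    ],
)
print(response.choices[0].message.content)
```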
Pricing and implications for enterprise decision-makers
Enterprises and team leaders stand to benefit significantly from the capabilities introduced with GPT-4.5. With its lower hallucination rate, enhanced reliability and natural conversational abilities, it can support a wide range of business functions:
Improved customer engagement: Businesses can integrate GPT-4.5 into support systems for faster, more natural interactions with fewer errors.
Enhanced content generation: Marketing and communications teams can produce high-quality, on-brand content efficiently.
Streamlined operations: AI-powered automation can assist in debugging, workflow optimization and strategic decision-making.
Scalability and customization: The API allows for tailored implementations, enabling enterprises to build AI-driven solutions suited to their needs.
At the same time, the pricing for GPT-4.5 through OpenAI’s API for third-party developers looking to build applications on the model appears shockingly high, at $75/$180 per million input/output tokens, compared to $2.50/$10 for GPT-4o.
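Some quick arithmetic at the quoted rates shows how fast that gap compounds. The sketch below uses the article’s figures on an example workload; actual bills depend on caching, batching and real token mixes:

```python
# Cost per day at the quoted rates (USD per 1M tokens: input, output).
RATES = {"gpt-4.5": (75.00, 180.00), "gpt-4o": (2.50, 10.00)}

def daily_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# Example workload: 2M input and 0.5M output tokens per day.
for model in RATES:
    print(f"{model}: ${daily_cost(model, 2_000_000, 500_000):,.2f}/day")
# gpt-4.5: $240.00/day vs. gpt-4o: $10.00/day, a 24x difference on this mix.
```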
And with other rival models released not long ago — from Anthropic’s latest Claude model, to Google’s Gemini 2 Pro, to OpenAI’s own reasoning “o” series (o1, o3-mini high, o3) — the question will become whether GPT-4.5’s value is worth the relatively high cost, especially through the API.
Early reactions from fellow AI researchers and power users vary widely
The release of GPT-4.5 has sparked mixed reactions from AI researchers and tech enthusiasts on the social network X, particularly after a version of the model’s “system card” (a technical document outlining its training and evaluations) was leaked, revealing a variety of benchmark results ahead of the official announcement.
The final system card differs from the leaked version in several ways, including the removal of a line stating that “GPT-4.5 is not a frontier model, but it is OpenAI’s largest LLM, improving on GPT-4’s computational efficiency by more than 10x,” which an OpenAI spokesperson said turned out to be inaccurate. The official system card can be found on OpenAI’s website.
Teknium (@Teknium1), the pseudonymous co-founder of rival AI model provider Nous Research, expressed disappointment in the new model, pointing out minimal improvements in massive multitask language understanding (MMLU) scores and real-world coding benchmarks compared with other leading LLMs.
“It’s been 2+ years and 1,000s of times more capital has been deployed since GPT-4… what happened?” he asked.
Others noted that GPT-4.5 underperformed relative to OpenAI’s o3-mini model in software engineering benchmarks, raising questions about whether this release represents significant progress.
However, some users defended the model’s potential beyond raw benchmarks.
Software developer Haider (@slow_developer) highlighted GPT-4.5’s 10x computational efficiency improvement over GPT-4 and its stronger general-purpose capabilities compared to OpenAI’s STEM-focused o-series models.
AI news poster Andrew Curran (@AndrewCurran_) took a more qualitative view, predicting that GPT-4.5 would set new standards in writing and creative thought, calling it OpenAI’s “Opus.”
These discussions underscore a broader debate in AI: Should progress be measured purely in benchmarks, or do qualitative improvements in reasoning, creativity and human-like interaction hold greater value?
OpenAI is positioning GPT-4.5 as a research preview to gain deeper insights into its strengths and limitations. The company remains committed to understanding how users interact with the model and identifying unexpected use cases.
“Scaling unsupervised learning continues to drive AI progress, improving accuracy, fluency and reliability,” OpenAI states.
As the company continues to refine its models, GPT-4.5 serves as a foundation for future AI advancements, particularly in reasoning and tool-using agents. While GPT-4.5 is already demonstrating impressive capabilities, OpenAI is actively evaluating its long-term role within its ecosystem.
With its broader knowledge base, improved emotional intelligence and more natural conversational abilities, GPT-4.5 is set to offer significant improvements for users across various domains. OpenAI is keen to see how developers, businesses and enterprises integrate the model into their workflows and applications.
As AI continues to evolve, GPT-4.5 marks another milestone in OpenAI’s pursuit of more capable, reliable and user-aligned language models, promising new opportunities for innovation in the enterprise landscape.
Semantic understanding, not just vectors: How Intuit’s data architecture powers agentic AI with measurable ROI

Intuit — the financial software giant behind products like TurboTax and QuickBooks — is making significant strides using generative AI to enhance its offerings for small business customers.
In a tech landscape flooded with AI promises, Intuit has built an agent-based AI architecture that’s delivering tangible business outcomes for small businesses. The company has deployed what it calls “done for you” experiences that autonomously handle entire workflows and deliver quantifiable business impact.
Intuit has been building out its own AI layer, which it calls a generative AI operating system (GenOS). The company detailed some of the ways it is using gen AI to improve personalization at VB Transform 2024. In September 2024, Intuit added agentic AI workflows, an effort that has improved operations for both the company and its users.
QuickBooks Online users are getting paid an average of five days faster, with overdue invoices 10% more likely to be paid in full. For small businesses where cash flow is king, these aren’t just incremental improvements — they’re potentially business-saving innovations.
The technical trinity: How Intuit’s data architecture enables true agentic AI
What separates Intuit’s approach from competitors is its sophisticated data architecture designed specifically to enable agent-based AI experiences.
The company has built what chief data officer (CDO) Ashok Srivastava calls “a trinity” of data systems:
Data lake: The foundational repository for all data.
Customer data cloud (CDC): A specialized serving layer for AI experiences.
“Event bus”: A streaming data system enabling real-time operations.
“CDC provides a serving layer for AI experiences. Then the data lake is kind of the repository for all such data,” Srivastava told VentureBeat. “The agent is going to be interacting with data, and it has a set of data that it could look at in order to pull information.”
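A hypothetical sketch of how those three systems might interact helps clarify the pull-versus-push distinction. Class names like CustomerDataCloud and EventBus are illustrative only, since Intuit has not published its internal APIs:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class DataLake:
    """Foundational repository: complete, authoritative, slower to query."""
    records: dict[str, Any] = field(default_factory=dict)

@dataclass
class CustomerDataCloud:
    """Serving layer: fast reads of curated views for AI experiences."""
    lake: DataLake
    def lookup(self, key: str) -> Any:
        return self.lake.records.get(key)  # in reality: a materialized view

class EventBus:
    """Streaming layer: pushes real-time changes to subscribers (agents)."""
    def __init__(self) -> None:
        self.subscribers: list[Callable[[str, Any], None]] = []
    def publish(self, key: str, value: Any) -> None:
        for handler in self.subscribers:
            handler(key, value)

# An agent pulls curated context from the CDC and reacts to live events.
lake = DataLake({"invoice:42": {"status": "overdue", "days": 12}})
cdc, bus = CustomerDataCloud(lake), EventBus()
bus.subscribers.append(lambda k, v: print(f"agent sees update {k} -> {v}"))
print(cdc.lookup("invoice:42"))                # pull: context for a decision
bus.publish("invoice:42", {"status": "paid"})  # push: real-time reaction
```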
Going beyond vector embeddings to power agentic AI
Intuit’s architecture diverges from the typical vector database approach many enterprises are hastily implementing. While vector databases and embeddings are key for powering AI models, Intuit recognizes that true semantic understanding requires a more holistic approach.
“Where the key issue continues to be is essentially in ensuring that we have a good, logical and semantic understanding of the data,” said Srivastava.
To achieve this semantic understanding, Intuit is building out a semantic data layer on top of its core data infrastructure. The semantic data layer helps provide context and meaning around the data, beyond just the raw data itself or its vector representations. It allows Intuit’s AI agents to better comprehend the relationships and connections between different data elements.
By building this semantic data layer, Intuit is able to augment the capabilities of its vector-based systems with a deeper, more contextual understanding of data. This allows AI agents to make more informed and meaningful decisions for customers.
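In code, the distinction might look like the toy sketch below, in which a vector lookup finds similar text while a small semantic layer of typed relationships supplies the connections embeddings alone cannot express. All names and the schema are invented for illustration:

```python
import numpy as np

docs = {"inv_42": "overdue invoice for Acme", "cust_7": "Acme Corp, net-30 terms"}
vecs = {doc_id: np.random.rand(8) for doc_id in docs}  # stand-in embeddings

# Semantic layer: explicit, typed relationships between entities.
relations = {("inv_42", "billed_to", "cust_7")}

def vector_search(query_vec: np.ndarray) -> str:
    """Nearest neighbor by dot product, standing in for a vector database."""
    return max(vecs, key=lambda d: float(query_vec @ vecs[d]))

def expand_with_semantics(doc_id: str) -> set[str]:
    """Follow typed edges so the agent sees related entities, not just similar text."""
    return {obj for subj, _, obj in relations if subj == doc_id}

hit = vector_search(np.random.rand(8))        # similarity alone finds a document
context = {hit} | expand_with_semantics(hit)  # the semantic layer adds relations
print(context)                                # e.g. {'inv_42', 'cust_7'}
```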
Beyond basic automation: How agentic AI completes entire business processes autonomously
Unlike enterprises implementing AI for basic workflow automation or customer service chatbots, Intuit has focused on creating fully agentic “done for you” experiences: applications that handle complex, multi-step tasks while requiring only final human approval.
For QuickBooks users, the agentic system analyzes customer payment history and invoice status to automatically draft personalized reminder messages, allowing business owners to simply review and approve before sending. The system’s ability to personalize based on relationship context and payment patterns has directly contributed to measurably faster payments.
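A simplified sketch of that draft-then-approve loop might look like the following; the fields and the tone heuristic are invented for illustration and are not Intuit’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    customer: str
    amount: float
    days_overdue: int
    on_time_history: float  # fraction of past invoices paid on time

def draft_reminder(inv: Invoice) -> str:
    """Agent step: personalize tone from relationship context."""
    if inv.on_time_history > 0.8:  # reliable payer: stay friendly
        return (f"Hi {inv.customer}, just a gentle nudge: your invoice for "
                f"${inv.amount:,.2f} is {inv.days_overdue} days past due.")
    return (f"{inv.customer}: your invoice for ${inv.amount:,.2f} is now "
            f"{inv.days_overdue} days overdue. Please remit promptly.")

def send_with_approval(inv: Invoice, approve) -> bool:
    """Human-in-the-loop: nothing goes out without a final yes."""
    draft = draft_reminder(inv)
    return approve(draft)  # owner reviews; True means send

ok = send_with_approval(Invoice("Acme", 1200.0, 12, 0.9), approve=lambda d: True)
print("sent" if ok else "held for edits")
```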
Intuit is applying identical agentic principles internally, developing autonomous procurement systems and HR assistants.
“We have the ability to have an internal agentic procurement process that employees can use to purchase supplies and book travel,” Srivastava explained, demonstrating how the company is eating its own AI dog food.
What potentially gives Intuit a competitive advantage over other enterprise AI implementations is how the system was designed with foresight about the emergence of advanced reasoning models like DeepSeek.
“We built gen runtime in anticipation of reasoning models coming up,” Srivastava revealed. “We’re not behind the eight ball … we’re ahead of it. We built the capabilities assuming that reasoning would exist.”
This forward-thinking design means Intuit can rapidly incorporate new reasoning capabilities into its agentic experiences as they emerge, without requiring architectural overhauls. Intuit’s engineering teams are already using these capabilities to enable agents to reason across a large number of tools and data sources in ways that weren’t previously possible.
Shifting from AI hype to business impact
Perhaps most significantly, Intuit’s approach reveals a clear focus on business outcomes rather than technological showmanship.
“There’s a lot of work and a lot of fanfare going on these days on AI itself, that it’s going to revolutionize the world and all of that, which I think is good,” said Srivastava. “But I think what’s a lot better is to show that it’s actually helping real people do better.”
The company believes deeper reasoning capabilities will enable even more comprehensive “done for you” experiences that cover more customer needs with greater depth. Each experience combines multiple atomic experiences, or discrete operations, that together create a complete workflow solution.
What this means for enterprises adopting AI
For enterprises looking to implement AI effectively, Intuit’s approach offers several valuable lessons:
Focus on outcomes over technology: Rather than showcasing AI for its own sake, target specific business pain points with measurable improvement goals.
Build with future models in mind: Design architecture that can incorporate emerging reasoning capabilities without requiring a complete rebuild.
Address data challenges first: Before rushing to implement agents, ensure your data foundation can support semantic understanding and cross-system reasoning.
Create complete experiences: Look beyond simple automation to create end-to-end “done for you” workflows that deliver complete solutions.
As agentic AI continues to mature, enterprises that follow Intuit’s example by focusing on complete solutions rather than isolated AI capabilities may find themselves achieving similar concrete business results rather than simply generating tech buzz.