Technology News from Around the World, Instantly on Oracnoos!

OpenAI releases ‘largest, most knowledgable’ model GPT-4.5 with reduced hallucinations and high API price

It’s here: OpenAI has announced the release of GPT-4.5, a research preview of its latest and most powerful large language model (LLM) for chat applications. Unfortunately, it’s also far and away OpenAI’s most expensive model (more on that below).

It’s also not a “reasoning model,” the new class of models offered by OpenAI, DeepSeek, Anthropic and many others that produce “chains of thought” (CoT), or stream-of-consciousness-like text blocks in which they reflect on their own assumptions and conclusions to try to catch errors before serving up responses to users. It’s still more of a classical LLM.

Nonetheless, according to OpenAI co-founder and CEO Sam Altman’s post on the social network X, GPT-4.5 is: “The first model that feels like talking to a thoughtful person to me. I have had several moments where I’ve sat back in my chair and been astonished at getting actually good advice from an AI.”

However, he cautioned that the company is bumping up against the upper end of its supply of graphics processing units (GPUs) and has had to limit access as a result:

“Bad news: It is a giant, expensive model. We really wanted to launch it to plus and pro at the same time, but we’ve been growing a lot and are out of GPUs. We will add tens of thousands of GPUs next week and roll it out to the plus tier then. (Hundreds of thousands coming soon, and I’m pretty sure y’all will use every one we can rack up.) This isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.”

GPT-4.5 is able to access search and OpenAI’s ChatGPT Canvas mode, and users can upload files and images to it, but it doesn’t yet have other multimodal features like voice mode, video and screen sharing.

GPT-4.5 represents a step forward in AI training, particularly in unsupervised learning, which enhances the model’s ability to recognize patterns, draw connections and generate creative insights.

During a livestream demonstration, OpenAI researchers noted that the model was trained on data generated by smaller models and that this improved its “world model.” They also noted it was pre-trained across multiple data centers concurrently, suggesting a decentralized approach similar to that of rival lab Nous Research.

The model builds on OpenAI’s previous work in AI scaling, reinforcing the idea that increasing data and compute power leads to improved AI performance.

Compared to its predecessors and contemporaries, GPT-4.5 is expected to produce far fewer hallucinations than GPT-4o, making it more reliable across a broad range of topics.

According to OpenAI, GPT-4.5 is designed to create warm, intuitive and naturally flowing conversations. It has a stronger grasp of nuance and context, enabling more human-like interactions and a greater ability to collaborate effectively with people.

The model’s expanded knowledge base and improved ability to interpret subtle cues allow it to excel in various applications, including:

Writing assistance: Refining content, improving clarity and generating creative ideas.

Programming support: Debugging, suggesting code improvements and automating workflows.

Problem-solving: Providing detailed explanations and assisting in practical decision-making.

GPT-4.5 also incorporates new alignment techniques that enhance its ability to understand human preferences and intent, further improving user experience.

ChatGPT Pro customers can select GPT-4.5 in the model picker on web, mobile and desktop. Next week, OpenAI will begin rolling it out to Plus and Team customers.

For developers, GPT-4.5 is available through OpenAI’s API, including the chat completions API, assistants API and batch API. It supports key features like function calling, structured outputs, streaming, system messages and image inputs, making it a versatile tool for various AI-driven applications. However, it currently does not support multimodal capabilities such as voice mode, video or screen sharing.
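
For teams evaluating the API, the call shape is the same as for other chat models. Below is a minimal sketch using OpenAI’s official Python SDK; the model identifier gpt-4.5-preview reflects the research-preview naming at launch, so verify it against your account’s current model list before relying on it.

```python
# Minimal sketch: GPT-4.5 via the chat completions API, exercising two of
# the supported features named above (system messages and streaming).
# Assumes OPENAI_API_KEY is set; "gpt-4.5-preview" is the launch-era
# preview name and may change.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4.5-preview",
    messages=[
        {"role": "system", "content": "You are a concise writing assistant."},
        {"role": "user", "content": "Tighten this sentence: 'We are writing to inform you that your invoice is overdue.'"},
    ],
    stream=True,  # tokens arrive incrementally instead of one final payload
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```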

Pricing and implications for enterprise decision-makers.

Enterprises and team leaders stand to benefit significantly from the capabilities introduced with GPT-4.5. With its lower hallucination rate, enhanced reliability and natural conversational abilities, GPT-4.5 can support a wide range of business functions:

Improved customer engagement: Businesses can integrate GPT-4.5 into support systems for faster, more natural interactions with fewer errors.

Enhanced content generation: Marketing and communications teams can produce high-quality, on-brand content efficiently.

Streamlined operations: AI-powered automation can assist in debugging, workflow optimization and strategic decision-making.

Scalability and customization: The API allows for tailored implementations, enabling enterprises to build AI-driven solutions suited to their needs.

At the same time, the pricing for GPT-4.5 through OpenAI’s API for third-party developers looking to build applications on the model appears shockingly high, at $75/$180 per million input/output tokens, compared to $2.50/$10 for GPT-4o.
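
To make that gap concrete, here is a quick back-of-the-envelope comparison at the rates cited above; the token volumes are illustrative, not from either company.

```python
# Illustrative API cost comparison at the per-million-token rates above.
PRICES = {  # USD per 1M tokens: (input, output)
    "gpt-4.5-preview": (75.00, 180.00),
    "gpt-4o": (2.50, 10.00),
}

def cost_usd(model: str, input_m: float, output_m: float) -> float:
    """Cost for a workload measured in millions of input/output tokens."""
    price_in, price_out = PRICES[model]
    return input_m * price_in + output_m * price_out

# Hypothetical monthly workload: 50M input tokens, 10M output tokens.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 50, 10):,.2f}")
# gpt-4.5-preview: $5,550.00 vs gpt-4o: $225.00, roughly a 25x difference.
```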

And with other rival models released recently, from Anthropic’s Claude 3.7 Sonnet to Google’s Gemini 2 Pro to OpenAI’s own reasoning “o” series (o1, o3-mini high, o3), the question becomes whether GPT-4.5’s value justifies its relatively high cost, especially through the API.

Early reactions from fellow AI researchers and power users vary widely.

The release of GPT-4.5 has sparked mixed reactions from AI researchers and tech enthusiasts on the social network X, particularly after a version of the model’s “system card” (a technical document outlining its training and evaluations) was leaked, revealing a variety of benchmark results ahead of the official announcement.

The actual final system card differs from the leaked version, including the removal of a line stating that “GPT-4.5 is not a frontier model, but it is OpenAI’s largest LLM, improving on GPT-4’s computational efficiency by more than 10x,” which an OpenAI spokesperson said was not accurate. The official system card can be found on OpenAI’s website.

Teknium (@Teknium1), the pseudonymous co-founder of rival AI model provider Nous Research, expressed disappointment in the new model, pointing out minimal improvements in massive multitask language understanding (MMLU) scores and real-world coding benchmarks compared to other leading LLMs.

“It’s been 2+ years and 1,000s of times more capital has been deployed since GPT-4… what happened?” he asked.

Others noted that GPT-4.5 underperformed relative to OpenAI’s o3-mini model in software engineering benchmarks, raising questions about whether this release represents significant progress.

However, some users defended the model’s potential beyond raw benchmarks.

Software developer Haider (@slow_developer) highlighted GPT-4.5’s 10x computational efficiency improvement over GPT-4 and its stronger general-purpose capabilities compared to OpenAI’s STEM-focused o-series models.

AI news poster Andrew Curran (@AndrewCurran_) took a more qualitative view, predicting that GPT-4.5 would set new standards in writing and creative thought, calling it OpenAI’s “Opus.”

These discussions underscore a broader debate in AI: Should progress be measured purely in benchmarks, or do qualitative improvements in reasoning, creativity and human-like interactions hold greater value?

OpenAI is positioning GPT-4.5 as a research preview to gain deeper insights into its strengths and limitations. The company remains committed to understanding how users interact with the model and identifying unexpected use cases.

“Scaling unsupervised learning continues to drive AI progress, improving accuracy, fluency and reliability,” OpenAI states.

As the company continues to refine its models, GPT-4.5 serves as a foundation for future AI advancements, particularly in reasoning and tool-using agents. While GPT-4.5 is already demonstrating impressive capabilities, OpenAI is actively evaluating its long-term role within its ecosystem.

With its broader knowledge base, improved emotional intelligence and more natural conversational abilities, GPT-4.5 is set to offer significant improvements for users across various domains. OpenAI is keen to see how developers, businesses and enterprises integrate the model into their workflows and applications.

As AI continues to evolve, GPT-4.5 marks another milestone in OpenAI’s pursuit of more capable, reliable and user-aligned language models, promising new opportunities for innovation in the enterprise landscape.

Physical Intelligence Launches ‘Hi Robot’, Helps Robots Think Through Actions

Researchers at Physical Intelligence, an AI robotics company, have developed a system called the Hierarchical Interactive Robot (Hi Robot). This system enables robots to process complex instructions and feedback using vision-language models (VLMs) in a hierarchical structure.

Vision-language models can control robots, but what if the prompt is too complex for the robot to follow directly?

We developed a way to get robots to “think through” complex instructions, feedback, and interjections. We call it the Hierarchical Interactive Robot (Hi Robot). [website] — Physical Intelligence (@physical_int) February 26, 2025.

The system allows robots to break down intricate tasks into simpler steps, similar to how humans reason through complex problems using Daniel Kahneman’s ‘System 1’ and ‘System 2’ approaches.

In this context, Hi Robot uses a high-level VLM to reason through complex prompts and a low-level VLM to execute actions.
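
Physical Intelligence has not published a public API for Hi Robot, so the following is only a sketch of the two-level control flow described above; every name in it (high_level_vlm, low_level_policy, the robot hooks) is a hypothetical stand-in.

```python
# Hypothetical sketch of Hi Robot's hierarchy: a high-level VLM "thinks
# through" the prompt and emits atomic subtasks; a low-level VLM policy
# turns each subtask into motor actions. All names are illustrative stubs.

def high_level_vlm(image, prompt, history):
    """Reason in language over the image, the full prompt, and any user
    interjections; return the next atomic command."""
    return "pick up the cup and place it in the bin"  # stub

def low_level_policy(image, command):
    """Map the current image plus a short language command to actions."""
    return ["move_arm", "close_gripper", "lift"]  # stub

def run_episode(observe, poll_feedback, execute, prompt, max_steps=20):
    history = []
    for _ in range(max_steps):
        image = observe()
        feedback = poll_feedback()  # e.g. "that's not trash"
        if feedback:
            # Interjections become context for the next high-level step.
            history.append(("user", feedback))
        subtask = high_level_vlm(image, prompt, history)
        execute(low_level_policy(image, subtask))
        history.append(("robot", subtask))
```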

Testing and Training Using Synthetic Data.

Researchers used synthetic data to train robots to follow complex instructions. Relying solely on real-life examples and atomic commands wasn’t enough to teach robots to handle multi-step tasks.

To address this, they created synthetic datasets by pairing robot observations with hypothetical scenarios and human feedback. This approach helps the model learn how to interpret and respond to complex commands.
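
The paper’s exact data format is not reproduced here, but the pairing idea can be sketched as follows; the field names and example strings are invented for illustration.

```python
# Hypothetical shape of one synthetic training example: a real robot
# observation paired with a generated scenario, a generated interjection,
# and the atomic command the model should learn to emit.
from dataclasses import dataclass

@dataclass
class SyntheticExample:
    observation: bytes      # camera frame taken from real robot logs
    scenario_prompt: str    # generated multi-step instruction
    user_feedback: str      # generated interjection (may be empty)
    target_subtask: str     # correct next atomic command

def make_example(frame: bytes) -> SyntheticExample:
    # In the real pipeline a generator model would propose the scenario
    # and the matching response; fixed strings stand in for that here.
    return SyntheticExample(
        observation=frame,
        scenario_prompt="make me a sandwich, but hold the pickles",
        user_feedback="that's not trash",
        target_subtask="put the pickle jar back on the counter",
    )
```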

It outperformed other methods, including GPT-4o and a flat vision-language-action (VLA) policy, at following instructions and adapting to real-time corrections, achieving 40% higher instruction-following accuracy than GPT-4o and demonstrating better alignment with user prompts and real-time observations.

In real-world tests, Hi Robot performed tasks like clearing tables, making sandwiches, and grocery shopping. It effectively handled multi-stage instructions, adapted to real-time corrections, and respected constraints.

Synthetic data, in this context, highlights potential in robotics to efficiently simulate diverse scenarios, reducing the need for extensive real-world data collection.

In one example, a robot trained to clean a table by disposing of trash and placing dishes in a bin can be directed to follow more intricate commands through Hi Robot.

This system allows the robot to reason through modified commands provided in natural language, enabling it to “talk to itself” as it performs tasks. Moreover, Hi Robot can interpret contextual comments from users, incorporating real-time feedback into its actions even for complex prompts.

This setup allows the robot to incorporate real-time feedback, such as when a user says “that’s not trash”, and adjust its actions accordingly.

The system has been tested on various robotic platforms, including single-arm, dual-arm, and mobile robots, performing tasks like cleaning tables and making sandwiches.

“Can we get our robots to ‘think’ the same way, with a little ‘voice’ that tells them what to do when presented with a complex task?” the researchers wrote in the company’s official blog post. This advancement could lead to more intuitive and flexible robot capabilities in real-world applications.

Researchers plan to refine the system in the future by combining the high-level and low-level models, allowing for more adaptive processing of complex tasks.

Semantic understanding, not just vectors: How Intuit’s data architecture powers agentic AI with measurable ROI

Intuit — the financial software giant behind products like TurboTax and QuickBooks — is making significant strides using generative AI to enhance its offerings for small business customers.

In a tech landscape flooded with AI promises, Intuit has built an agent-based AI architecture that’s delivering tangible business outcomes for small businesses. The organization has deployed what it calls “done for you” experiences that autonomously handle entire workflows and deliver quantifiable business impact.

Intuit has been building out its own AI layer, which it calls a generative AI operating system (GenOS). The company detailed some of the ways it is using gen AI to improve personalization at VB Transform 2024. In Sept. 2024, Intuit added agentic AI workflows, an effort that has improved operations for both the company and its customers.

According to Intuit, QuickBooks Online customers are getting paid an average of five days faster, with overdue invoices 10% more likely to be paid in full. For small businesses where cash flow is king, these aren’t just incremental improvements — they’re potentially business-saving innovations.

The technical trinity: How Intuit’s data architecture enables true agentic AI.

What separates Intuit’s approach from competitors is its sophisticated data architecture designed specifically to enable agent-based AI experiences.

The company has built what chief data officer Ashok Srivastava calls “a trinity” of data systems:

Data lake: The foundational repository for all data.

Customer data cloud (CDC): A specialized serving layer for AI experiences.

Event bus: A streaming data system enabling real-time operations.

“CDC provides a serving layer for AI experiences, then the data lake is kind of the repository for all such data,” Srivastava told VentureBeat. “The agent is going to be interacting with data, and it has a set of data that it could look at in order to pull information.”
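
Intuit’s internal interfaces are not public, so the sketch below is purely illustrative of how an agent might sit on top of the three systems Srivastava describes; every class and method name is hypothetical.

```python
# Purely illustrative: an agent consuming the "trinity". The data lake
# holds full history, the CDC serves low-latency context, and the event
# bus pushes real-time signals. All names are hypothetical.

class DataLake:
    """Foundational repository: complete history, batch-style access."""
    def query(self, sql: str) -> list[dict]:
        return []  # stub

class CustomerDataCloud:
    """Serving layer shaped for fast reads by AI experiences."""
    def get_context(self, customer_id: str) -> dict:
        return {}  # stub

class EventBus:
    """Streaming layer: agents subscribe and react as events happen."""
    def subscribe(self, topic: str, handler) -> None:
        pass  # stub

class Agent:
    def __init__(self, lake: DataLake, cdc: CustomerDataCloud, bus: EventBus):
        self.lake, self.cdc = lake, cdc
        bus.subscribe("invoice.overdue", self.on_event)

    def on_event(self, event: dict) -> None:
        # Fast path: pull working context from the serving layer; fall
        # back to the lake only for deep historical questions.
        context = self.cdc.get_context(event["customer_id"])
        print("agent reasoning over:", context)
```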

Going beyond vector embeddings to power agentic AI.

The Intuit architecture diverges from the typical vector database approach many enterprises are hastily implementing. While vector databases and embeddings are important for powering AI models, Intuit recognizes that true semantic understanding requires a more holistic approach.

“Where the key issue continues to be is essentially in ensuring that we have a good, logical and semantic understanding of the data,” said Srivastava.

To achieve this semantic understanding, Intuit is building out a semantic data layer on top of its core data infrastructure. The semantic data layer helps provide context and meaning around the data, beyond just the raw data itself or its vector representations. It allows Intuit’s AI agents to better understand the relationships and connections between different data elements.

By building this semantic data layer, Intuit is able to augment the capabilities of its vector-based systems with a deeper, more contextual understanding of data. This allows AI agents to make more informed and meaningful decisions for customers.
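
As a rough illustration of the distinction, the sketch below layers typed relationships over a plain vector lookup; the data model and function names are invented, not Intuit’s.

```python
# Illustrative only: enriching raw vector-search hits with a semantic
# layer of typed relationships, so the agent sees meaning and context
# rather than nearest neighbors alone. All names are hypothetical.

SEMANTIC_LAYER = {  # entity -> [(relationship, related entity), ...]
    "invoice_142": [("billed_to", "customer_acme"), ("part_of", "project_q3")],
    "customer_acme": [("payment_terms", "net_30")],
}

def vector_search(query: str, k: int = 3) -> list[str]:
    """Stand-in for an embedding lookup returning nearest entity ids."""
    return ["invoice_142"]  # stub

def retrieve_with_context(query: str) -> dict:
    hits = vector_search(query)
    # Attach every relationship the semantic layer knows about each hit.
    return {h: SEMANTIC_LAYER.get(h, []) for h in hits}

print(retrieve_with_context("who still owes us for the Q3 project?"))
```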

Beyond basic automation: How agentic AI completes entire business processes autonomously.

Unlike enterprises implementing AI for basic workflow automation or customer service chatbots, Intuit has focused on creating fully agentic “done for you” experiences. These are applications that handle complex, multi-step tasks while requiring only final human approval.

For QuickBooks customers, the agentic system analyzes a customer’s payment history and invoice status to automatically draft personalized reminder messages, allowing business owners to simply review and approve before sending. The system’s ability to personalize based on relationship context and payment patterns has directly contributed to measurably faster payments.
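
A minimal sketch of what such a “done for you” flow could look like; the thresholds, field names, and tone logic are invented for illustration and are not Intuit’s implementation.

```python
# Hypothetical "done for you" reminder flow: analyze payment history,
# draft a personalized message, and hold it for human approval.
from statistics import mean

def pick_tone(days_late_history: list[int]) -> str:
    """Choose wording from how this customer has paid in the past."""
    if not days_late_history or mean(days_late_history) <= 3:
        return "friendly"  # usually on time: gentle nudge
    return "firm"          # chronically late: firmer wording

def draft_reminder(customer: dict, invoice: dict) -> dict:
    tone = pick_tone(customer["days_late_history"])
    body = (
        f"Hi {customer['name']}, a quick reminder that invoice "
        f"{invoice['number']} for ${invoice['amount']:.2f} is now due."
    )
    # Returned as a draft: the business owner reviews and approves
    # before anything is sent.
    return {"to": customer["email"], "tone": tone,
            "body": body, "status": "pending_approval"}
```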

Intuit is applying identical agentic principles internally, developing autonomous procurement systems and HR assistants.

“We have the ability to have an internal agentic procurement process that employees can use to purchase supplies and book travel,” Srivastava explained, demonstrating how the organization is eating its own AI dog food.

What potentially gives Intuit a competitive advantage over other enterprise AI implementations is how the system was designed with foresight about the emergence of advanced reasoning models like DeepSeek.

“We built gen runtime in anticipation of reasoning models coming up,” Srivastava revealed. “We’re not behind the eight ball … we’re ahead of it. We built the capabilities assuming that reasoning would exist.”

This forward-thinking design means Intuit can rapidly incorporate new reasoning capabilities into its agentic experiences as they emerge, without requiring architectural overhauls. Intuit’s engineering teams are already using these capabilities to enable agents to reason across a large number of tools and data sources in ways that weren’t previously possible.

Shifting from AI hype to business impact.

Perhaps most significantly, Intuit’s approach exhibits a clear focus on business outcomes rather than technological showmanship.

“There’s a lot of work and a lot of fanfare going on these days on AI itself, that it’s going to revolutionize the world, and all of that, which I think is good,” noted Srivastava. “But I think what’s a lot better is to show that it’s actually helping real people do better.”

The company believes deeper reasoning capabilities will enable even more comprehensive “done for you” experiences that cover more customer needs with greater depth. Each experience combines multiple atomic experiences, or discrete operations, that together create a complete workflow solution.

What this means for enterprises adopting AI.

For enterprises looking to implement AI effectively, Intuit’s approach offers several valuable lessons:

Focus on outcomes over technology: Rather than showcasing AI for its own sake, target specific business pain points with measurable improvement goals.

Build with future models in mind: Design architecture that can incorporate emerging reasoning capabilities without requiring a complete rebuild.

Address data challenges first: Before rushing to implement agents, ensure your data foundation can support semantic understanding and cross-system reasoning.

Create complete experiences: Look beyond simple automation to create end-to-end “done for you” workflows that deliver complete solutions.

As agentic AI continues to mature, enterprises that follow Intuit’s example by focusing on complete solutions rather than isolated AI features may find themselves achieving similar concrete business results rather than simply generating tech buzz.

Market Impact Analysis

Market Growth Trend

2018: 23.1%   2019: 27.8%   2020: 29.2%   2021: 32.4%   2022: 34.2%   2023: 35.2%   2024: 35.6%

Quarterly Growth Rate

Q1 2024: 32.5%   Q2 2024: 34.8%   Q3 2024: 36.2%   Q4 2024: 35.6%

Market Segments and Growth Drivers

Segment                       Market Share   Growth Rate
Machine Learning              29%            38.4%
Computer Vision               18%            35.7%
Natural Language Processing   24%            41.5%
Robotics                      15%            22.3%
Other AI Technologies         14%            31.8%

Competitive Landscape Analysis

Company        Market Share
Google AI      18.3%
Microsoft AI   15.7%
IBM Watson     11.2%
Amazon AI      9.8%
OpenAI         8.4%

Future Outlook and Predictions

The AI landscape is evolving rapidly, driven by technological advancements and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:

Year-by-Year Technology Evolution

Based on current trajectory and expert analyses, we can project the following development timeline:

2024: Early adopters begin implementing specialized solutions with measurable results
2025: Industry standards emerging to facilitate broader adoption and integration
2026: Mainstream adoption begins as technical barriers are addressed
2027: Integration with adjacent technologies creates new capabilities
2028: Business models transform as capabilities mature
2029: Technology becomes embedded in core infrastructure and processes
2030: New paradigms emerge as the technology reaches full maturity

Technology Maturity Curve

Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:

Innovation Trigger

  • Generative AI for specialized domains
  • Blockchain for supply chain verification

Peak of Inflated Expectations

  • Digital twins for business processes
  • Quantum-resistant cryptography

Trough of Disillusionment

  • Consumer AR/VR applications
  • General-purpose blockchain

Slope of Enlightenment

  • AI-driven analytics
  • Edge computing

Plateau of Productivity

  • Cloud infrastructure
  • Mobile applications

Technology Evolution Timeline

1-2 Years
  • Improved generative models
  • Specialized AI applications
3-5 Years
  • AI-human collaboration systems
  • Multimodal AI platforms
5+ Years
  • General AI capabilities
  • AI-driven scientific breakthroughs

Expert Perspectives

Leading experts in the AI sector provide diverse perspectives on how the landscape will evolve over the coming years:

"The next frontier is AI systems that can reason across modalities and domains with minimal human guidance."

— AI Researcher

"Organizations that develop effective AI governance frameworks will gain competitive advantage."

— Industry Analyst

"The AI talent gap remains a critical barrier to implementation for most enterprises."

— Chief AI Officer

Areas of Expert Consensus

  • Acceleration of Innovation: The pace of technological evolution will continue to increase
  • Practical Integration: Focus will shift from proof-of-concept to operational deployment
  • Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
  • Regulatory Influence: Regulatory frameworks will increasingly shape technology development

Short-Term Outlook (1-2 Years)

In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing AI challenges:

  • Improved generative models
  • Specialized AI applications
  • Enhanced AI ethics frameworks

These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.

Mid-Term Outlook (3-5 Years)

As technologies mature and organizations adapt, more substantial transformations will emerge in how AI is approached and implemented:

  • AI-human collaboration systems
  • Multimodal AI platforms
  • Democratized AI development

This period will see significant changes in architecture and operational models, with increasing automation and integration between previously siloed functions. Organizations will shift from reactive to proactive postures.

Long-Term Outlook (5+ Years)

Looking further ahead, more fundamental shifts will reshape how AI is conceptualized and implemented across digital ecosystems:

  • General AI capabilities
  • AI-driven scientific breakthroughs
  • New computing paradigms

These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach AI as a fundamental business function rather than a technical discipline.

Key Risk Factors and Uncertainties

Several critical factors could significantly impact the trajectory of AI evolution:

  • Ethical concerns about AI decision-making
  • Data privacy regulations
  • Algorithm bias

Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.

Alternative Future Scenarios

The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:

Optimistic Scenario

Responsible AI driving innovation while minimizing societal disruption

Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.

Probability: 25-30%

Base Case Scenario

Incremental adoption with mixed societal impacts and ongoing ethical challenges

Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.

Probability: 50-60%

Conservative Scenario

Technical and ethical barriers creating significant implementation challenges

Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.

Probability: 15-20%

Scenario Comparison Matrix

Factor                    Optimistic       Base Case      Conservative
Implementation Timeline   Accelerated      Steady         Delayed
Market Adoption           Widespread       Selective      Limited
Technology Evolution      Rapid            Progressive    Incremental
Regulatory Environment    Supportive       Balanced       Restrictive
Business Impact           Transformative   Significant    Modest

Transformational Impact

Redefinition of knowledge work, automation of creative processes. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.

The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented security challenges and innovative defensive capabilities.

Implementation Challenges

Ethical concerns, computing resource limitations, talent shortages. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.

Regulatory uncertainty, particularly around emerging AI applications, will require flexible architectures that can adapt to evolving compliance requirements.

Key Innovations to Watch

Multimodal learning, resource-efficient AI, transparent decision systems. Organizations should monitor these developments closely to maintain competitive advantage.

Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.

Technical Glossary

Key technical terms and definitions to help understand the technologies discussed in this article. These definitions provide context for both technical and non-technical readers.

platform: Platforms provide standardized environments that reduce development complexity and enable ecosystem growth through shared functionality and integration capabilities.

API: APIs serve as the connective tissue in modern software architectures, enabling different applications and services to communicate and share data according to defined protocols and data formats. Example: Cloud service providers like AWS, Google Cloud, and Azure offer extensive APIs that allow organizations to programmatically provision and manage infrastructure and services.