Technology News from Around the World, Instantly on Oracnoos!


The best robot vacuum deals of February 2025: Save on Roomba, Roborock, Eufy, and more


'ZDNET Recommends': What exactly does it mean?

ZDNET's recommendations are based on many hours of testing, research, and comparison shopping. We gather data from the best available sources, including vendor and retailer listings as well as other relevant and independent review sites. And we pore over customer reviews to find out what matters to real people who already own and use the products and services we’re assessing.

ZDNET's editorial team writes on behalf of you, our reader. Our goal is to deliver the most accurate information and the most knowledgeable advice possible in order to help you make smarter buying decisions on tech gear and a wide array of products and services. Our editors thoroughly review and fact-check every article to ensure that our content meets the highest standards. If we have made an error or published misleading information, we will correct or clarify the article. If you see inaccuracies in our content, please report the mistake via this form.




Using AI to Fix Problems It Creates


Hallucinations continue to be one of the most critical challenges with AI. Although databases and a few other methods can mitigate this, they’re not the only solution. Some AI frameworks, like CTGT, have aimed to nearly eradicate hallucinations, but their effectiveness has yet to be universally felt. Notably, this approach involves another form of AI.

At MLDS 2025, India’s largest GenAI summit for developers organised by AIM, Ratnesh Singh Parihar, principal architect at Talentica Software, noted his team uses AI to fix AI hallucinations.

Parihar discussed the difficulty of handling over 10 million stock-keeping units (SKUs) in e-commerce search with AI and how AI is used to fix AI-related problems.

Ritesh Agarwal, solution architect at Talentica Software, explained that when clients search for items, such as ‘pink t-shirt for toddlers’, traditional AI methods convert queries into embeddings and use cosine similarity to find relevant products.
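The embedding-and-cosine-similarity retrieval Agarwal describes can be sketched in a few lines. The vectors below are toy three-dimensional stand-ins with hypothetical values (a real system would obtain embeddings from a model API), so the product names and numbers are illustrative only:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for real model output (hypothetical values).
product_embeddings = {
    "pink toddler t-shirt": np.array([0.9, 0.1, 0.0]),
    "blue jeans":           np.array([0.1, 0.9, 0.2]),
    "wristband":            np.array([0.2, 0.3, 0.9]),
}
# Embedding of the query "pink t-shirt for toddlers".
query_embedding = np.array([0.85, 0.15, 0.05])

# Rank products by similarity to the query, most similar first.
ranked = sorted(
    product_embeddings.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
print(ranked[0][0])  # → pink toddler t-shirt
```

Hallucination-like failures show up when a weakly related item still scores high enough to make the cut, which is what a separate validation pass is meant to catch.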

However, hallucinations occur when irrelevant items, like jeans or wristbands, appear in the search results. To combat this, his team integrated AI-powered validation checks using OpenAI to flag inaccurate results stemming from hallucinations and inconsistencies generated by AI tools.

To validate results, Talentica Software ran test queries against its comparison models, whether based on semantics or cosine similarity. The system returned a simple true or false flag, which the team stored in its database.
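A minimal sketch of that true/false validation flow, assuming a hypothetical `ask_llm` helper that would wrap a hosted-model call (the actual prompt and model settings are not published). Here the helper is stubbed with a keyword check so the example runs offline:

```python
def ask_llm(prompt: str) -> str:
    # Stub standing in for a real LLM call (e.g. via OpenAI's API).
    # A real implementation would send the prompt to the model and
    # return its answer; this toy version just matches keywords.
    query, product = prompt.split(" ||| ")
    return "true" if all(word in product for word in ("pink", "t-shirt")) else "false"

def validate_result(query: str, product_description: str) -> bool:
    """Ask the model whether the product actually matches the query."""
    answer = ask_llm(f"{query} ||| {product_description}")
    return answer.strip().lower() == "true"

# Store one true/false flag per retrieved product, as described above.
validation_flags = {}
for product in ("pink t-shirt for toddlers", "blue jeans", "wristband"):
    validation_flags[product] = validate_result("pink t-shirt for toddlers", product)

print(validation_flags)
```

Irrelevant results like the jeans or the wristband receive a false flag and can be filtered out before the customer ever sees them.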

Parihar further expanded on this idea: “Let’s say you want to generate a research paper using ChatGPT. It will generate good content, but the conferences will reject it because they can figure out what you say is machine-generated. No? But, some other AI tools can take that content and humanise it. So, AI has generated one problem, but you can use another AI tool to solve it. That’s how you can go about it.”

While he mentioned that it is best not to build things manually unless you absolutely need to, Parihar also stressed the need to minimise human involvement when building AI. “You cannot say…’I will create XYZ’, and then some people will come and verify the XYZ. You need to build the bots.”

“You might need a person who knows a lot of AI tools. So they can use the tools. And, most importantly, you require someone who can convert those tools into bots, human-like,” he added.

Combining AI Tools Not Just to Fix Problems But to Save Cost

The team at Talentica Software specifically mentioned that they combined Llama 3 and OpenAI models to improve results, maintain efficiency, and reduce the error rate.

In particular, Llama 3 was used for large-scale product categorisation and tagging, significantly reducing costs compared to OpenAI.

OpenAI, in turn, was used as a validation mechanism to identify errors and hallucinations in the e-commerce system's search results. The AI system compared search results against image descriptions and product details, flagging inaccuracies through a true/false validation system.

Parihar also provided insights into why they switched to Llama 3 for tag and category generation, and how doing so cut their SQL query costs by 97%. With OpenAI, 1 million SQL queries cost them around $500; the Llama 3 model cost just $15 for the same workload.
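The 97% figure follows directly from the two prices quoted; a quick sanity check:

```python
# Reported costs for ~1 million SQL queries (figures from the article).
openai_cost = 500.0   # dollars with OpenAI
llama3_cost = 15.0    # dollars with Llama 3

savings = (openai_cost - llama3_cost) / openai_cost
print(f"Cost reduction: {savings:.0%}")  # → Cost reduction: 97%
```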

It is intriguing to see that the AI we built comes in handy in solving errors generated by its own family while also reducing costs. We have come a long way in a short span of time, from relying on humans to reduce hallucinations to trusting another AI. Fortunately, we do not need extra human effort to find solutions to these problems, ultimately making good use of AI.




US sets AI safety aside in favor of 'AI dominance'


In October 2023, former president Joe Biden signed an executive order that included several measures for regulating AI. On his first day in office, President Trump overturned it, replacing it a few days later with his own order on AI in the US.

This week, some government agencies that enforce AI regulation were told to halt their work, while the director of the US AI Safety Institute (AISI) stepped down.


So what does this mean practically for the future of AI regulation? Here's what you need to know.

What Biden's order accomplished -- and didn't

In addition to naming several initiatives around protecting civil rights, jobs, and privacy as AI accelerates, Biden's order focused on responsible development and compliance. However, as ZDNET's Tiernan Ray wrote at the time, the order could have been more specific, leaving loopholes available in much of the guidance. Though it required companies to report on any safety testing efforts, it didn't make red-teaming itself a requirement or clarify any standards for testing. Ray pointed out that because AI as a discipline is very broad, regulating it needs -- but is also hampered by -- specificity.

A Brookings report noted in November that because federal agencies absorbed many of the directives in Biden's order, those directives may be protected from Trump's repeal. But that protection is looking less and less likely.

Biden's order established the US AI Safety Institute (AISI), which is part of the National Institute of Standards and Technology (NIST). The AISI conducted AI model testing and worked with developers to improve safety measures, among other regulatory initiatives. In August, AISI signed agreements with Anthropic and OpenAI to collaborate on safety testing and research; in November, it established a testing and national security task force.

On Wednesday, likely due to Trump administration shifts, AISI director Elizabeth Kelly announced her departure from the institute via LinkedIn. The fate of both initiatives, and the institute itself, is now unclear.

The Consumer Financial Protection Bureau (CFPB) also carried out many of the Biden order's objectives. For example, a June 2023 CFPB study on chatbots in consumer finance noted that they "may provide incorrect information, fail to provide meaningful dispute resolution, and raise privacy and security risks." CFPB guidance states lenders have to provide reasons for denying someone credit regardless of whether or not their use of AI makes this difficult or opaque. In June 2024, CFPB approved a new rule to ensure algorithmic home appraisals are fair, accurate, and comply with nondiscrimination law.

This week, the Trump administration halted work at CFPB, signaling that it may be on the chopping block -- which would severely undermine the enforcement of these efforts.


CFPB is in charge of ensuring companies comply with anti-discrimination measures like the Equal Credit Opportunity Act and the Consumer Financial Protection Act, and has noted that AI adoption can exacerbate discrimination and bias. In an August 2024 comment, CFPB noted it was "focused on monitoring the market for consumer financial products and services to identify risks to consumers and ensure that companies using emerging technologies, including those marketed as 'artificial intelligence' or 'AI,' do not violate federal consumer financial protection laws." It also stated it was monitoring "the future of consumer finance" and "novel uses of consumer data."

"Firms must comply with consumer financial protection laws when adopting emerging technology," the comment continues. It's unclear what body would enforce this if CFPB radically changes course or ceases to exist under new leadership.

On January 23rd, President Trump signed his own executive order on AI. In terms of policy, the single-line directive says only that the US must "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security."

Unlike Biden's order, terms like "safety," "consumer," "data," and "privacy" don't appear at all. There are no mentions of whether the Trump administration plans to prioritize safeguarding individual protections or address bias in the face of AI development. Instead, it focuses on removing what the White House called "unnecessarily burdensome requirements for companies developing and deploying AI," seemingly focusing on industry advancement.


The order goes on to direct officials to find and remove "inconsistencies" with it in government agencies -- that is to say, remnants of Biden's order that have been or are still being carried out.

In March 2024, the Biden administration released an additional memo stating government agencies using AI would have to prove those tools weren't harmful to the public. Like other Biden-era executive orders and related directives, it emphasized responsible deployment, centering AI's impact on individual citizens. Trump's executive order notes that it will review (and likely dismantle) much of this memo by March 24th.

This is especially concerning given that last week, OpenAI released ChatGPT Gov, a version of OpenAI's chatbot optimized for security and government systems. It's unclear when government agencies will get access to the chatbot or whether there will be parameters around how it can be used, though OpenAI says government workers already use ChatGPT. If the Biden memo -- which has since been removed from the White House website -- is gutted, it's hard to say whether ChatGPT Gov will be held to any similar standards that account for harm.

Trump's executive order gave his staff 180 days to come up with an AI policy, meaning its deadline to materialize is July 22nd. On Wednesday, the Trump administration put out a call for public comment to inform that action plan.

The Trump administration is disrupting AISI and CFPB -- two key bodies that carry out Biden's protections -- without a formal policy in place to catch fallout. That leaves AI oversight and compliance in a murky state for at least the next six months (millennia in AI development timelines, given the rate at which the technology evolves), all while tech giants become even more entrenched in government partnerships and initiatives like Project Stargate.


Considering global AI regulation is still far behind the rate of advancement, perhaps it was better to have something rather than nothing.

"While Biden's AI executive order may have been mostly symbolic, its rollback signals the Trump administration's willingness to overlook the potential dangers of AI," said Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "This could prove to be shortsighted: a high-profile failure -- what we might call a 'Chernobyl moment' -- could spark a crisis of public confidence, slowing the progress that the administration hopes to accelerate."

"We don't want advanced AI that is unsafe, untrustworthy, or unreliable -- no one is better off in that scenario," he added.




Market Impact Analysis

Market Growth Trend

Year:   2018  | 2019  | 2020  | 2021  | 2022  | 2023  | 2024
Growth: 23.1% | 27.8% | 29.2% | 32.4% | 34.2% | 35.2% | 35.6%

Quarterly Growth Rate

Quarter: Q1 2024 | Q2 2024 | Q3 2024 | Q4 2024
Growth:  32.5%   | 34.8%   | 36.2%   | 35.6%

Market Segments and Growth Drivers

Segment | Market Share | Growth Rate
Machine Learning | 29% | 38.4%
Computer Vision | 18% | 35.7%
Natural Language Processing | 24% | 41.5%
Robotics | 15% | 22.3%
Other AI Technologies | 14% | 31.8%


Competitive Landscape Analysis

Company | Market Share
Google AI | 18.3%
Microsoft AI | 15.7%
IBM Watson | 11.2%
Amazon AI | 9.8%
OpenAI | 8.4%

Future Outlook and Predictions

The AI technology landscape is evolving rapidly, driven by technological advancements, changing market conditions, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:

Year-by-Year Technology Evolution

Based on current trajectory and expert analyses, we can project the following development timeline:

2024: Early adopters begin implementing specialized solutions with measurable results
2025: Industry standards emerging to facilitate broader adoption and integration
2026: Mainstream adoption begins as technical barriers are addressed
2027: Integration with adjacent technologies creates new capabilities
2028: Business models transform as capabilities mature
2029: Technology becomes embedded in core infrastructure and processes
2030: New paradigms emerge as the technology reaches full maturity

Technology Maturity Curve

Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:

(Interactive diagram available in full report: adoption/maturity plotted against development stage, from Innovation and Early Adoption through Growth and Maturity to Decline/Legacy.)

Innovation Trigger

  • Generative AI for specialized domains
  • Blockchain for supply chain verification

Peak of Inflated Expectations

  • Digital twins for business processes
  • Quantum-resistant cryptography

Trough of Disillusionment

  • Consumer AR/VR applications
  • General-purpose blockchain

Slope of Enlightenment

  • AI-driven analytics
  • Edge computing

Plateau of Productivity

  • Cloud infrastructure
  • Mobile applications

Technology Evolution Timeline

1-2 Years
  • Improved generative models
  • Specialized AI applications
3-5 Years
  • AI-human collaboration systems
  • Multimodal AI platforms
5+ Years
  • General AI capabilities
  • AI-driven scientific breakthroughs

Expert Perspectives

Leading experts in the AI tech sector provide diverse perspectives on how the landscape will evolve over the coming years:

"The next frontier is AI systems that can reason across modalities and domains with minimal human guidance."

— AI Researcher

"Organizations that develop effective AI governance frameworks will gain competitive advantage."

— Industry Analyst

"The AI talent gap remains a critical barrier to implementation for most enterprises."

— Chief AI Officer

Areas of Expert Consensus

  • Acceleration of Innovation: The pace of technological evolution will continue to increase
  • Practical Integration: Focus will shift from proof-of-concept to operational deployment
  • Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
  • Regulatory Influence: Regulatory frameworks will increasingly shape technology development

Short-Term Outlook (1-2 Years)

In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing AI tech challenges:

  • Improved generative models
  • Specialized AI applications
  • Enhanced AI ethics frameworks

These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.

Mid-Term Outlook (3-5 Years)

As technologies mature and organizations adapt, more substantial transformations will emerge in how AI is approached and implemented:

  • AI-human collaboration systems
  • Multimodal AI platforms
  • Democratized AI development

This period will see significant changes in system architecture and operational models, with increasing automation and integration between previously siloed functions. Organizations will shift from reactive to proactive operating postures.

Long-Term Outlook (5+ Years)

Looking further ahead, more fundamental shifts will reshape how AI is conceptualized and implemented across digital ecosystems:

  • General AI capabilities
  • AI-driven scientific breakthroughs
  • New computing paradigms

These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach AI as a fundamental business function rather than a technical discipline.

Key Risk Factors and Uncertainties

Several critical factors could significantly impact the trajectory of AI tech evolution:

Ethical concerns about AI decision-making
Data privacy regulations
Algorithm bias

Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.

Alternative Future Scenarios

The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:

Optimistic Scenario

Responsible AI driving innovation while minimizing societal disruption

Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.

Probability: 25-30%

Base Case Scenario

Incremental adoption with mixed societal impacts and ongoing ethical challenges

Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.

Probability: 50-60%

Conservative Scenario

Technical and ethical barriers creating significant implementation challenges

Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.

Probability: 15-20%

Scenario Comparison Matrix

Factor | Optimistic | Base Case | Conservative
Implementation Timeline | Accelerated | Steady | Delayed
Market Adoption | Widespread | Selective | Limited
Technology Evolution | Rapid | Progressive | Incremental
Regulatory Environment | Supportive | Balanced | Restrictive
Business Impact | Transformative | Significant | Modest

Transformational Impact

Redefinition of knowledge work, automation of creative processes. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.

The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented security challenges and innovative defensive capabilities.

Implementation Challenges

Ethical concerns, computing resource limitations, talent shortages. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.

Regulatory uncertainty, particularly around emerging technologies like AI in security applications, will require flexible security architectures that can adapt to evolving compliance requirements.

Key Innovations to Watch

Multimodal learning, resource-efficient AI, transparent decision systems. Organizations should monitor these developments closely to maintain competitive advantages and effective security postures.

Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.

Technical Glossary

Key technical terms and definitions to help understand the technologies discussed in this article.

Understanding the following technical concepts is essential for grasping the full implications of the technologies discussed in this article. These definitions provide context for both technical and non-technical readers.


platform (intermediate)
Platforms provide standardized environments that reduce development complexity and enable ecosystem growth through shared functionality and integration capabilities.

algorithm (intermediate)

embeddings (intermediate)

API (beginner)
APIs serve as the connective tissue in modern software architectures, enabling different applications and services to communicate and share data according to defined protocols and data formats.
[Diagram: How APIs enable communication between different software systems]
Example: Cloud service providers like AWS, Google Cloud, and Azure offer extensive APIs that allow organizations to programmatically provision and manage infrastructure and services.

encryption (intermediate)
Modern encryption uses complex mathematical algorithms to convert readable data into encoded formats that can only be accessed with the correct decryption keys, forming the foundation of data security.
[Diagram: Basic encryption process showing plaintext conversion to ciphertext via an encryption key]

machine learning (intermediate)