I Won’t Change Unless You Do

In game theory, how can players ever settle on a decision if there might still be a better option to choose? Maybe one player still wants to change their decision. But if they do, maybe the other player wants to change too. How can they ever hope to escape this vicious circle? To solve this problem, the concept of a Nash equilibrium, which I will explain in this article, is fundamental to game theory.

This article is the second part of a four-chapter series on game theory. If you haven’t checked out the first chapter yet, I’d encourage you to do that to get familiar with the main terms and concepts of game theory. If you did so, you are prepared for the next steps of our journey through game theory. Let’s go!

Finding a solution to a game in game theory can be tricky sometimes. Photo by Mel Poole on Unsplash.

We will now try to find a solution for a game in game theory. A solution is a set of actions where each player maximizes their utility and therefore behaves rationally. That does not necessarily mean that each player wins the game, but that they do the best they can, given that they don’t know what the other players will do. Let’s consider the following game:

If you are unfamiliar with this matrix notation, you might want to look back at Chapter 1 and refresh your memory. Do you remember that this matrix gives the reward for each player for a specific pair of actions? For example, if player 1 chooses action Y and player 2 chooses action B, player 1 will get a reward of 1 and player 2 will get a reward of 3.

Okay, which actions should the players choose now? Player 1 does not know what player 2 will do, but they can still work out the best action for each possible choice by player 2. If we compare the utilities of actions Y and Z (indicated by the blue and red boxes in the next figure), we notice something interesting: if player 2 chooses action A (first column of the matrix), player 1 gets a reward of 3 with action Y and a reward of 2 with action Z, so action Y is superior in that case. But what happens if player 2 chooses action B (second column)? In that case, action Y gives a reward of 1 and action Z gives a reward of 0, so Y is superior to Z again. And if player 2 chooses action C (third column), Y is still superior to Z (reward of 2 vs. reward of 1). That means player 1 should never play action Z, because action Y is always superior.

We compare the rewards for player 1 for actions Y and Z.

With the aforementioned considerations, player 2 can anticipate that player 1 would never play action Z, and hence player 2 doesn’t have to care about the rewards associated with action Z. This makes the game much smaller: there are now only two options left for player 1, which also helps player 2 choose their action.

We found out that for player 1, Y is always superior to Z, so we don’t consider Z anymore.

If we look at the truncated game, we see that for player 2, action B is always better than action A. If player 1 chooses X, action B (with a reward of 2) is better than action A (with a reward of 1), and the same applies if player 1 chooses action Y. Note that this would not be the case if action Z were still in the game. However, we already saw that action Z will never be played by player 1 anyway.

We compare the rewards for player 2 for actions A and B.

As a consequence, player 2 would never use action A. Now if player 1 anticipates that player 2 never uses action A, the game becomes smaller again and fewer options have to be considered.

We saw that for player 2, action B is always better than action A, so we don’t have to consider A anymore.

We can continue in the same fashion and see that for player 1, X is now always better than Y (2>1 and 4>2). Finally, if player 1 chooses action X, player 2 will choose action B, which is better than C (2>0). In the end, only action X (for player 1) and action B (for player 2) are left. That is the solution of our game:

In the end, only one option remains, namely player 1 using X and player 2 using B.

It would be rational for player 1 to choose action X and for player 2 to choose action B. Note that we came to that conclusion without knowing exactly what the other player would do. We just anticipated that some actions would never be taken, because they are always worse than other actions. Such actions are called strictly dominated. For example, action Z is strictly dominated by action Y, because Y is always better than Z.
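
This step-by-step elimination of strictly dominated actions is mechanical enough to sketch in code. A minimal Python sketch follows; the payoff matrix uses the rewards the article states where they are given, while the remaining entries (marked in comments) are placeholder values chosen only to be consistent with the elimination the article describes.

```python
# Iterated elimination of strictly dominated actions, sketched in Python.
# Payoffs are (player 1, player 2). Entries marked as placeholders are not
# given in the article; they were chosen to match the described elimination.
payoffs = {
    ("X", "A"): (1, 1), ("X", "B"): (2, 2), ("X", "C"): (4, 0),
    ("Y", "A"): (3, 2), ("Y", "B"): (1, 3), ("Y", "C"): (2, 1),  # (Y,A) p2 is a placeholder
    ("Z", "A"): (2, 3), ("Z", "B"): (0, 0), ("Z", "C"): (1, 1),  # row Z's p2 values are placeholders
}

def strictly_dominated(action, own_actions, other_actions, player, payoffs):
    """True if some other own action yields a strictly higher payoff
    against every remaining action of the opponent."""
    def u(a, b):
        pair = (a, b) if player == 0 else (b, a)
        return payoffs[pair][player]
    return any(
        all(u(alt, b) > u(action, b) for b in other_actions)
        for alt in own_actions if alt != action
    )

def iterated_elimination(rows, cols, payoffs):
    """Repeatedly remove strictly dominated actions for both players."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if strictly_dominated(r, rows, cols, 0, payoffs):
                rows.remove(r); changed = True
        for c in cols[:]:
            if strictly_dominated(c, cols, rows, 1, payoffs):
                cols.remove(c); changed = True
    return rows, cols

print(iterated_elimination(["X", "Y", "Z"], ["A", "B", "C"], payoffs))
# → (['X'], ['B']), matching the solution derived in the text
```

The exact order of removals may differ from the walkthrough above, but iterated elimination always ends at the same surviving actions here: X and B.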

Scrabble is one of those games, where searching for the best answer can take ages. Photo by Freysteinn G. Jonsson on Unsplash.

Such strictly dominated actions do not always exist, but there is a similar concept that is important for us, called a best answer. Say we know which action the other player chooses. In that case, deciding on an action becomes very easy: we just take the action that has the highest reward. If player 1 knew that player 2 chose action A, the best answer for player 1 would be Y, because Y has the highest reward in that column. Do you see how we always searched for best answers before? For each possible action of the other player, we searched for the best answer if the other player chose that action. More formally, player i’s best answer to a given set of actions of all other players is the action of player i which maximises the utility given the other players’ actions. Also be aware that a strictly dominated action can never be a best answer.

Let us come back to a game we introduced in the first chapter: The prisoners’ dilemma. What are the best answers here?

How should player 1 decide if player 2 confesses or denies? If player 2 confesses, player 1 should confess as well, because a reward of -3 is better than a reward of -6. And what happens if player 2 denies? In that case, confessing is better again, because it gives a reward of 0, which is better than the reward of -1 for denying. That means that for player 1, confessing is the best answer to both actions of player 2. Player 1 doesn’t have to worry about the other player’s actions at all and should always confess. Because of the game’s symmetry, the same applies to player 2: for them, confessing is also the best answer, no matter what player 1 does.
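
This best-answer logic can be sketched directly for the prisoners’ dilemma, using the rewards quoted above: -3 each for mutual confession, -1 each for mutual denial, and 0 / -6 when exactly one player confesses.

```python
# Best answers in the prisoners' dilemma. Payoffs are (player 1, player 2),
# using the rewards stated in the text.
pd_payoffs = {
    ("confess", "confess"): (-3, -3), ("confess", "deny"): (0, -6),
    ("deny", "confess"): (-6, 0),     ("deny", "deny"): (-1, -1),
}

def best_answer(player, opponent_action, actions, payoffs):
    """The own action that maximises the player's reward,
    given the opponent's fixed action."""
    def u(a):
        pair = (a, opponent_action) if player == 0 else (opponent_action, a)
        return payoffs[pair][player]
    return max(actions, key=u)

for opp in ["confess", "deny"]:
    print(best_answer(0, opp, ["confess", "deny"], pd_payoffs))
# prints "confess" both times: confessing is player 1's best answer
# no matter what player 2 does
```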

The Nash equilibrium is somewhat like the master key that allows us to solve game-theoretic problems. Researchers were very happy when they found it. Photo by [website] NFT gallery on Unsplash.

If all players play their best answer, we have reached a solution of the game that is called a Nash equilibrium. This is a key concept in game theory because of an essential property: in a Nash equilibrium, no player has any reason to change their action, unless another player does. That means all players are as happy as they can be in the situation, and they wouldn’t change even if they could. Consider the prisoners’ dilemma from above: the Nash equilibrium is reached when both confess. In this case, no player would change their action on their own. They could be better off if both changed their action and decided to deny, but since they can’t communicate, they don’t expect any change from the other player, and so they don’t change themselves either.

You may wonder if there is always exactly one Nash equilibrium for each game. There can also be multiple ones, as in the Bach vs. Stravinsky game that we already got to know in Chapter 1:

This game has two Nash equilibria: (Bach, Bach) and (Stravinsky, Stravinsky). In both scenarios, you can easily see that no player has a reason to change their action in isolation. If you sit in the Bach concert with your friend, you would not leave your seat to go to the Stravinsky concert alone, even if you favour Stravinsky over Bach. Likewise, the Bach fan wouldn’t leave the Stravinsky concert if that meant leaving their friend alone. In the remaining two scenarios, you would think differently, though: if you were in the Stravinsky concert alone, you would want to leave and join your friend at the Bach concert. That is, you would change your action even if the other player doesn’t change theirs. This tells you that the scenario you were in was not a Nash equilibrium.

However, there can also be games that have no Nash equilibrium at all. Imagine you are a soccer goalkeeper facing a penalty shot. For simplicity, we assume you can jump to the left or to the right. The player of the opposing team can also shoot into the left or right corner, and we assume that you catch the ball if you pick the same corner as they do, and that you don’t catch it if you pick opposite corners. We can display this game as follows:

You won’t find a Nash equilibrium here. Each scenario has a clear winner (reward 1) and a clear loser (reward -1), so one of the players will always want to change. If you jump to the right and catch the ball, your opponent will want to switch to the left corner. But then you will want to change your decision again, which will make your opponent choose the other corner again, and so on.
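
Both observations (two equilibria in Bach vs. Stravinsky, none in the penalty shot) can be checked by brute force: a profile is a pure Nash equilibrium exactly when each player's action is a best answer to the other's. In the sketch below, the Bach vs. Stravinsky payoff numbers are assumed for illustration (the exact values from Chapter 1 may differ, but the equilibrium structure is the same), while the penalty-shot payoffs are the 1/-1 rewards from the text.

```python
# Enumerating pure Nash equilibria: keep a profile if neither player can
# gain by deviating unilaterally.
def pure_nash_equilibria(rows, cols, payoffs):
    eqs = []
    for r in rows:
        for c in cols:
            u1, u2 = payoffs[(r, c)]
            best_for_1 = u1 >= max(payoffs[(r2, c)][0] for r2 in rows)
            best_for_2 = u2 >= max(payoffs[(r, c2)][1] for c2 in cols)
            if best_for_1 and best_for_2:
                eqs.append((r, c))
    return eqs

# Bach vs. Stravinsky -- payoff numbers assumed (2 for coordinating on your
# favourite composer, 1 on the other's, 0 for miscoordination).
bos = {("Bach", "Bach"): (2, 1), ("Bach", "Stravinsky"): (0, 0),
       ("Stravinsky", "Bach"): (0, 0), ("Stravinsky", "Stravinsky"): (1, 2)}
print(pure_nash_equilibria(["Bach", "Stravinsky"], ["Bach", "Stravinsky"], bos))
# → [('Bach', 'Bach'), ('Stravinsky', 'Stravinsky')]

# The penalty-shot game from the text: the keeper (player 1) wins on a
# matching corner, the shooter wins otherwise.
penalty = {("left", "left"): (1, -1), ("left", "right"): (-1, 1),
           ("right", "left"): (-1, 1), ("right", "right"): (1, -1)}
print(pure_nash_equilibria(["left", "right"], ["left", "right"], penalty))
# → []  (no pure Nash equilibrium)
```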

We learned about finding a point of balance, where nobody wants to change anymore. That is a Nash equilibrium. Photo by Eran Menashri on Unsplash.

This chapter showed how to find solutions for games using the concept of a Nash equilibrium. Let us summarize what we have learned so far:

A solution of a game in game theory maximizes every player’s utility or reward.

An action is called strictly dominated if there is another action that is always better. In this case, it would be irrational to ever play the strictly dominated action.

The action that yields the highest reward given the actions taken by the other players is called a best answer.

A Nash equilibrium is a state where every player plays their best answer. In a Nash equilibrium, no player wants to change their action unless another player does. In that sense, Nash equilibria are optimal states.

Some games have multiple Nash equilibria and some games have none.

If you were saddened by the fact that there is no Nash equilibrium in some games, don’t despair! In the next chapter, we will introduce probabilities of actions and this will allow us to find more equilibria. Stay tuned!

The topics introduced here are typically covered in standard textbooks on game theory. I mainly used this one, which is written in German though:

Bartholomae, F., & Wiens, M. (2016). Spieltheorie. Ein anwendungsorientiertes Lehrbuch. Wiesbaden: Springer Fachmedien Wiesbaden.

An alternative in English language could be this one:

Espinola-Arredondo, A., & Muñoz-Garcia, F. (2023). Game Theory: An Introduction with Step-by-step Examples. Springer Nature.

Game theory is a rather young field of research, and the first major textbook on it was this one:

Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior.

Like this article? Follow me to be notified of my future posts.

Most US workers don't use AI at work yet. This study suggests a reason why

Since the launch of OpenAI's ChatGPT in 2022, artificial intelligence (AI) technologies have become increasingly entrenched in our daily lives. However, a new study suggests that the American workforce is largely uninterested in adopting AI en masse.

According to the study, around 80% of Americans don't generally use AI at work, while those who do use AI seem unenthusiastic about its benefits. Moreover, fewer than one-third of those surveyed said they're "excited" about using AI in future workplaces. Only 6% of workers say workplace AI use will lead to more job opportunities in the long run.

Also: 15 ways AI has saved me time at work - and how I plan to use it now.

For this study, Pew surveyed 5,273 US adults -- ranging from 18 to 65+ -- who are employed either part-time or full-time and have one or more jobs but consider one of those to be their primary job. The participants were asked questions that explored "how workers see the use of AI in the workplace overall, as well as their own experience with AI in their jobs."

The study explored how class, age, and education informed participants' answers to questions concerning AI use and job opportunities. For example, when asked whether workers are more worried than hopeful about the future of AI use in the workplace, respondents were far more "worried" (52%) than "hopeful" (36%) or "excited" (29%).

Knowledge workers in information and technology, banking, finance, accounting, real estate, and insurance are "among the most likely to say that the use of AI will lead to more job opportunities for them in the long run."

Workers with lower and middle incomes are more likely than those with higher incomes to be pessimistic about AI use in the workplace and convey sentiments that AI will lead to fewer job opportunities for them. In contrast, upper-income workers are more likely to say workplace AI use won't make much difference in their job opportunities.

Some 51% of AI users surveyed have at least a bachelor's degree, compared to 39% of non-users. Even within the non-user camp, "31% say at least some of their work can be done with AI." Younger workers are also more likely to say they "feel overwhelmed" about how AI will be adopted in the workplace in the future.

Workers between 18 and 29 are most likely to use AI chatbots at work "at least a few times a month" to research, summarize, and edit content. However, few said these tools "were very or extremely helpful" for increasing productivity or producing higher-quality work.

Most workers (69%) do not use AI chatbots in their workplace. Among those who don't, 36% said they have never used AI chatbots for work purposes because "there isn't any use for them in their job." Another 22% simply said they're not interested in using AI chatbots.

According to the study, most workers -- across all age and education groups -- say that any workplace training they received was unrelated to AI use. Only a quarter (24%) said they received training pertaining to AI use.

Also: OpenAI's Deep Research can save you hours of work - and now it's a lot cheaper to access.

The lack of effective and adequate AI training feeds into AI pessimism in the workplace, and this has much to do with company leaders' lack of a clear vision for how AI can increase workplace productivity. "Employees are legitimately scared that the organization may justify laying them off by saying AI can do this job," notes Hatim Rahman, an associate professor at Northwestern University's Kellogg School of Management.

Rebuilding Alexa: How Amazon is mixing models, agents and browser-use for smarter AI

Amazon is betting on agent interoperability and model mixing to make its new Alexa voice assistant more effective, retooling its flagship voice assistant with agentic capabilities and browser-use tasks.

This new Alexa has been rebranded as Alexa+, and Amazon is emphasizing that this version “does more.” For instance, it can now proactively tell users if a new book from their favorite author is available, or that their favorite artist is in town — and even offer to buy a ticket. Alexa+ reasons through instructions and taps “experts” in different knowledge bases to answer user questions and complete tasks like “Where is the nearest pizza place to the office? Will my coworkers like it? Make a reservation if you think they will.”

In other words, Alexa+ blends AI agents, computer use capabilities and knowledge it learns from the larger Amazon ecosystem to be what Amazon hopes is a more capable and smarter home voice assistant.

Alexa+ currently runs on Amazon’s Nova models and models from Anthropic. However, Daniel Rausch, Amazon’s VP of Alexa and Echo, told VentureBeat that the device will remain “model agnostic” and that the company could introduce other models (at least models available on Amazon Bedrock) to find the best one for each task.

“[It’s about] choosing the right integrations to complete a task, figuring out the right sort of instructions, what it takes to actually complete the task, then orchestrating the whole thing,” said Rausch. “The big thing to understand about it is that Alexa will continue to evolve with the best models available anywhere on Bedrock.”

Model mixing, or model routing, lets enterprises and other customers choose the appropriate AI model on a query-by-query basis. Developers increasingly turn to model mixing to cut costs. After all, not every prompt needs to be answered by a reasoning model, and some models perform certain tasks better.
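
Amazon hasn’t published how its routing works internally, but the basic idea is simple: cheap queries go to a small model, hard ones to a large one. As a purely hypothetical illustration, a router can be as simple as a heuristic over the query; all model names and routing rules below are invented and do not reflect how Alexa+ or Bedrock actually route.

```python
# Hypothetical sketch of query-by-query model routing. The model names and
# the heuristic are invented for illustration only.
def route(query: str) -> str:
    """Toy heuristic: long or multi-step-sounding queries go to a larger model."""
    words = set(query.lower().replace("?", "").split())
    multi_step_markers = {"why", "plan", "compare", "will"}
    if len(query.split()) > 20 or words & multi_step_markers:
        return "large-reasoning-model"   # multi-step tasks get a bigger model
    return "small-fast-model"            # simple lookups stay cheap

print(route("Turn on the lights"))            # → small-fast-model
print(route("Will my coworkers like it?"))    # → large-reasoning-model
```

Production routers typically use a trained classifier rather than keyword rules, but the cost trade-off is the same: only send a query to an expensive model when the cheap one is likely to fail.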

Amazon’s cloud and AI unit, AWS, has long been a proponent of model mixing. Recently, it unveiled a feature on Bedrock called Intelligent Prompt Routing, which directs prompts to the model and model size best suited to resolve the query.

And it could be working. “I can tell you that I cannot say for any given response from Alexa on any given task what model it’s using,” said Rausch.

Agentic interoperability and orchestration

Rausch said Alexa+ brings agents together in three ways: the first is the traditional API; the second is deploying agents that can navigate websites and apps, like Anthropic’s Computer Use; the third is connecting agents to other agents.

“But at the center of it all, orchestrating across all those different kinds of experiences, are these baseline, very capable, state-of-the-art LLMs,” said Rausch.

He added that if a third-party application already has its own agent, that agent can still talk to the agents working inside Alexa+ even if the external agent was built using a different model.

Rausch emphasized that the Alexa team used Bedrock’s tools and technology, including new multi-agent orchestration tools.

Anthropic CPO Mike Krieger told VentureBeat that even earlier versions of Claude wouldn’t have been able to accomplish what Alexa+ wants.

“A really interesting ‘Why now?’ moment is apparent in the demo, because, of course, the models have gotten better,” noted Krieger. “But if you tried to do this with [website] Sonnet or our [website] level models, I think you’d struggle in a lot of ways to use a lot of different tools all at once.”

Although neither Rausch nor Krieger would confirm which specific Anthropic model Amazon used to build Alexa+, it’s worth pointing out that Anthropic released Claude [website] Sonnet on Monday, and it is available on Bedrock.

Many users’ first brush with AI came through voice assistants like Alexa, Google Home or Apple’s Siri. These let people outsource some tasks, like turning on lights. I don’t own an Alexa or Google Home device, but I learned how convenient one could be when staying at a hotel recently. I could tell the Alexa to stop the alarm, turn on the lights and open the curtains while still under the covers.

But while Alexa, Google Home devices, and Siri became ubiquitous in people’s lives, they began showing their age when generative AI became popular. Suddenly, people wanted more real-time answers from AI assistants and demanded smarter task resolutions, such as adding multiple meetings to calendars without the need for much prompting.

Amazon admitted that the rise of gen AI, especially agents, has made it possible for Alexa to finally meet its potential.

“Until this moment, we were limited by the technology in what Alexa could be,” Panos Panay, Amazon’s devices and services SVP, said during a demo.

Rausch said the hope is that Alexa+ continues to improve, adds new models and makes more people comfortable with what the technology can do.
