4-Dimensional Data Visualization: Time in Bubble Charts

Bubble Charts elegantly compress large amounts of information into a single visualization, with bubble size adding a third dimension. However, comparing “before” and “after” states is often crucial. To address this, we propose adding a transition between these states, creating an intuitive user experience.
Since we couldn’t find a ready-made solution, we developed our own. The challenge turned out to be fascinating and required refreshing some mathematical concepts.
Without a doubt, the most challenging part of the visualization is the transition between two circles — before and after states. To simplify, we focus on solving a single case, which can then be extended in a loop to generate the necessary number of transitions.
To build such a figure, let’s first decompose it into three parts: two circles and a polygon that connects them (in gray).
Base element decomposition, image by Author.
Building two circles is quite simple — we know their centers and radii. The remaining task is to construct a quadrilateral polygon, which has the following form:
The construction of this polygon reduces to finding the coordinates of its vertices. This is the most interesting task, and we will solve it further.
From polygon to tangent lines, image by Author.
To calculate the distance from a point (x1, y1) to the line y = a·x + b (in general form, a·x − y + b = 0), the formula is:
d = |a·x1 + b − y1| / √(a² + 1)
Distance from point to a line, image by Author.
In our case, the distance d is equal to the circle radius r. Hence:
|a·x1 + b − y1| / √(a² + 1) = r
After multiplying both sides of the equation by √(a² + 1) and squaring, we get:
(a·x1 + b − y1)² = r² · (a² + 1)
After moving everything to one side and setting the equation equal to zero, we get:
(a·x1 + b − y1)² − r² · (a² + 1) = 0
Since we have two circles and need a line tangent to both, we have the following system of equations:
(a·x1 + b − y1)² − r1² · (a² + 1) = 0
(a·x2 + b − y2)² − r2² · (a² + 1) = 0
This works great, but the problem is that we have 4 possible tangent lines in reality:
All possible tangent lines, image by Author.
And we need to choose just 2 of them — external ones.
To do this we need to check each tangent and each circle center and determine if the line is above or below the point:
Check if line is above or below the point, image by Author.
We need the two lines that both pass above or both pass below the centers of the circles.
Now, let’s translate all these steps into code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sympy as sp
from scipy.spatial import ConvexHull
import math
from matplotlib import rcParams
import matplotlib.patches as patches

def check_position_relative_to_line(a, b, x0, y0):
    y_line = a * x0 + b
    if y0 > y_line:
        return 1   # point is above the line
    elif y0 < y_line:
        return -1  # point is below the line
    return 0       # point is on the line

def find_tangent_equations(x1, y1, r1, x2, y2, r2):
    a, b = sp.symbols('a b')
    tangent_1 = (a*x1 + b - y1)**2 - r1**2 * (a**2 + 1)
    tangent_2 = (a*x2 + b - y2)**2 - r2**2 * (a**2 + 1)
    eqs_1 = [tangent_2, tangent_1]
    solution = sp.solve(eqs_1, (a, b))
    parameters = [(float(e[0]), float(e[1])) for e in solution]
    # filter just external tangents
    parameters_filtered = []
    for tangent in parameters:
        a = tangent[0]
        b = tangent[1]
        if abs(check_position_relative_to_line(a, b, x1, y1) + check_position_relative_to_line(a, b, x2, y2)) == 2:
            parameters_filtered.append(tangent)
    return parameters_filtered
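To make the behavior concrete, here is a minimal usage sketch with two hypothetical circles (the coordinates and radii below are illustrative, not taken from the article's data):

# hypothetical circles: (0, 0) with r = 1 and (5, 1) with r = 2
tangents = find_tangent_equations(0, 0, 1, 5, 1, 2)
for a, b in tangents:
    print(f"y = {a:.3f} * x + {b:.3f}")  # prints the two external tangent lines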
Now, we just need to find the intersections of the tangents with the circles. These 4 points will be the vertices of the desired polygon.
Substitute the line equation y = a·x + b into the circle equation (x − x0)² + (y − y0)² = r²:
(x − x0)² + (a·x + b − y0)² = r²
Solving this equation for x gives the x-coordinate of the intersection (for a tangent line, it has a single root).
Then, calculate y from the line equation: y = a·x + b.
def find_circle_line_intersection(circle_x, circle_y, circle_r, line_a, line_b):
    x, y = sp.symbols('x y')
    circle_eq = (x - circle_x)**2 + (y - circle_y)**2 - circle_r**2
    # substitute the line equation y = a*x + b into the circle equation
    intersection_eq = circle_eq.subs(y, line_a * x + line_b)
    sol_x_raw = sp.solve(intersection_eq, x)[0]
    try:
        sol_x = float(sol_x_raw)
    except TypeError:
        # tangency can leave a tiny imaginary part due to numerical noise
        sol_x = float(sol_x_raw.as_real_imag()[0])
    sol_y = line_a * sol_x + line_b
    return sol_x, sol_y
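Chaining the two helpers on the same hypothetical circles as above gives the four tangency points that will serve as the polygon vertices:

# hypothetical circles as before: (0, 0, r=1) and (5, 1, r=2)
tangents = find_tangent_equations(0, 0, 1, 5, 1, 2)
vertices = [find_circle_line_intersection(0, 0, 1, a, b) for a, b in tangents] \
         + [find_circle_line_intersection(5, 1, 2, a, b) for a, b in tangents]
print(vertices)  # four (x, y) points, two on each circle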
Now we want to generate sample data to demonstrate the whole chart composition.
Imagine we have 4 users on our platform. We know how many purchases each of them made, the revenue they generated, and their activity on the platform. All these metrics are calculated for 2 periods (let’s call them the pre and post periods).
# data generation
df = pd.DataFrame({
    'user': ['Emily', 'Emily', 'James', 'James', 'Tony', 'Tony', 'Olivia', 'Olivia'],
    'period': ['pre', 'post', 'pre', 'post', 'pre', 'post', 'pre', 'post'],
    'num_purchases': [10, 9, 3, 5, 2, 4, 8, 7],
    'revenue': [70, 60, 80, 90, 20, 15, 80, 76],
    'activity': [100, 80, 50, 90, 210, 170, 60, 55]
})
Let’s assume that “activity” is the area of the bubble. Now, let’s convert it into the radius of the bubble. We will also scale the y-axis.
def area_to_radius(area):
    radius = (area / math.pi) ** 0.5
    return radius

x_alias, y_alias, a_alias = 'num_purchases', 'revenue', 'activity'

# scaling metrics
radius_scaler = 0.1  # placeholder: the original constant was lost in extraction; tune to taste
df['radius'] = df[a_alias].apply(area_to_radius) * radius_scaler
df['y_scaled'] = df[y_alias] / df[x_alias].max()
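As a quick sanity check of the conversion: since area = π·r², an area of π should map to a radius of 1 (before scaling):

print(area_to_radius(math.pi))  # -> 1.0
print(area_to_radius(100))      # -> 5.641..., i.e. sqrt(100 / pi)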
Now let’s build the chart — 2 circles and the polygon.
def draw_polygon(plt, points):
    hull = ConvexHull(points)
    convex_points = [points[i] for i in hull.vertices]
    x, y = zip(*convex_points)
    x += (x[0],)
    y += (y[0],)
    plt.fill(x, y, color='#99d8e1', alpha=1, zorder=1)

# bubble pre
for _, row in df[df.period == 'pre'].iterrows():
    x = row[x_alias]
    y = row.y_scaled
    r = row.radius
    circle = patches.Circle((x, y), r, facecolor='#99d8e1', edgecolor='none', linewidth=0, zorder=2)
    plt.gca().add_patch(circle)

# transition area
for user in df.user.unique():
    user_pre = df[(df.user == user) & (df.period == 'pre')]
    x1, y1, r1 = user_pre[x_alias].values[0], user_pre.y_scaled.values[0], user_pre.radius.values[0]
    user_post = df[(df.user == user) & (df.period == 'post')]
    x2, y2, r2 = user_post[x_alias].values[0], user_post.y_scaled.values[0], user_post.radius.values[0]
    tangent_equations = find_tangent_equations(x1, y1, r1, x2, y2, r2)
    circle_1_line_intersections = [find_circle_line_intersection(x1, y1, r1, eq[0], eq[1]) for eq in tangent_equations]
    circle_2_line_intersections = [find_circle_line_intersection(x2, y2, r2, eq[0], eq[1]) for eq in tangent_equations]
    polygon_points = circle_1_line_intersections + circle_2_line_intersections
    draw_polygon(plt, polygon_points)

# bubble post
for _, row in df[df.period == 'post'].iterrows():
    x = row[x_alias]
    y = row.y_scaled
    r = row.radius
    label = row.user
    circle = patches.Circle((x, y), r, facecolor='#2d699f', edgecolor='none', linewidth=0, zorder=2)
    plt.gca().add_patch(circle)
    plt.text(x, y - r - 0.3, label, fontsize=12, ha='center')  # 0.3 offset is a placeholder
# plot parameters
plt.subplots(figsize=(10, 10))
rcParams['font.family'] = 'DejaVu Sans'
rcParams['font.size'] = 14
plt.grid(color="gray", linestyle=(0, (10, 10)), linewidth=0.5, alpha=0.6, zorder=1)  # linewidth/alpha are placeholders
plt.axvline(x=0, color='white', linewidth=2)
plt.gca().set_facecolor('white')
plt.gcf().set_facecolor('white')

# spines formatting
plt.gca().spines["top"].set_visible(False)
plt.gca().spines["right"].set_visible(False)
plt.gca().spines["bottom"].set_visible(False)
plt.gca().spines["left"].set_visible(False)
plt.tick_params(axis="both", which="both", length=0)

# plot labels
plt.xlabel("Number of purchases")
plt.ylabel("Revenue, $")
plt.title("Product users performance", fontsize=18, color="black")

# axis limits
axis_lim = df[x_alias].max() * 1.2  # 1.2 padding factor is a placeholder
plt.xlim(0, axis_lim)
plt.ylim(0, axis_lim)
A pre-post legend in the bottom-right corner gives the viewer a hint on how to read the chart:
## pre-post legend
# circle 1
legend_position, r1 = (11, 2.2), 0.3  # placeholders: original values lost in extraction
x1, y1 = legend_position[0], legend_position[1]
circle = patches.Circle((x1, y1), r1, facecolor='#99d8e1', edgecolor='none', linewidth=0, zorder=2)
plt.gca().add_patch(circle)
plt.text(x1, y1 + r1 + 0.2, 'Pre', fontsize=12, ha='center', va='center')

# circle 2
x2, y2 = legend_position[0], legend_position[1] - r1*3
r2 = r1 * 0.7  # placeholder ratio
circle = patches.Circle((x2, y2), r2, facecolor='#2d699f', edgecolor='none', linewidth=0, zorder=2)
plt.gca().add_patch(circle)
plt.text(x2, y2 - r2 - 0.2, 'Post', fontsize=12, ha='center', va='center')

# tangents
tangent_equations = find_tangent_equations(x1, y1, r1, x2, y2, r2)
circle_1_line_intersections = [find_circle_line_intersection(x1, y1, r1, eq[0], eq[1]) for eq in tangent_equations]
circle_2_line_intersections = [find_circle_line_intersection(x2, y2, r2, eq[0], eq[1]) for eq in tangent_equations]
polygon_points = circle_1_line_intersections + circle_2_line_intersections
draw_polygon(plt, polygon_points)

# small arrow
plt.annotate('', xytext=(x1, y1), xy=(x2, y1 - r1*2), arrowprops=dict(edgecolor='black', arrowstyle='->', lw=1))
Adding styling and legend, image by Author.
# bubble size legend
legend_areas_original = [150, 50]
legend_position = (11, 7)  # placeholder position
for i in legend_areas_original:
    i_r = area_to_radius(i) * radius_scaler
    circle = patches.Circle((legend_position[0], legend_position[1] + i_r), i_r, fill=False, edgecolor='black', linewidth=0.6)  # linewidth is a placeholder
    plt.gca().add_patch(circle)
    plt.text(legend_position[0], legend_position[1] + 2*i_r, str(i), fontsize=12, ha='center', va='center',
             bbox=dict(facecolor='white', edgecolor='none', boxstyle='round,pad=0.1'))  # pad is a placeholder
legend_label_r = area_to_radius(max(legend_areas_original)) * radius_scaler  # original argument lost in extraction; using the largest legend bubble
plt.text(legend_position[0], legend_position[1] + 2*legend_label_r + 0.3, 'Activity, hours', fontsize=12, ha='center', va='center')  # 0.3 offset is a placeholder
The visualization looks very stylish and concentrates quite a lot of information in a compact form.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sympy as sp
from scipy.spatial import ConvexHull
import math
from matplotlib import rcParams
import matplotlib.patches as patches

def check_position_relative_to_line(a, b, x0, y0):
    y_line = a * x0 + b
    if y0 > y_line:
        return 1   # point is above the line
    elif y0 < y_line:
        return -1  # point is below the line
    return 0       # point is on the line

def find_tangent_equations(x1, y1, r1, x2, y2, r2):
    a, b = sp.symbols('a b')
    tangent_1 = (a*x1 + b - y1)**2 - r1**2 * (a**2 + 1)
    tangent_2 = (a*x2 + b - y2)**2 - r2**2 * (a**2 + 1)
    eqs_1 = [tangent_2, tangent_1]
    solution = sp.solve(eqs_1, (a, b))
    parameters = [(float(e[0]), float(e[1])) for e in solution]
    # filter just external tangents
    parameters_filtered = []
    for tangent in parameters:
        a = tangent[0]
        b = tangent[1]
        if abs(check_position_relative_to_line(a, b, x1, y1) + check_position_relative_to_line(a, b, x2, y2)) == 2:
            parameters_filtered.append(tangent)
    return parameters_filtered

def find_circle_line_intersection(circle_x, circle_y, circle_r, line_a, line_b):
    x, y = sp.symbols('x y')
    circle_eq = (x - circle_x)**2 + (y - circle_y)**2 - circle_r**2
    intersection_eq = circle_eq.subs(y, line_a * x + line_b)
    sol_x_raw = sp.solve(intersection_eq, x)[0]
    try:
        sol_x = float(sol_x_raw)
    except TypeError:
        # tangency can leave a tiny imaginary part due to numerical noise
        sol_x = float(sol_x_raw.as_real_imag()[0])
    sol_y = line_a * sol_x + line_b
    return sol_x, sol_y

def draw_polygon(plt, points):
    hull = ConvexHull(points)
    convex_points = [points[i] for i in hull.vertices]
    x, y = zip(*convex_points)
    x += (x[0],)
    y += (y[0],)
    plt.fill(x, y, color='#99d8e1', alpha=1, zorder=1)

def area_to_radius(area):
    radius = (area / math.pi) ** 0.5
    return radius

# data generation
df = pd.DataFrame({
    'user': ['Emily', 'Emily', 'James', 'James', 'Tony', 'Tony', 'Olivia', 'Olivia', 'Oliver', 'Oliver', 'Benjamin', 'Benjamin'],
    'period': ['pre', 'post', 'pre', 'post', 'pre', 'post', 'pre', 'post', 'pre', 'post', 'pre', 'post'],
    'num_purchases': [10, 9, 3, 5, 2, 4, 8, 7, 6, 7, 4, 6],
    'revenue': [70, 60, 80, 90, 20, 15, 80, 76, 17, 19, 45, 55],
    'activity': [100, 80, 50, 90, 210, 170, 60, 55, 30, 20, 200, 120]
})

x_alias, y_alias, a_alias = 'num_purchases', 'revenue', 'activity'

# scaling metrics
radius_scaler = 0.1  # placeholder: original constant lost in extraction
df['radius'] = df[a_alias].apply(area_to_radius) * radius_scaler
df['y_scaled'] = df[y_alias] / df[x_alias].max()

# plot parameters
plt.subplots(figsize=(10, 10))
rcParams['font.family'] = 'DejaVu Sans'
rcParams['font.size'] = 14
plt.grid(color="gray", linestyle=(0, (10, 10)), linewidth=0.5, alpha=0.6, zorder=1)  # linewidth/alpha are placeholders
plt.axvline(x=0, color='white', linewidth=2)
plt.gca().set_facecolor('white')
plt.gcf().set_facecolor('white')

# spines formatting
plt.gca().spines["top"].set_visible(False)
plt.gca().spines["right"].set_visible(False)
plt.gca().spines["bottom"].set_visible(False)
plt.gca().spines["left"].set_visible(False)
plt.tick_params(axis="both", which="both", length=0)

# plot labels
plt.xlabel("Number of purchases")
plt.ylabel("Revenue, $")
plt.title("Product users performance", fontsize=18, color="black")

# axis limits
axis_lim = df[x_alias].max() * 1.2  # 1.2 padding factor is a placeholder
plt.xlim(0, axis_lim)
plt.ylim(0, axis_lim)

# bubble pre
for _, row in df[df.period == 'pre'].iterrows():
    x = row[x_alias]
    y = row.y_scaled
    r = row.radius
    circle = patches.Circle((x, y), r, facecolor='#99d8e1', edgecolor='none', linewidth=0, zorder=2)
    plt.gca().add_patch(circle)

# transition area
for user in df.user.unique():
    user_pre = df[(df.user == user) & (df.period == 'pre')]
    x1, y1, r1 = user_pre[x_alias].values[0], user_pre.y_scaled.values[0], user_pre.radius.values[0]
    user_post = df[(df.user == user) & (df.period == 'post')]
    x2, y2, r2 = user_post[x_alias].values[0], user_post.y_scaled.values[0], user_post.radius.values[0]
    tangent_equations = find_tangent_equations(x1, y1, r1, x2, y2, r2)
    circle_1_line_intersections = [find_circle_line_intersection(x1, y1, r1, eq[0], eq[1]) for eq in tangent_equations]
    circle_2_line_intersections = [find_circle_line_intersection(x2, y2, r2, eq[0], eq[1]) for eq in tangent_equations]
    polygon_points = circle_1_line_intersections + circle_2_line_intersections
    draw_polygon(plt, polygon_points)

# bubble post
for _, row in df[df.period == 'post'].iterrows():
    x = row[x_alias]
    y = row.y_scaled
    r = row.radius
    label = row.user
    circle = patches.Circle((x, y), r, facecolor='#2d699f', edgecolor='none', linewidth=0, zorder=2)
    plt.gca().add_patch(circle)
    plt.text(x, y - r - 0.3, label, fontsize=12, ha='center')  # 0.3 offset is a placeholder

# bubble size legend
legend_areas_original = [150, 50]
legend_position = (11, 7)  # placeholder position
for i in legend_areas_original:
    i_r = area_to_radius(i) * radius_scaler
    circle = patches.Circle((legend_position[0], legend_position[1] + i_r), i_r, fill=False, edgecolor='black', linewidth=0.6)
    plt.gca().add_patch(circle)
    plt.text(legend_position[0], legend_position[1] + 2*i_r, str(i), fontsize=12, ha='center', va='center',
             bbox=dict(facecolor='white', edgecolor='none', boxstyle='round,pad=0.1'))
legend_label_r = area_to_radius(max(legend_areas_original)) * radius_scaler  # original argument lost in extraction
plt.text(legend_position[0], legend_position[1] + 2*legend_label_r + 0.3, 'Activity, hours', fontsize=12, ha='center', va='center')

## pre-post legend
# circle 1
legend_position, r1 = (11, 2.2), 0.3  # placeholders: original values lost in extraction
x1, y1 = legend_position[0], legend_position[1]
circle = patches.Circle((x1, y1), r1, facecolor='#99d8e1', edgecolor='none', linewidth=0, zorder=2)
plt.gca().add_patch(circle)
plt.text(x1, y1 + r1 + 0.2, 'Pre', fontsize=12, ha='center', va='center')
# circle 2
x2, y2 = legend_position[0], legend_position[1] - r1*3
r2 = r1 * 0.7  # placeholder ratio
circle = patches.Circle((x2, y2), r2, facecolor='#2d699f', edgecolor='none', linewidth=0, zorder=2)
plt.gca().add_patch(circle)
plt.text(x2, y2 - r2 - 0.2, 'Post', fontsize=12, ha='center', va='center')
# tangents
tangent_equations = find_tangent_equations(x1, y1, r1, x2, y2, r2)
circle_1_line_intersections = [find_circle_line_intersection(x1, y1, r1, eq[0], eq[1]) for eq in tangent_equations]
circle_2_line_intersections = [find_circle_line_intersection(x2, y2, r2, eq[0], eq[1]) for eq in tangent_equations]
polygon_points = circle_1_line_intersections + circle_2_line_intersections
draw_polygon(plt, polygon_points)
# small arrow
plt.annotate('', xytext=(x1, y1), xy=(x2, y1 - r1*2), arrowprops=dict(edgecolor='black', arrowstyle='->', lw=1))

# y axis formatting
max_y = df[y_alias].max()
nearest_power_of_10 = 10 ** math.ceil(math.log10(max_y))  # ceil is a reconstruction; gives 100 for max_y = 90
ticks = [round(nearest_power_of_10/5 * i, 2) for i in range(0, 6)]
yticks_scaled = [t / df[x_alias].max() for t in ticks]
yticklabels = [str(i) for i in ticks]
yticklabels[0] = ''
plt.yticks(yticks_scaled, yticklabels)

plt.savefig("bubble_chart.png", bbox_inches='tight', dpi=300)  # placeholder filename
Adding a time dimension to bubble charts enhances their ability to convey dynamic data changes intuitively. Smooth transitions between “before” and “after” states help viewers better understand trends and comparisons over time.
While no ready-made solutions were available, developing a custom approach proved both challenging and rewarding, requiring mathematical insights and careful animation techniques. The proposed method can be easily extended to various datasets, making it a valuable tool for Data Visualization in business, science, and analytics.
The billion-dollar AI company no one is talking about - and why you should care

What if I told you that the biggest winner in this AI arms race isn't OpenAI, Meta, Google… or even DeepSeek?
This company is quietly winning, and nobody's talking about it. 🫣.
Also: From zero to millions? How regular people are cashing in on AI.
And when I say winning, I don't mean hype.
"One day, a magical enterprise will change the world and revolutionize AI blah blah blah…"
They're getting paid. Today. Money in the bank. Not "potentially" winning. They are already winning.
But they're chilling in the cut like peroxide.
It ain't Nvidia, either. When I say zero hype, I MEAN ZERO. It's giving diamond in the rough vibes.
Also: 5 ways AI can help with your taxes - and 10 major mistakes to avoid.
The wildest part is they have been around for 20+ years and only now started making real money.
In this article, we're gonna break it all down.
Buckle up. This is gonna be a good one. Like a Martin Scorsese movie, I PROMISE YOU… you won't see this plot twist coming. 🤞.
To understand how I discovered this juggernaut hiding in plain sight, first, you need to know who I am and what I do for a living.
My name is Lester, but feel free to call me Les. 👋.
I'm a founder with a successful exit under my belt. These days, I'm the exec chair for a group of ecom brands, but at my core, I'm an award-winning performance marketer.
Also: AI isn't the next big thing - here's what is.
Needless to say, data and insights are my jam. We operate more like a data firm than an ecom brand. Our secret sauce? Pairing data and insights with ideas that generate revenue.
Before I jump into the who, I need to bring you up to speed.
I know I need to come out with it, but this context is critical for you to spot the next trend on your own.
As you know, there is so much hype and speculation about AI that DeepSeek made an announcement and the market crashed by trillions of dollars. Yes, trillion with a T. 🤯.
There is no denying that AI is here and will play a role in our future, but how do you identify the real from the fake?
Also: 3 lucrative side hustles you can start right now with OpenAI's Sora video generator.
Who got rich during the gold rush? The guy selling the shovel.
Who got rich during the dot-com bubble? The telecom companies.
When investing, whether financially or allocating resources, I prefer to bet on the industry rather than a single firm. This is especially true in the early stages when success depends on many factors aligning.
So, with that mindset, I started thinking: If we're betting on AI as an industry, who wins? Where's the opportunity?
To grasp why this sleeper corporation is winning, you must first understand how AI is shaping up behind the scenes.
Also: The best AI for coding in 2025 (and what not to use).
Right now, AI is in a hyper-growth phase. It's more like the early days of the dot-com boom: money flying everywhere, but very few companies are actually making money.
At the core of AI, there are three key players:
The Model Builders – OpenAI, Google DeepMind, Anthropic. These are the companies building massive AI models. They need a considerable amount of data and computing power, meaning they are currently spending way more than they're making.
The Infrastructure Providers – Nvidia, AWS, Microsoft Azure. These companies sell the computing power, cloud storage, and GPUs that AI models desperately need. They're making money, but they're not the whole story.
The Data Owners – This is where things get interesting. AI models need training data to get smarter. The problem? Most of the internet's data is free and unstructured.
Using the gold rush as our blueprint, we see the real opportunities fall into two categories: Infrastructure and Data Owners.
I immediately ruled out Nvidia. Why? Too obvious. 😅.
Besides, I'm not looking for a stock pick; I'm looking for opportunities from which I can benefit.
With that in mind, what I discovered was mind-blowing. I found a company that:
✅ Doesn't have to build AI models even though they probably could.
✅ All they have to do is license the data they already have.
So, who is this winner no one is talking about?
It's Reddit. Yup. That Reddit. I told you this was a Martin Scorsese plot twist.
If you're unfamiliar with Reddit, it is a social media platform and discussion site where people share content, discuss topics, and vote on posts. It's often called "The Front Page of the Internet."
Founded in 2005 by Alexis Ohanian and Steve Huffman, it was initially meant to be a food-ordering app called My Mobile Menu.
Good call pivoting to social media, lol. 🤣.
Here is why they are winning… AI models need human-generated content for training. Reddit just so happens to have one of the largest human conversation datasets in the world.
What makes this so valuable is the quality of the conversations. The internet is full of junk, but Reddit's discussions are real, unfiltered, and human. Unlike algorithm-driven platforms, this is raw, natural interaction, which is precisely what AI needs to understand human behavior.
Reddit is the shovel of this AI revolution. 🤓☝️.
In May 2024, Reddit signed a deal with OpenAI to sell access to its data, which led to its first profitable quarter ever in Q3 2024, with 68% year-over-year revenue growth and positive net income.
In the last six months, Reddit's stock is up 236% at the time of writing.
Also: Reddit's latest AI update makes finding the answers you want much easier.
The revenue growth comes from ads and data licensing, with AI companies paying top dollar for Reddit's content.
Needless to say, I'm bullish on Reddit. 🚀.
Reddit has become crucial for AI development, and no one else comes close to having this level of rich, human-generated data. It's seen as the last town square for free human conversation.
While it's evident that Reddit is worth watching in the stock market (not financial advice), its newfound profitability means it will likely invest more in the platform to improve user experience, add new capabilities, and scale its reach.
But here is what I find really interesting.
Also: Cerebras CEO on DeepSeek: Every time computing gets cheaper, the market gets bigger.
In 2019, Reddit had 430 million monthly active users. By the end of 2024, that number had passed a billion. This is significant because Reddit is a mature platform, not a startup. This kind of growth at this stage is meaningful and something to pay attention to.
I'm totally speculating here, but this growth ultimately comes from people craving real human connections.
No bots. No algorithm force-feeding them content they don't want. 🤗.
This is where I see an opportunity to market your product or service.
Wherever people gather, there's potential to connect with the right audience. Now that Reddit is profitable, it's likely to invest in the platform, potentially making it a bigger contender in the digital marketing space alongside Meta and Google.
Also: The work tasks people use Claude AI for most.
That said, Reddit's community isn't like any other. You neeeeeeeeedddddddddd to be authentic or you are going to get cooked.
Focus on providing value, and don't try to be too salesy. Find ways to add to the conversation rather than take.
Here is a link to some Reddit case studies so you can get a feel of what works. 📈.
We should keep an eye on Reddit from a stock performance perspective and as a potential new traffic source to reduce dependence on Meta and Google. 👀.
The real opportunity isn't just in AI or any one platform but in how we adapt as things shift. Platforms come and go, algorithms change, and tech keeps evolving, but the goal is always to reach the right people in a way that actually matters.
Manage Environment Variables with Pydantic

Developers work on applications that are supposed to be deployed on some server so that anyone can use them. Typically, on the machine where these apps live, developers set up environment variables that allow the app to run. These variables can be API keys for external services, the URL of your database, and much more.
For local development though, it is really inconvenient to declare these variables on the machine because it is a slow and messy process. So I’d like to share in this short tutorial how to use Pydantic to handle environment variables in a secure way.
What you commonly do in a Python project is store all your environment variables in a file named .env. This is a text file containing all the variables in a key=value format. You can also use the value of one variable to declare another by leveraging the ${} syntax.
# .env file
OPENAI_API_KEY="sk-your-private-key"
OPENAI_MODEL_ID="gpt-4o-mini"

# Development settings
DOMAIN=example.org
ADMIN_EMAIL=admin@${DOMAIN}

WANDB_API_KEY="your-private-key"
WANDB_PROJECT="myproject"
WANDB_ENTITY="my-entity"
SERPAPI_KEY="your-api-key"
PERPLEXITY_TOKEN="your-api-token"
Be aware the .env file should remain private, so it is essential that this file is mentioned in your .gitignore file, to be sure that you never push it on GitHub, otherwise, other developers could steal your keys and use the tools you’ve paid for.
To ease the life of developers who will clone your repository, you could include an env.example file in your project. This is a file containing only the keys of what is supposed to go into the .env file. In this way, other people know what APIs, tokens, or secrets in general they need to set to make the scripts work.
# env.example
OPENAI_API_KEY=""
OPENAI_MODEL_ID=""
DOMAIN=""
ADMIN_EMAIL=""
WANDB_API_KEY=""
WANDB_PROJECT=""
WANDB_ENTITY=""
SERPAPI_KEY=""
PERPLEXITY_TOKEN=""
python-dotenv is the library you use to load the variables declared in the .env file. To install this library:

pip install python-dotenv
Now you can use load_dotenv to load the variables, then get a reference to them with the os module.
import os
from dotenv import load_dotenv

load_dotenv()

OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
OPENAI_MODEL_ID = os.getenv('OPENAI_MODEL_ID')
This method will first look into your .env file to load the variables you've declared there. If the file doesn't exist, the variables are taken from the host machine. This means you can use the .env file for local development, while when the code is deployed to a host environment like a virtual machine or Docker container, the environment variables defined in the host environment are used directly. Note that, by default, load_dotenv does not override variables that are already set in the host environment.
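A small sketch of that precedence, assuming a .env file that contains OPENAI_MODEL_ID="gpt-4o-mini":

import os
from dotenv import load_dotenv

os.environ['OPENAI_MODEL_ID'] = 'gpt-4o'  # pretend this was set by the host

load_dotenv()  # override=False by default, so the host value wins
print(os.getenv('OPENAI_MODEL_ID'))  # -> gpt-4o

load_dotenv(override=True)  # force .env values to take precedence
print(os.getenv('OPENAI_MODEL_ID'))  # -> gpt-4o-mini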
Pydantic is one of the most used libraries in Python for data validation. It is also used for serializing and deserializing classes into JSON and back. It automatically generates JSON schema, reducing the need for manual schema management. It also provides built-in data validation, ensuring that the serialized data adheres to the expected format. Lastly, it easily integrates with popular web frameworks like FastAPI.
pydantic-settings is the Pydantic package for loading and validating settings or config classes from environment variables.
We are going to create a class named Settings that inherits from BaseSettings. This makes the default behaviour for every field to be read from the environment (and from the .env file, when configured). If no variable is found, the default value is used, if provided.
from pydantic_settings import BaseSettings, SettingsConfigDict
from pydantic import (
    AliasChoices,
    Field,
    RedisDsn,
)

class Settings(BaseSettings):
    auth_key: str = Field(validation_alias='my_auth_key')
    api_key: str = Field(alias='my_api_key')

    redis_dsn: RedisDsn = Field(
        'redis://user:pass@localhost:6379/1',  # default value
        validation_alias=AliasChoices('service_redis_dsn', 'redis_url'),
    )

    model_config = SettingsConfigDict(env_prefix='my_prefix_', env_file='.env')  # env_file tells pydantic-settings to read the .env file as well
In the Settings class above we have defined several fields. The Field class is used to provide extra information about an attribute.
In our case, we set up a validation_alias. So the variable name to look for in the .env file is overridden: the environment variable my_auth_key will be read instead of auth_key.
You can also provide multiple aliases to look for in the .env file by leveraging AliasChoices(choice1, choice2); the first one found wins.
The last attribute, model_config, configures the settings class itself. The env_prefix option is handy when all the variables for a particular topic (e.g. the connection to a db) share a common prefix: every field without an explicit alias is read from the environment variable env_prefix + field name, so a field named host would be read from my_prefix_host.
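For instance, given the Settings class above, a .env file along these lines would populate every field (the values are illustrative):

# .env (illustrative values)
my_auth_key="some-auth-key"        # matched via validation_alias
my_api_key="some-api-key"          # matched via alias
service_redis_dsn="redis://user:pass@remote-host:6379/1"  # first match in AliasChoices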
The next step would be to actually instantiate and use these settings in your Python project.
from pydantic_settings import BaseSettings, SettingsConfigDict
from pydantic import (
    AliasChoices,
    Field,
    RedisDsn,
)

class Settings(BaseSettings):
    auth_key: str = Field(validation_alias='my_auth_key')
    api_key: str = Field(alias='my_api_key')

    redis_dsn: RedisDsn = Field(
        'redis://user:pass@localhost:6379/1',  # default value
        validation_alias=AliasChoices('service_redis_dsn', 'redis_url'),
    )

    model_config = SettingsConfigDict(env_prefix='my_prefix_', env_file='.env')

# create a settings object immediately
settings = Settings()
Now we can use the settings in other parts of our codebase.

from settings import settings  # settings.py is the module where Settings is instantiated

print(settings.auth_key)
You finally have easy access to your settings, and Pydantic helps you validate that the secrets have the correct format. For more advanced validation tips, refer to the Pydantic documentation: https://docs.pydantic.dev.
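As a quick illustration of that validation, here is a minimal sketch (assuming the Settings class above): if the environment supplies a malformed Redis URL, instantiation fails immediately with a descriptive error instead of breaking deep inside your app.

import os
from pydantic import ValidationError

os.environ['my_auth_key'] = 'some-auth-key'
os.environ['my_api_key'] = 'some-api-key'
os.environ['service_redis_dsn'] = 'not-a-valid-url'  # deliberately broken

try:
    settings = Settings()
except ValidationError as e:
    print(e)  # the error names redis_dsn and explains the URL is invalid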
Managing the configuration of a project is a boring but crucial part of software development. Secrets like API keys and db connections are what usually power your application. You could naively hardcode these variables in your code and it would still work, but for obvious reasons this is not good practice. In this article, I gave an introduction to using pydantic-settings for a structured and safe way to handle your configurations.
Market Impact Analysis
Market Growth Trend
2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 |
---|---|---|---|---|---|---|
23.1% | 27.8% | 29.2% | 32.4% | 34.2% | 35.2% | 35.6% |
Quarterly Growth Rate
Q1 2024 | Q2 2024 | Q3 2024 | Q4 2024 |
---|---|---|---|
32.5% | 34.8% | 36.2% | 35.6% |
Market Segments and Growth Drivers
Segment | Market Share | Growth Rate |
---|---|---|
Machine Learning | 29% | 38.4% |
Computer Vision | 18% | 35.7% |
Natural Language Processing | 24% | 41.5% |
Robotics | 15% | 22.3% |
Other AI Technologies | 14% | 31.8% |
Competitive Landscape Analysis
Company | Market Share |
---|---|
Google AI | 18.3% |
Microsoft AI | 15.7% |
IBM Watson | 11.2% |
Amazon AI | 9.8% |
OpenAI | 8.4% |
Future Outlook and Predictions
The Dimensional Data Visualization landscape is evolving rapidly, driven by technological advancements and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:
Technology Maturity Curve
Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:
Innovation Trigger
- Generative AI for specialized domains
- Blockchain for supply chain verification
Peak of Inflated Expectations
- Digital twins for business processes
- Quantum-resistant cryptography
Trough of Disillusionment
- Consumer AR/VR applications
- General-purpose blockchain
Slope of Enlightenment
- AI-driven analytics
- Edge computing
Plateau of Productivity
- Cloud infrastructure
- Mobile applications
Technology Evolution Timeline
- Improved generative models
- Specialized AI applications
- AI-human collaboration systems
- Multimodal AI platforms
- General AI capabilities
- AI-driven scientific breakthroughs
Expert Perspectives
Leading experts in the AI tech sector provide diverse perspectives on how the landscape will evolve over the coming years:
"The next frontier is AI systems that can reason across modalities and domains with minimal human guidance."
— AI Researcher
"Organizations that develop effective AI governance frameworks will gain competitive advantage."
— Industry Analyst
"The AI talent gap remains a critical barrier to implementation for most enterprises."
— Chief AI Officer
Areas of Expert Consensus
- Acceleration of Innovation: The pace of technological evolution will continue to increase
- Practical Integration: Focus will shift from proof-of-concept to operational deployment
- Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
- Regulatory Influence: Regulatory frameworks will increasingly shape technology development
Short-Term Outlook (1-2 Years)
In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing AI tech challenges:
- Improved generative models
- Specialized AI applications
- Enhanced AI ethics frameworks
These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.
Mid-Term Outlook (3-5 Years)
As technologies mature and organizations adapt, more substantial transformations will emerge in how these technologies are approached and implemented:
- AI-human collaboration systems
- Multimodal AI platforms
- Democratized AI development
This period will see significant changes in system architecture and operational models, with increasing automation and integration between previously siloed functions. Organizations will shift from reactive to proactive postures.
Long-Term Outlook (5+ Years)
Looking further ahead, more fundamental shifts will reshape how these technologies are conceptualized and implemented across digital ecosystems:
- General AI capabilities
- AI-driven scientific breakthroughs
- New computing paradigms
These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach these capabilities as a fundamental business function rather than a technical discipline.
Key Risk Factors and Uncertainties
Several critical factors could significantly impact the trajectory of AI tech evolution:
Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.
Alternative Future Scenarios
The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:
Optimistic Scenario
Responsible AI driving innovation while minimizing societal disruption
Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.
Probability: 25-30%
Base Case Scenario
Incremental adoption with mixed societal impacts and ongoing ethical challenges
Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.
Probability: 50-60%
Conservative Scenario
Technical and ethical barriers creating significant implementation challenges
Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.
Probability: 15-20%
Scenario Comparison Matrix
Factor | Optimistic | Base Case | Conservative |
---|---|---|---|
Implementation Timeline | Accelerated | Steady | Delayed |
Market Adoption | Widespread | Selective | Limited |
Technology Evolution | Rapid | Progressive | Incremental |
Regulatory Environment | Supportive | Balanced | Restrictive |
Business Impact | Transformative | Significant | Modest |
Transformational Impact
Redefinition of knowledge work, automation of creative processes. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.
The convergence of multiple technological trends, including artificial intelligence, quantum computing, and ubiquitous connectivity, will create both unprecedented challenges and innovative capabilities.
Implementation Challenges
Ethical concerns, computing resource limitations, talent shortages. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.
Regulatory uncertainty, particularly around emerging technologies like AI, will require flexible architectures that can adapt to evolving compliance requirements.
Key Innovations to Watch
Multimodal learning, resource-efficient AI, transparent decision systems. Organizations should monitor these developments closely to maintain competitive advantages.
Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.
Technical Glossary
Key technical terms and definitions to help understand the technologies discussed in this article.
Understanding the following technical concepts is essential for grasping the full implications of the technologies discussed in this article. These definitions provide context for both technical and non-technical readers.