Ryan welcomes Illia Polosukhin, co-author of the original Transformers paper, "Attention Is All You Need," and co-founder of NEAR, to the show to talk about the development and impact of the Transformer model, his perspective on modern AI and machine learning as an early innovator of the tech, and the importance of decentralized, user-owned AI built on the blockchain.
NEAR is the blockchain for AI, enabling AI agents to transact freely across networks.
Connect with Illia on LinkedIn and X, and read the original Transformers paper that Illia co-authored in 2017.
Today’s shoutout goes to Populous badge winner Adi Lester for answering the question “DataTable - foreach Row, EXCEPT FIRST ONE.”
[Intro music]
RYAN DONOVAN: Hello everyone and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I am your host, Ryan Donovan, and I'm joined today by one of the co-authors of the original Transformers paper and co-founder of NEAR, Illia Polosukhin. Hey Illia, welcome to the show.
ILLIA POLOSUKHIN: Hey, Ryan. Great to be here.
RYAN DONOVAN: So, top of the show, we like to get a sense of how our guests got to where they are today. What's your journey in tech and software?
ILLIA POLOSUKHIN: I started programming, I guess, when I was 10, in middle school. We had a group of kids, effectively, who were hacking first in Pascal, then Delphi and C++, and we were also doing programming competitions, some national-level programming competitions. And I actually got excited about two things: gaming, building games, and machine learning.
And so I tried to build a game in high school. I realized that it's really hard to do it alone.
RYAN DONOVAN: (laughs)
ILLIA POLOSUKHIN: And so I started focusing more on machine learning, and I actually got a job, remotely (I'm originally from Ukraine), with a machine learning company out of San Diego. It's a pretty OG company; they'd been doing econometrics and machine learning for 30 years, even at that time.
RYAN DONOVAN: Right.
ILLIA POLOSUKHIN: So it was a great learning experience, and through them I moved to the U.S. professionally. And then when I saw the cat neuron paper from Google back in 2013, I was like, okay, deep learning is happening. That was my beacon, because it was effectively the first time we saw a model trained without any supervision learning concepts that we can map back to human understanding. And so I'm like, okay, I wanna do that, but I wanna do that for language.
RYAN DONOVAN: Right.
ILLIA POLOSUKHIN: And for me, language seemed like the kind of rich environment where we can test for intelligence, right? I'd say there are thousands of species that can see, and there is arguably only one species that can talk and transfer knowledge.
RYAN DONOVAN: Right.
ILLIA POLOSUKHIN: And so I joined Google Research to work on natural language understanding, and specifically question answering. We were building a lot of different models to really power that using deep learning.
But the challenge we had was that it was really hard to put that into production, because Google has very stringent requirements on latency. If you think of when you're typing things, it should give you a response immediately. And the models we were using were these recurrent neural networks, which are like from the nineties, effectively.
RYAN DONOVAN: Right.
ILLIA POLOSUKHIN: They did not really scale very well with the kind of context size, right? Effectively, they would read one word at a time, and if you throw multiple Wikipedia pages or something in context, it'll take a while to do this. So we ended up doing a lot of very crude approximations so we can actually launch something.
And that actually was the route to Transformer, because one fateful afternoon at lunch, Jakob Uszkoreit suggested this idea: “Hey, what if we just remove recurrence? What if the model, in one shot, reads the whole document and then tries to answer the question or translate or whatever the problem is?”
And so I prototyped a version of this right after the lunch. It wasn't ridiculously good, but it was getting some signal even in that first version, and then it took a lot of experiments and tuning to get it to state of the art. But that effectively became Transformer.
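(For readers who want to see the idea in code: a minimal sketch of the scaled dot-product attention at the core of the published Transformer, in NumPy. This is textbook attention, not the original Google prototype.)

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position looks at every
    other position in one shot, instead of one token at a time like an RNN."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                            # weighted mix of value vectors

# Toy self-attention over 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(attention(x, x, x).shape)  # (4, 8)
```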
RYAN DONOVAN: Yeah, that's interesting. It started really early in the machine learning journey. I remember, I had an AI class in college that introduced neural nets and it was just a sum function, right? And it was beyond my pay grade for sure.
ILLIA POLOSUKHIN: (laughs)
RYAN DONOVAN: But yeah, it's interesting how much that Transformer paper has affected things. What's your take on the sort of– what has it been eight years since that came out?
ILLIA POLOSUKHIN: Effectively, yeah. I mean, it's insane to see, obviously. But it's interesting: as a machine learning researcher, as an AI researcher who's been through the deep learning curve, it is kind of expected. You know, if you're sitting there in 2013, 2014, 2015, really plotting the evolution of these models and extrapolating, I would say it's actually been slower than expected.
RYAN DONOVAN: Oh really? (laughs)
ILLIA POLOSUKHIN: Yeah. I mean, part of the reason– so I left Google in 2017, after the paper, and I wanted to effectively leverage this technology to build what's now called vibe coding. Right?
RYAN DONOVAN: Right.
ILLIA POLOSUKHIN: Back then it was natural language to code, and the expectation was that the evolution of AI was picking up and accelerating, so we wanted to catch that wave and build a product into it. And it actually didn't happen, right? That wave only happened in ‘22, effectively. So we were four years too early, and not just us; there was a whole community of people who were effectively trying to ride that wave, and it didn't happen at the time.
RYAN DONOVAN: Yeah. Well, I mean, there was a sort of Pandora's box moment when ChatGPT came out, right?
ILLIA POLOSUKHIN: Correct. Yes. I think as soon as you see, okay, this is working, everybody's like, okay, now reproducing that is way easier than getting to that result the first time. Same as Transformers, right? Implementing Transformers is super easy; coming up with the idea and getting it to work was a lot of hard work.
But I think in general, yes, as AI researchers, we all kind of expected this to happen. Now, specifically on architectures: we used to come up with new model architectures at every lunch. So it is surprising that nothing dramatically different came up after that to improve things further. But I think part of it is just that it's a very simple architecture, right? It was more about removing things than adding things, right?
RYAN DONOVAN: (Laughs) Right.
ILLIA POLOSUKHIN: And this is in general like a lesson in life. You know, the more you remove, the better things become.
RYAN DONOVAN: Right, because a lot of NLP before that was like Markov chains and a lot of rule-based stuff, right?
ILLIA POLOSUKHIN: A lot of rule-based, a lot of like human– like you needed to have an actual PhD in, you know, linguistics to actually do stuff. (Laughs). Now you just run it on your computer for a while and it just understands everything perfectly without–
RYAN DONOVAN: Yeah.
ILLIA POLOSUKHIN: Yeah. And I think similarly, you know, what OpenAI did, it's a lot of hard work to get the models from “hey, here's a Transformer that can do basic machine translation” to “now it understands everything in the world.”
RYAN DONOVAN: Right?
ILLIA POLOSUKHIN: But again, if you look at it from “okay, how do we reproduce it?”, it's actually pretty simple. So the reality is that we keep seeing these innovations happening in new areas, where, okay, now that we have the previous component, the next component is actually very simple, but it's a lot of hard work to make it work.
Right? So the RL, like the DeepSeek and o-series models, it's the same thing. Everybody tried RL before; RL's been tried for years and years and years, and it never worked very well. Even when ChatGPT came out, people were trying RL on smaller models and it didn't work yet.
And so it's really this accumulation of innovations and then a small tweak, you know, like GRPO from the DeepSeek paper, right? That method is also very simple. It's just like one formula.
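(The “one formula,” for the curious: GRPO, as introduced in the DeepSeekMath paper, replaces PPO's learned value model with a group-normalized advantage. Shown here in simplified form; the full objective applies the ratio per token.)

```latex
A_i = \frac{r_i - \operatorname{mean}(\{r_1,\dots,r_G\})}{\operatorname{std}(\{r_1,\dots,r_G\})},
\qquad
\mathcal{J}(\theta) = \mathbb{E}\left[\frac{1}{G}\sum_{i=1}^{G}
  \min\Big(\rho_i A_i,\ \operatorname{clip}(\rho_i,\,1-\varepsilon,\,1+\varepsilon)\,A_i\Big)\right]
  - \beta\, \mathbb{D}_{\mathrm{KL}}\big(\pi_\theta \,\Vert\, \pi_{\mathrm{ref}}\big),
\qquad
\rho_i = \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)}
```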
RYAN DONOVAN: Yeah. (Laughs)
ILLIA POLOSUKHIN: And it's applied now to a well-trained model on top of Transformers. Now it's like, okay, cool, this is actually the next bump in the quality and understanding of the world.
RYAN DONOVAN: Yeah.
ILLIA POLOSUKHIN: So I think what we're seeing now is just evolution, I mean, step functions happening in different places, not just in architecture. And they're all very simple concepts, now applied on top of the accumulated improvements that happened previously.
RYAN DONOVAN: Yeah. But there's been no further big explosion, almost like the Cambrian explosion of the Transformers and the ChatGPT moment. Do you ever look at the landscape, you know, like the old mad scientist, and say, “What, what have I wrought?”
ILLIA POLOSUKHIN: (Laughs) Well, this brings us to some of the focus I've had for the past couple years. The challenge with this is, if we zoom out a little bit and continue projecting this– I mean, I truly believe AGI-level capability is not far away, and in many ways we already kind of have it.
Like, o3 is phenomenal by all means and imagination, and some of the other models as well are really good at a wide variety of different tasks. So the challenge now is, okay, well, effectively you have a handful of companies that control and provide access to this, effectively, intelligence as a service, right?
And if you imagine this world of the intelligent internet that is ahead of us– it's the same as if, you know, AOL back in the day was controlling every access point, and that was the only way to do this, right? And they also, by law, need to enforce a lot of things, and they cannot open it up in different ways for innovation.
But here, I think the important part is that it's also similar to how the internet was the final communication breakthrough, right? Now everything is built on top of the internet. We're probably not gonna have another communication breakthrough that's going to be different from the internet.
I think with AI, it's actually more fundamental. This is the last technology, period, because everything else will be developed by AI already.
RYAN DONOVAN: Yeah.
ILLIA POLOSUKHIN: Maybe with a human in the loop or a human driving it, but AI will be the main horsepower. And if this horsepower is controlled by, let's just imagine, a single company, that's a high concentration risk of control.
RYAN DONOVAN: Right?
ILLIA POLOSUKHIN: Because now they can decide who can use it and who can't use it. They can decide when to censor; they can also decide when to skew things in a different direction, which can happen based on their own decisions, based on the government they're registered under, or just maliciously, because a specific group of people working there can target something. So it's just a very dangerous situation.
RYAN DONOVAN: Yeah. The company's interests aren't necessarily our interests. Right?
ILLIA POLOSUKHIN: Exactly. I mean, the company by definition needs to make money for the company, right? So their interests are very much– I mean, the example I use is, if you're sitting at any of these companies, you have a meta optimization function, which is how to make the most money for the company. And whenever you A/B test which product to launch, it's not like you evilly say, “Hey, we're gonna launch a product that manipulates people.” It's just that your A/B test will be which one makes more money, and the one that manipulates people will make more money.
RYAN DONOVAN: Right.
ILLIA POLOSUKHIN: And we've seen that with Meta and Facebook historically, and they kind of try to roll it back, but the optimization function is there to continue doing that, right?
And so that's why the idea we've been pursuing is this idea of user-owned AI, right? It's AI that is actually on the user's side, and it's optimizing for an individual's wellbeing and success. Which, again, is not about open-source versus closed-source; it's really about this meta game of the individual versus some corporation's economic success. And that obviously requires a whole different paradigm of thinking, and a whole infrastructure to be built. And this is where my stint in blockchain is very relevant.

For context, we started as an AI company building this vibe coding back in 2017, and it didn't work as well back then. We didn't yet have the scale of models needed to do this, and GPUs weren't there at the right scale either. And what we were trying to do was coordinate a lot of computer science students around the world to give us more training data so we could improve our model. Right? And we faced this challenge of, how do we coordinate people, and how do we create a platform where it's easy for anyone in the world to join and contribute?
And so, like, students in China, for example, don't have bank accounts. In Ukraine, for example, you need to sell half of your dollars on arrival, by law. Every country has some kind of requirements and restrictions, which create massive overhead. And so we started looking at blockchain as a way to solve our own problem.
Like, how do we actually facilitate this coordination, these payments and microtransactions, around the world? And at that time, in 2018, there was nothing that was really easy to use and actually scaled and offered cheap fees that we could tap into. And so we ended up focusing our efforts on building NEAR Protocol, which is effectively a blockchain designed to be scalable, fixed-fee, cheap to use, but also easy to use and easy to build on.
We launched it in 2020. It's been live for over four years now, and it has about 50 million users. People use it for remittances, for payments, for loyalty. It's about a hundred times cheaper for microtransactions than Stripe, probably 300 times faster on settlement than Stripe as well, and also way faster than even other blockchains in the ecosystem. It's easy to use; we have users who don't even know that they're using blockchain or NEAR, transacting through it very easily. But what it gives us is a very interesting platform to create a new type of incentive that's focused on individual users and not on the economic success of a specific entity. And so that's where, you know, we can discuss how to build that stack.
RYAN DONOVAN: Yeah. Well, I will admit, I've been a blockchain skeptic for a while. It does seem like it has, you know, largely been used as a platform for speculation or for enabling untrackable transactions. But this sounds like an interesting use, and I have seen other interesting uses that are less about the individual coin and more about solving a problem, which I think is what you're doing. How do they settle up? You said that the settling is fast. How do they go from having a NEAR token to having local currency?
ILLIA POLOSUKHIN: Yeah, so there are so-called on-ramps and off-ramps, which are effectively the last-mile way of going in and out of local currencies. We have this protocol we're building, which we're calling a unified liquidity layer, and it already allows exchange: it works with dollars, debit card, or ACH, and you can receive a stablecoin on any chain, right? And then we're making sure you can go back the same way. You know, we have a partner, Abound. They're doing remittances of dollars from non-resident Indians in America to their families in India. And so they have a partnership in India to then withdraw the stablecoin to INR, or even pay directly: India has this UPI thing that allows paying through QR codes, so people can actually directly pay for services and goods from their crypto wallet.
So there's kind of a, I would say, mesh of local partners in different jurisdictions doing the last mile, but then you as a user just have this single wallet that can hold stablecoins. So it doesn't need to be volatile assets; it can be dollar-denominated, euro-denominated, et cetera. And then you can go withdraw and pay directly at the time you need it.
RYAN DONOVAN: I know you're all about the user-owned AI. My understanding is that a lot of AI requires a lot of infrastructure. How can a user own the AI? Do they have to own the infrastructure as well?
ILLIA POLOSUKHIN: It's an interesting question. So what are the properties, right? The property you want is that the user can make sure a specific AI model was run. Now, if this model is closed-source, it doesn't matter, because you don't know what it is. If it's open weights, which is what we have with DeepSeek, Llama, et cetera, that's good: you can benchmark it, you can test it, but you still don't actually know what went into the model. Right? So you don't know if there are sleeper agents, if there are attacks or manipulation embedded into the training data, or if there are just pure biases from whoever was preparing the data, right?
RYAN DONOVAN: Or even copyright risks.
ILLIA POLOSUKHIN: Copyright risks, et cetera. And so ideally you want an actual open-source model where you know the source. Now, the challenge is, if you just build an open-source model, let's say somebody does this, and the weights are open and everybody can use it for free, then there is no economic feedback to go and build the next model. Right? And with these models right now, it's effectively all about velocity of research and development. So what we need is a system to build effectively open-source models that are still monetizable when people are using them.
So you want to know the inputs, but then the output needs to be actually monetizable. And so what we've built is effectively a system to do exactly that. It combines blockchain and hardware. Modern hardware, as of about a year ago, supports this mode called confidential computing. This is NVIDIA Hopper and Blackwell together with Intel Xeon CPUs: you can actually turn them into a specific mode where whoever runs the execution there can have a proof that indeed only this execution was run on this data, nothing else. And even the owner of the hardware is not able to access what's happening inside. Right.
RYAN DONOVAN: Interesting.
ILLIA POLOSUKHIN: So nobody, effectively, can see what happens inside. And this primitive allows us to build what we call a decentralized confidential machine learning cloud, right? This effectively unites anyone's hardware, enabled in this mode, to use for fine-tuning, training, or inference, where the hardware owner cannot see what's happening inside.
If you're training, you can have training data and training code that are public, that everybody can run, but the outcome stays inside this secure enclave. So the outcome is encrypted and only usable if you pay for the inference. So you get all the same properties as closed-source, but now everybody can inspect what went in.
The interesting thing is, as a user, I also get confidentiality, because my data doesn't go anywhere. Right? So when they run inference, I know my data doesn't go anywhere either, and you can still fine-tune on top of it, distill, do all the usual operations on top of these models that you would do with open source.
So it combines the best properties of closed-source, like monetization, with all the best properties of open-source, meaning developability and access: you can run it on your own hardware, you can run it on somebody else's hardware, but they cannot see your data. It's really offering, effectively, a new primitive.
And the only way to pay for these things is through blockchain, because it can run anywhere, right? And it runs inside these secure enclaves. So that's how we are combining these things and solving a lot of the core infrastructure challenges.
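(To make the trust model concrete, here is a toy sketch of the check a client might do before sending data to such an enclave. Every field and function name here is hypothetical, not NEAR's actual API.)

```python
# Illustrative only: the report fields below are made up, not a real SDK.

def verify_attestation(report: dict, expected_model_hash: str) -> bool:
    """Only send data after checking the enclave's signed attestation:
    the hardware is in confidential-computing mode, and the measured
    model matches the hash published with the open training recipe."""
    return (
        report.get("tee_mode") in {"nvidia-cc", "intel-tdx"}
        and report.get("model_hash") == expected_model_hash
        # In reality the signature is verified against the vendor's root cert.
        and report.get("signature_valid", False)
    )

# Toy check against a fake attestation report
report = {"tee_mode": "nvidia-cc", "model_hash": "abc123", "signature_valid": True}
assert verify_attestation(report, "abc123")          # advertised model: OK
assert not verify_attestation(report, "otherhash")   # swapped model: refuse
```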
RYAN DONOVAN: So you said there's the specialized hardware. Do they need to have that hardware to get this sort of encrypted, secret primitive?
ILLIA POLOSUKHIN: So you need to have the hardware, but it is a standard Hopper. You just need to have a pretty modern CPU, but, you know, modern CPUs are so much cheaper than GPUs, so you're probably getting one anyway.
For example, clouds and data centers that were built up until maybe the middle of last year wouldn't have it. So it needs to be something built out recently; that's why all the Blackwell clusters will have it.
RYAN DONOVAN: You mentioned that you were getting the sort of contribution from computer science students. Are you building a model or training a model yourself that's available?
ILLIA POLOSUKHIN: So we are actually working toward that. We're building all the pieces that are needed to do that. The way I think of it is, hey, let's take an AI lab and decompose it into components, right? So what are the components?
Well, there's an infrastructure layer, right? How do we train models, and how do we then fine-tune and do inference? And let's do that such that anybody can contribute but it's still monetizable. Okay, that's piece number one. Then, we have actually built a crowdsourcing platform that has computer science students and other students, as well as broader crowd workers.
This allows us to run data labeling and data curation workloads on the platform. And the next step, which we're designing right now, is this set of benchmarks. If you know the process of building a large language model, or any kind of modern AI model, it includes a lot of data work: sourcing data, cleaning and curation, maybe some synthetic data. Then there's the whole pre-training process, and then fine-tuning and model specialization. So there's a whole pipeline.
Now, each of these pieces usually has a whole separate team focused on it, and they evaluate their own work on their own set of benchmarks, but that's usually all internal. So we're designing this in an external way, where everybody can contribute and evaluate themselves in an open way. You also need to make sure that people don't game it and don't, you know, try to use test data in training, et cetera.
RYAN DONOVAN: Right.
ILLIA POLOSUKHIN: And so we're designing a system for that. Again, it's very helpful to have this primitive where you can effectively run a model on private data that nobody has seen, because we can crowdsource data and put it inside the secure enclave, so nobody has seen the whole data set. And then we can bring in people's models and evaluate them. So we're redesigning this process to get to, effectively, a decentralized AI lab where models can be continuously built.
RYAN DONOVAN: Yeah, that's interesting. How do you guard against or evaluate for the sort of like data attacks, the data injection, the sort of inception attack that somebody might use?
ILLIA POLOSUKHIN: Yeah. So this is exactly the question, and exactly the problem, that I think we in the AI space in general will have to deal with, because the internet is becoming more and more just inception attacks and sleeper-agent attacks. There are effectively two parts here. One is that whatever goes into the training data will be publicly visible, right? So everybody can evaluate it; you can run whatever analysis you want on top of it. That's why we need the evaluation and cleaning process to be its own competition, where people effectively compete to build the best, cleanest data set, and the evaluation includes attacks inside the eval set. That's why we actually need the eval set to be hidden, so people don't know the types of attacks they're being tested on; they need to figure out ways to clean the data of those attacks, which will probably include using a previous-generation model to detect them.
RYAN DONOVAN: Yeah. The eval dataset being part of the training dataset is an increasingly difficult problem, isn't it?
ILLIA POLOSUKHIN: Well, yeah, I mean, like everybody just pipes all the data, so (laughs)
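(A crude version of the decontamination check this exchange implies: flag any training document that shares a long word n-gram with an eval item. A sketch, not any particular lab's pipeline.)

```python
def ngrams(text: str, n: int = 8) -> set:
    """Word-level n-grams; 8 to 13 words is a common decontamination window."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def contaminated(train_doc: str, eval_item: str, n: int = 8) -> bool:
    """Flag a training document that shares any long n-gram with an eval item."""
    return bool(ngrams(train_doc, n) & ngrams(eval_item, n))

train = "the quick brown fox jumps over the lazy dog near the river bank today"
eval_q = "which animal jumps over the lazy dog near the river bank in the story"
print(contaminated(train, eval_q))  # True: they share an 8-gram
```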
RYAN DONOVAN: Yeah. I saw one– like the highest-level eval set, Humanity's Last Exam. Have you seen that one?
ILLIA POLOSUKHIN: Yeah. I have. Yeah.
RYAN DONOVAN: That one seems like it's just using security by obscurity, almost like the questions are so deeply rooted in some domain.
ILLIA POLOSUKHIN: Yeah. I mean, I know the people who worked on it. It's like, if you open it yourself, you probably cannot answer anything unless you go spend a bunch of time on Google.
So, I mean, part of it just shows these models are really good already, and all the normal questions are already answered. I saw another test, which was kind of trying to solve puzzles, and I couldn't solve any of them (laughs).
RYAN DONOVAN: (laughs)
ILLIA POLOSUKHIN: It's like, oh, you know, “AI cannot solve this.” I'm like, yes, neither can I. It's like, you know, find all the nines that– it's a bunch of dots and a bunch of numbers: find the numbers that overlap, find all the nines that form a triangle or something, or form a star, and you're like…
RYAN DONOVAN: Jesus. (Laughs)
ILLIA POLOSUKHIN: Yeah. But at the same time, we know those models still have a lot of issues, right? They're not consistent, they hallucinate, they make logical mistakes, they're not able to make proofs. So there are still a bunch of actual good problems that we should be evaluating on.
RYAN DONOVAN: Yeah.
ILLIA POLOSUKHIN: It's kind of a chicken and egg, because as soon as you publish something, it gets solved, but you don't know if it's really solved, because people just looked at it. I mean, even if they didn't train on that specific data, they can just generate a bunch of data like that, right? And then the model figures it out.
So that's why, again, I think having hidden evaluation is extremely important, because it allows something a lot more neutral…and then we're continuously getting more data from this crowdsourcing. That's why we're building all those pieces: to make sure we can actually have this.
And then you still need the aspect of user ownership, where we make sure the model's meta function is still optimized for user success. So that's another important aspect: evaluating that this model is making decisions on behalf of the user, in their preference.
RYAN DONOVAN: Right, right.
ILLIA POLOSUKHIN: And then on top of this, you still need all the tools and all the infrastructure for the agents to actually do work and serve the user, because the model alone is not gonna solve all the needs of the users. You need an assistant, and you need this assistant to be able to execute actions around the world, which also needs money. And so that's where all the other pieces come in, in this ecosystem we're building out.
RYAN DONOVAN: Yeah, the agents are definitely something a lot of people are talking about. Do you think any of the existing standards that are coming out, the Model Context Protocol, the agent-to-agent protocol– do you think there'll be one standard at the end? Or are we yet to see something that will become the standard?
ILLIA POLOSUKHIN: I mean, Model Context Protocol, I think, is a pretty good stab. We actually have this Agent Interaction and Transaction Protocol, AITP, because we started from a different problem, which was: how do different agents with different owners collaborate, right? MCP, and even A2A to an extent, I call single-player mode, right?
They assume everything belongs to the same user or the same organization, so you effectively don't have counterparty risks. And I think those are useful to have; we actually, you know, sent a pull request to MCP to add extensions, and we are going to add payments to MCP, so your model will be able to pay through MCP.
We're collaborating with Coinbase on this x402 protocol, which is based on the HTTP 402 status code, which has actually been around for a long time but was never implemented by anyone. So there's a way to pay through HTTP. But all of this is still single-player mode.
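(The 402 pattern reduces to a retry loop: the server replies 402 Payment Required with its terms, the client pays and retries with proof. A sketch using only Python's standard library; the header names and the wallet object are placeholders, not the x402 spec.)

```python
import http.client

def fetch_with_payment(host: str, path: str, wallet) -> bytes:
    """GET a resource, paying on demand. `wallet` is a stand-in for any
    client that can settle the quoted amount on-chain and return a receipt."""
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", path)
    resp = conn.getresponse()
    if resp.status == 402:                          # Payment Required
        terms = resp.getheader("X-Payment-Terms")   # e.g. amount, asset, pay-to
        resp.read()                                 # drain before reusing the connection
        proof = wallet.pay(terms)                   # settle and get a receipt
        conn.request("GET", path, headers={"X-Payment-Proof": proof})
        resp = conn.getresponse()
    return resp.read()
```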
The multiplayer mode is, hey, how do two different companies actually figure out how to do a transaction? Well, in the real world, it's effectively a legal contract; it all boils down to a legal contract. And, you know, maybe you've heard of smart contracts.
RYAN DONOVAN: Sure.
ILLIA POLOSUKHIN: So why don't we make smart contracts actually smart with AI, and allow agents to collaborate in a way that can be adjudicated: whether they've done the work, whether both sides contributed in the way that was expected. And so–
RYAN DONOVAN: So would the AI agents be making legal contracts, then?
ILLIA POLOSUKHIN: This is kind of what we're designing. It's a binding contract. We're not calling it legal right now, but I think in reality this is what it will effectively become. So the system we're designing is, effectively: you have agents between different trust zones.
They can negotiate on something, there can be escrow for the money, and the negotiation itself is recorded. Then, when the interaction is complete, either party can trigger a dispute, and an AI dispute agent joins, looks at the interaction, which is effectively your contract, and decides which side is right. Now, if that fails, if the parties don't agree with the dispute resolution, then you can bump it up to actual people in court. So you can have all of those interactions in court, and if the agents are representing actual legal entities, they effectively come in with that. Now, there's a lot of work to get there.
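(The flow Illia describes maps onto a small state machine. A toy sketch in Python rather than a real smart-contract language; the adjudicate callback stands in for the AI dispute agent, and escalation to human arbitration is out of frame.)

```python
from enum import Enum, auto

class State(Enum):
    NEGOTIATED = auto()
    FUNDED = auto()
    DELIVERED = auto()
    SETTLED = auto()

class Escrow:
    """Negotiate, fund escrow, deliver, then either release the funds
    or dispute and let an adjudicator decide who gets paid."""

    def __init__(self, buyer: str, seller: str, amount: int, transcript: str):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.transcript = transcript      # recorded negotiation = the contract
        self.state = State.NEGOTIATED

    def fund(self):
        assert self.state is State.NEGOTIATED
        self.state = State.FUNDED         # buyer's money locked in escrow

    def deliver(self):
        assert self.state is State.FUNDED
        self.state = State.DELIVERED      # seller claims the work is done

    def release(self):
        assert self.state is State.DELIVERED
        self.state = State.SETTLED
        return ("pay", self.seller, self.amount)

    def dispute(self, adjudicate):
        """adjudicate(transcript) -> winner; an AI dispute agent first,
        with escalation to human courts if a party rejects the ruling."""
        assert self.state is State.DELIVERED
        winner = adjudicate(self.transcript)
        self.state = State.SETTLED
        return ("pay", winner, self.amount)

# Toy run: the adjudicator sides with the seller
deal = Escrow("buyer.near", "seller.near", 100, "negotiated terms...")
deal.fund()
deal.deliver()
print(deal.dispute(lambda transcript: "seller.near"))  # ('pay', 'seller.near', 100)
```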
RYAN DONOVAN: Sure, sure. Yeah.
ILLIA POLOSUKHIN: But fundamentally, I think that's how commerce in the future will be done because a lot of the current interactions, negotiations, you know, defining terms, et cetera, all of that will be done effectively by AI agents on the fly, and then they can, you know, facilitate payments, transactions, et cetera, underneath.
They can also go and negotiate with other parties on your behalf. For example, say I need to order a hundred tons of steel and deliver it somewhere in the U.S. Well, an agent can go find where to source the steel, find the tanker, find a warehouse to house it, make sure all the contracts are set up, and then come back to me with a master contract that includes all these items, plus a little surcharge for doing all that and for having a repository of all that information and preexisting relationships. So the commerce of the future will be more like that.
RYAN DONOVAN: Right.
ILLIA POLOSUKHIN: And on an individual level as well. One of the paradigms I usually talk about is having an AI assistant on your side. One of the things humanity has gotten itself into is this decision bottleneck, right? There are so many small decisions we're making all the time, day to day. And we ended up delegating a lot of them, because you cannot go and review every single pharmaceutical company and every single product and decide what's good for you. Right? Neither of us are experts.
RYAN DONOVAN: We rely on experts, right? Yeah,
ILLIA POLOSUKHIN: Yeah, we rely on experts. But those experts don't evaluate for you; they evaluate in general: is this okay in general, based on statistics, et cetera. And so your AI can actually go and evaluate that for you, and actually talk with every single pharma company's assistant, and, given your medical history, given your medical state, find the best product, maybe even ask them to fine-tune the formula for you specifically. And that actually fits really well with peer-to-peer interactions, right? Because you're effectively transitioning away from middlemen deciding what is good enough for everyone and then distributing it back, which is literally how all of the internet and all of commerce is done.
You know, the supermarket is the same thing: they source some products, and you can choose from them; if you want anything else, good luck. But your assistant can go and source anything, anywhere. And on the other side, the AIs of those factories and services can figure out how to budget, how to bundle it, how to ship it. Yes, maybe you only need one product, but maybe there are other people in your area who also want it.
So I think there's a transformation that's going to happen in society with this, and it fits really well into peer-to-peer interactions. And again, this is where blockchain is extremely useful, because it facilitates that at a way faster speed.
[Outro music]
RYAN DONOVAN: All right, everyone, thank you for joining. It's that time of the show where we shout out somebody who came on to Stack Overflow, dropped a little knowledge, shared some curiosity, and earned a badge. Today we're shouting out a Populous badge winner: Adi Lester dropped an answer that was so good it outscored the accepted answer, on the question “DataTable - foreach Row, EXCEPT FIRST ONE.” So if you are curious about that, we'll drop the question in the show notes.
I'm Ryan Donovan. I edit the blog and host the podcast here at Stack Overflow. If you wanna reach out to us with questions, concerns, topics, et cetera, email us at podcast@stackoverflow.com. And if you wanna reach out to me directly, you can find me on LinkedIn.
ILLIA POLOSUKHIN: I'm Illia Polosukhin. I'm on Twitter @ILBlackDragon and on LinkedIn at Illia Polosukhin, and you can keep track of NEAR at NEAR.org and @NEARProtocol on X.
RYAN DONOVAN: All right. Thank you very much everyone, and we'll talk to you next time.
[Outro Music]