Waleed Kadous, Decoding LLM-based approaches and startup versus big tech culture - Ep.3, Season 2

Dr. Waleed Kadous is Chief Scientist at Anyscale, the company behind the popular open source distributed computing platform Ray. He leads the company’s LLM efforts. Prior to Anyscale, Waleed worked at Uber, where he led overall system architecture, evangelized machine learning, and led the Location and Maps teams. He previously worked at Google, where he founded the Android Location and Sensing team, responsible for the “blue dot” as well as ML algorithms underlying products like Google Fit. He also holds more than 40 patents.


Show Notes

Contact Links
Waleed’s LinkedIn account: @M Waleed Kadous
Anyscale’s Website: https://www.anyscale.com/
Ray: https://docs.ray.io/en/latest/


Transcript:
Ali Zewail:
Welcome to the Startups Arabia podcast. My guest today is Dr. Waleed Kadous, chief scientist at Anyscale, the company behind the open source distributed computing platform Ray. He leads the company's LLM efforts. Prior to Anyscale, Waleed worked at Uber, where he led overall system architecture, evangelized machine learning, and led the Location and Maps teams.
He previously worked at Google where he founded the Android location and sensing team responsible for the blue dot as well as ML algorithms underlying products like Google Fit. He also holds more than 40 patents. If you're interested in large language models, and who isn't these days, this conversation is for you.
We go pretty deep into how to work with them and how to build on them. But also, this conversation is interesting because we go into Waleed's experience working at Google and at Uber, at the very highest of levels, and contrasting the founders of these two companies, Larry Page and Travis Kalanick, the culture between them, and the transition from one phase of a company's life to another at this very large global scale.
It's an incredibly enjoyable conversation.


Ali Zewail: Welcome to the Startups Arabia podcast. My guest today is Waleed Kadous, chief scientist at Anyscale. I am so happy to meet him today because, uh, I've actually used products that he has had a significant hand in, and I use them every day. He's someone who's worked on Google Maps, and on Uber's maps as well.

So, I mean, every single one of the listeners has probably touched code that Waleed has written or influenced in one way or another. And now he's doing something really cool at Anyscale. And I think this is probably going to be one of the most technical podcasts we've ever had, but also a really fun one I'm looking forward to.

Uh, welcome, Waleed.

Waleed Kadous: Thank you very much. I'm really excited to be here, and thank you very much for the invitation, Ali. It's great to talk to you today. Yeah.

Ali Zewail: Um, all right. So maybe we can start with how you started your professional life. You studied your undergraduate degree in Australia, and your PhD there as well, and then at some point in your life you ended up at Google and Uber and that world of tech startups in Silicon Valley.

So can you briefly tell us how that story went?

Waleed Kadous: Yeah. So I was born and raised in Australia, from Egyptian heritage. I completed my PhD in artificial intelligence in about 2000, and this was before... when I told people I was working on AI, they'd say, but there's no commercial applications of AI. Here we are 25 years later and...

Ali Zewail: Guess what?

Waleed Kadous: Guess what?

Exactly. Um, after I finished my PhD, I did two fellowships at the same university, one in robotics and another one in natural language understanding. But it took me a while to work out that actually I didn't like the academic world that much, and I wasn't that good at it. So after a while, I started looking at other opportunities.

I interviewed for Google Australia, but ended up getting a role at Google in the U.S. And I joined a really interesting group that was on the edge between research and production. They'd just spun out of Stanford, and they were trying to make their own maps that didn't belong to one of the maps providers, because that would allow Google to do some amazing things, like super fresh maps. And I completed my first project there.

And I think the moment when I said, I'm really glad I made this choice, is when I realized that something like 10 million users were using the maps that we had generated. And that was such an addictive feeling. Of course, now it's 250 million, but the point is just that sense of impact that you get from working in a corporate environment; I found it very addictive.

I worked on other things, including Google's research lab, Google X at the time, on indoor location, and helped to launch indoor location on Google Maps. And then I took on a role in Android, the Android operating system, and we were responsible for making the blue dot that you see when you open Google Maps really good. That was our job. But that was also the time that AI started to come back into my life, and we started to do things like activity recognition: can you use the phone to detect if someone is walking or running? Can you detect when someone has fallen? All of that kind of stuff.

And then after a while, this is after eight years, I was a principal engineer at Google, but I started to feel Google was getting a bit slow. So I moved to Uber, starting on the location team, but soon after I took over leading the engineering of the Maps team. And then, you know, funny things happen at startups.

So Uber was really, really great for velocity. It was an execution machine; they were really pumping out features. But the problem is that if you move that fast, eventually your technical debt catches up with you. And our technical debt at Uber got so bad that we were having to hire people just to keep up with the systems, let alone adding new features. So I worked with the CTO at the time to drive an effort at Uber to reduce our technical debt. And that was really scary, because at the time, many of you remember the microservices revolution, and Uber had kind of gone microservices crazy.

We had ended up with 4,000 microservices and 2,000 engineers; in other words, two microservices per engineer. That's not a healthy ratio. And so basically this really bad anti-pattern started to happen, where instead of thinking about your microservice architecture, you would just write one more microservice to support a feature.

And it just got into a crazy mess. So we gathered engineers across the company to kind of pivot that and fix that architecture. And then, as that came to an end, I decided I wanted to try the startup world. And so the opportunity at Anyscale, originally as head of engineering, kind of opened up.

And Anyscale was really, really interesting to me. One of the things in AI that's really hard is that to really push the limits, you need to use multiple computers at once. And as anyone who's done AI knows, distributed computing is really hard. Getting computers to coordinate in a performant way is painfully difficult.

And I remember, because I started on this in my PhD when I was trying to get final results: I had to build my own cluster of machines and wrote my own really hacky scripts to do it, and it was incredibly painful. And Anyscale came out with this technology called Ray, really the tech that was built at Berkeley that made it easy.

I think I fell in love with Ray when I used it for a particular type of machine learning called decision trees. Maybe we're getting a bit too technical here, let me know, Ali, if I am. But I had tried to parallelize decision trees for years, and it's a really hard thing to parallelize because it's a divide and conquer algorithm.

In other words, you take your data, you split it in half, and then you take each half and divide it again. That doesn't fit into normal distributed computing models. But in the space of three hours, I was able to implement it and show a performance gain of 60 percent on my machine, and then I was able to scale up to eight machines and see about 50 percent of the theoretical maximum. In other words, when I used 32 processors, it was as if I had 16 processors. I was like, okay, this tech is for real. So I ended up joining the company, originally to head engineering. And then, like for many of you, something magical happened in November 2022, right?

ChatGPT came out, and it's like the world kind of woke up. At the time I had built engineering up from 25 to 75 people, I had built up my second layer of managers, and I was able to find someone to hand it over to. And I went to the founder, Ion Stoica, who's a very famous Berkeley professor, and said, look, this is going to be hard.

So just let me step down. I'll become chief scientist and I'll drive the large language model transition for the company. And basically since March 2023, that's what I've been doing: taking that company... Ray was a very good general purpose technology, but the LLM and Gen AI revolution was so intense. I remember a blog post that I put out, right? It was normal for us; we were very well respected within the machine learning and data science community, so we would put out a blog post and get like two and a half thousand views. So that was a signal to us of the size of the market.

I put out one blog post called "Numbers Every LLM Developer Should Know," and it blew everything away. We started to get 40,000 views on that article. And suddenly we realized that the scope of people who are interested in LLMs was just six, seven times as large as the scope of people who were really interested in machine learning, distributed infrastructure, MLOps, and all of that kind of stuff.

So we made a hard... I mean, maybe you'd call it a pivot, maybe you'd call it a refocusing or whatever. The technology underneath didn't change, but the way that we interacted with our customer changed completely. We started to offer new products, and we also had to change the culture of the company, right?

At this time there were a hundred people at Anyscale, and a lot of them didn't understand LLMs, didn't understand the potential of LLMs, and really weren't thinking about things in LLM-specific or generative AI-specific ways. So I led that effort to kind of change the culture.

And now Anyscale is one of the leading voices in inference with large language models. We offer very good prices, we offer things like fine tuning, and it's a very intense competition between us and companies like Fireworks, Together, Replicate; these are the companies that serve open source large language models. It's a very competitive environment, but we're at the forefront of that now, developing very clever algorithms, because we use all of the experience we had doing distributed computing before.

And we're able to translate that to large language models. That's kind of some of what I've worked on. Maybe it's a bit long winded, but that's about as well as I can summarize 15 years.

Ali Zewail: No, it is not long winded. It's actually interesting, and it's interesting how your timing was kind of right, even though you weren't focused on the exact specific things; you were kind of ready with all the components.

Waleed Kadous: Yeah.

Ali Zewail: I think it took some courage to kind of drop everything and shift the whole company in a new direction. Luckily, with all the VC support and all the buzz around LLMs, thanks to ChatGPT... I mean, I can imagine that if you had tried to do that in 2022, you would've had much more resistance from your board and things like that.
Waleed Kadous: Yeah. I mean, I...
Ali Zewail: It's really great.

Waleed Kadous: Yeah, I mean, it might seem like an obvious decision in retrospect, but that's life at a startup, right? At the time, it's like, we have real customers paying for our traditional products, and they're very respected, big name companies, Fortune 500 companies. Why would you choose to pivot? And it's just that the growth opportunities were so much bigger. The way I think about it is, I actually looked up the numbers, and there are about half a million machine learning, data science, infrastructure engineering type people in the world. There are about 25 million developers.

So we looked at those numbers and we said, in the future, almost every developer is going to be doing things with LLMs at one stage or another, right? If it's not them using LLMs to develop their code, it's going to be them deploying products that use large language models. And we just said, yeah.

And there was a lot of internal debate, inevitably, as you can imagine, when you do things like that. But like you said, it's kind of funny: when a music band is successful all of a sudden, you often hear them say, yeah, it took us like 10 years to be an overnight success, right?

And that's how it felt. We'd been working on things in the background, and then the large language models came, and with large language models you have to deal with the scale question from day one. The really good models take like four GPUs working together to produce one answer.

And so it pushed this scale question into the forefront. And because we'd been focused on machine learning, and scalable machine learning, for such a long time, we were able to redirect very quickly. But it's only because of the foundations we built over the preceding three to four years that we were able to make this pivot.

Very, very quickly. Again, it's not really a pivot, like we, um...

Ali Zewail: Well, it is a type of... I mean, a focus pivot.

Waleed Kadous: Yeah.
Ali Zewail: There's actually a name for it, but I couldn't remember it. Right, a zoom-in pivot.

That's what it was called in The Lean Startup. And I would say it took a lot of courage, to be honest, because a lot of people in that situation would have said, okay, half of the resources will focus on LLMs and the rest will do this, and we'll have two business units. And that would not have cut it, because the other competitors who were focused would have won the market share. So, yeah, really cool story. So, maybe drilling down on Anyscale: it's kind of similar to Databricks.

It started in Berkeley as an open source project, with academic-background co-founders and things like that, and then it became a startup. Is that kind of startup different in terms of culture or process or the way it works than, say, an Uber or a Google?

Waleed Kadous: Yeah, it is. I mean, it's very different to... well, it's kind of like a hybrid, right? The deep technical details that you get from a place like Google, they were there, and the execution focus that you got from Uber, that was there. In many ways, it has characteristics of both of my past experiences, right?

One was the execution focus and one was the deep technical focus, so there was a respect for deep technical work. But the way I think about it is that a startup can fail for three essential reasons, right? The first one is execution risk: you were not able to execute on the plan because you didn't build the team right.

You know, whatever else. Then product risk: you built a really functional thing as defined, but the market didn't want it. And, you know, with Uber, those two were present, but the execution risk was dealt with, and within six months the product risk was dealt with, right?

So most of the risk of the startup had been averted. When you do a startup like the one we're doing at Anyscale, there's a third type of risk called research risk, right? Which is: nobody knows if the problem that you're trying to solve can even be solved. And you have to balance that with the practical problems of selling a product. And with many of the things that we tackled, we took wrong directions in engineering quite a few times. So I think what happens at a place like this, that has that university DNA, is that there is an element of research risk: can this problem be solved?

Can you actually make distributed computing both efficient and easy to use? That's a 30 year old question in computer science, right? And maybe we've solved it, maybe not. It's always been a challenge of performance versus ease of use when it comes to distributed computing. And so there's this third element of risk that comes into it.

I feel like now we're more focused on the execution and product risk aspects, but I find this kind of model, of execution risk, product risk, and research risk, to be a useful model for any startup to look at: where are we facing the challenges? Different startups can succeed or fail for any of the reasons along those different axes.

Ali Zewail: I mean, you mentioned that this is like a 30 year old problem, and it's been there for a while. Parallelizing complex workloads is very difficult, and nobody has really been able to make it easy the way Ray has. So can you give me some background? What was the key unlock, or the key insight, that made Ray possible in the first place?

Waleed Kadous: I think the heart of Ray is a very complex technical thing called a global scheduler, right? So when you have work that you want to parallelize, you need something that does the coordination, and the usual problem is that starting a new task, or a new thread of work, across multiple computers is really, really tricky. But this group worked out how to build an efficient combination of a global and a local scheduler, where there's a bias towards doing things locally, but if that local machine gets overloaded, you can start to move the workload to other machines, and you can add new machines to the cluster. It's really this aspect of how do you build such a thing, right?

It's hard enough to build a scheduler on a single machine, but over years they built this really good design. And I think at the heart of the success of Ray, the open source project that Anyscale is the company behind, is a certain ambition about tackling this problem that's 30 years old.

And I still remember when I read the white paper for Anyscale. There was this line in there that said something like: given the choice between architectural complexity and API simplicity, we will choose API simplicity. And that one line was one of the most interesting reasons to be at the company.

And it was also the thing that gave me nightmares, because actually dealing with that level of architecture... at its core, Ray is a fairly complex system. And usually, when you're doing systems work, you want simple systems with clean abstractions and all that kind of stuff, and simple cases.

But at its heart, Ray is actually fairly complex. You do things like distributed reference counting, and that's as scary as it sounds. Most of us are used to doing reference counting on a single machine; that's at the heart of things like Python. But now you have to do reference counting across multiple machines.

Many of us are used to having an object memory, but what happens when you distribute that object memory across multiple machines? So at its heart, it's like: we've developed these techniques over time, let's bring them together to tackle this ambitious goal. And it's not perfect. I'd say Ray is amazing and has improved significantly, but I would say it's still not perfect, because it's a somewhat leaky abstraction.

You still need to know a little bit about how Ray works to take full advantage. But the nice thing is that Ray has been increasingly adopted by library writers in the machine learning and AI community, and that's taken the burden off us, right? Because then it's only the people who are writing the machine learning libraries that really need to understand the complexities of Ray.

You can then just go and use the library, and that person's hard work, to solve your problems. And it's really the ecosystem around Ray and the libraries that support Ray, some of them written by Anyscale, but some of them also written by the community as a whole, that's really interesting.
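To make the Ray programming model concrete, here is a minimal sketch of the divide and conquer pattern described above, expressed as nested Ray remote tasks. The recursive splitting is illustrative only; a real decision tree trainer would split on features rather than simply halving a list.

```python
import ray

ray.init()  # starts a local Ray instance, or connects to an existing cluster

@ray.remote
def build_subtree(data):
    # Hypothetical leaf condition; a stand-in for real decision-tree logic.
    if len(data) <= 4:
        return data
    mid = len(data) // 2
    # Each half becomes an independent task that the scheduler can place on
    # any machine in the cluster, preferring local execution and spilling
    # over to other nodes when the local machine is busy.
    left = build_subtree.remote(data[:mid])
    right = build_subtree.remote(data[mid:])
    return {"left": ray.get(left), "right": ray.get(right)}

tree = ray.get(build_subtree.remote(list(range(32))))
print(tree)
```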
Ali Zewail: Yeah.

It's the power of open source. So cool. And you touched upon this a little, but you joined Anyscale at a relatively early time. It was founded in 2019, and I think you joined in January 2021, versus Uber and Google, which you joined later on in their stories. How has that been different? How have you experienced that differently?

Waleed Kadous: I often think in retrospect that I did it in the wrong order, that what I should have done is started at a small startup and then worked my way up to bigger companies. But everything informs what you do. I think the key thing is that the pace of execution is much faster, but as a learning experience, startups are wonderful too. If you think of yourself as having a certain scope, right, like managing 20 people, let's say: if you manage 20 people at a company like Google, you're managing 0.0001 percent of their workforce, because there are 200,000 engineers at Google. If you're managing 20 engineers at a startup like Anyscale, that's like a quarter of the team, a third of the team. And so just the level of thinking that you have to do about who our customers are and what the details are, it's a very different thing. The other thing to say is that one isn't unconditionally better, right? Definitely the experiences I had at Google, just seeing how to build quality systems, that was invaluable.

And so we tried to replicate that. A lot of the effort we put in as I built the engineering team was about instilling a culture of excellence and really holding a high bar for quality, and really mixing the two cultures, right? When I first joined, it was very much a lot of graduates from Berkeley whom people knew.

It was very monolithic, mostly males of a certain age, right? And then we started to bring in experts from other areas, from places like Google and Facebook, people who just had a bit more engineering maturity, for lack of a better term, and doing that culture change allowed us to evolve.

So usually one of the things you find is that at a company like Google, most things, like the engineering ladder and promotion and all that kind of stuff, are already defined. When you're at a startup, part of your responsibility as one of the early people is to define the culture of the company.

And that is not an easy exercise, right? So I think there's a lot more building, and not just building of the technical aspects of things. Early in the life of a startup, you have to build all of the things that a company like Google built 10 or 20 years before, right? But that's kind of the exciting part of it, right?

Because you can take what you've liked about your experiences at Google and Uber and other places, and you can bring everybody from every corner of the startup world and medium-sized companies all the way up, and everyone mixes their expertise together. And you get something much stronger that way.

You know, I think diversity is kind of an overused term, but really, if you think about it, aside from gender diversity and all of those kinds of things, diversity of background is very critical, because there's a lot of really interesting research that shows that homogeneous groups are much smoother.

But they're not as effective as heterogeneous groups. So of course the challenge is that there's going to be a bit of cultural friction between the freewheeling academic style of "I built the prototype and threw the code on the web," versus the "wait, I was dealing with a system that handled 3 million QPS."

So there's always that kind of friction, but the beauty and the joy, where things work, is where you bring those two worlds together. And if you're lucky, you can maintain the execution speed of the youthfulness along with the maturity, and not head down the dead ends, thanks to someone who's had the experience.

Right. And it comes from a mutual respect for one another. But again, this is the fun part of a startup: you have to solve whatever problem comes along, and because you're a much bigger fish in a smaller pond, you learn much more quickly. You know, at Anyscale I was on the executive team, right?

So I had to learn about sales. I had to learn about marketing. I had to understand how we build a product organization. How do we set the right cultural balance between product management and engineering and get those two teams to work together? Where do we pull in design? All of those kinds of questions are struggles that you face at a startup that you don't...

...face at a larger company. So, I mean, you learn much faster at a startup, to be honest. But everybody has to choose what's good for them. Maybe you're in a stage of life where you can't deal with the financial risk of being at a startup, and you need the income that you get from working at one of these large companies. So there's no judgment there. It's just that throughout life we have to make different choices about what we want to work on and the different styles of places we work.

Ali Zewail: Yeah. That's the beauty of diversity, right? And, yeah, I mean, startups are learning machines, essentially. So it's really cool to be in one, if you just go with the flow and keep learning and keep adding value.
Waleed Kadous: Yeah. I think one other thing I'd highlight is that the type of people who succeed at a startup are a little bit different. Actually, one of the things we learned the hard way is that there are many people who could be successful in an environment like Google, and were very highly ranked at Google.

And then you bring them into a startup, where you need much more autonomy and self-drive, and they don't do well in the environment. Even though they're technically brilliant, a startup is just a very challenging environment.

Ali Zewail: Yeah, I would definitely underline that. I mean, at my first startup I hired some people who were really brilliant. Before that I was in the corporate world managing a big technical team and all that, so I chose people kind of with the same lens, and they didn't do very well. And I wouldn't say it's even about autonomy; it's just that when you're delivering on a product that's been there for a long time, it's all about dependability and delivering as per spec and things like that.

And the startup needs exactly the opposite. It needs someone who is creative and flexible and will notice that the customer really needs something other than what you're working on, and raise the flag, and maybe take the initiative to do something. So they're actually very different personalities.

Both of them might be autonomous and responsible in their right environment, but in the other one it's simply difficult to cope. And it's very dangerous to hire with the lens of the corporate world into the startup. Even though some people do make the transition, they are rare.

Waleed Kadous: Yeah. Another thing I'd like to highlight is that one of the big differences is comfort with failure, right? If you're in a big company and you're one of these people who's had a very linear progression through life: you got into a good university, you joined Google, and you worked your way up.

Suddenly you're thrust into this environment where you can do everything right and it still fails, right? It might be competition, right? So you have to take an experimental mindset and say, look, I don't understand exactly how people are going to use our products, so let's try something, and if it doesn't work...

...let's try something else, right? Whereas at a place like Google, the problem is well understood. If I think back to product risk and execution risk, there's really not a lot of risk at a place like Google; even the execution risk is kind of dealt with, because you have the recruiting machine, you've got great people.

So there also has to be a comfort with failure, and it's not failure of the startup as a whole. I mean, startups are like failure machines, like you said, right? You kind of stumble your way through until you find something that works. It's kind of like a random walk, hopefully a little bit better than a random walk, but kind of like this random walk through the space till you find something that clicks.

Right. And so not a lot of people are adept at switching to that, when you have these new types of risks that you weren't dealing with before.

Ali Zewail: Exactly. So, now that we're comparing cultures: you worked in two very different cultures, Google and Uber. These are totally different types of companies, at least from what I hear. How did you experience that? What are your reflections on the contrast there?

Waleed Kadous: Well, I mean, yes, the transition was somewhat difficult, right? It was a really difficult transition because you kind of assume that everything is done right at Google, but you start to realize, no, there are some issues at Google. I think the main one was execution speed, right?

I'll give you an example. At Google, to ship a product, they had this system called LaunchCal, which is the launch calendar, and before you could launch, everybody had to sign off. And there were about 40 people who had to sign off, everyone from legal to internationalization to the VP of this. And so launching stuff just became very, very difficult.

And it makes sense for Google, right? Because there's so much at stake: that search page, you change one pixel and it's a 3 million, 5 million per day difference, right? So it's set up for them; we have to have all of this ultra-caution stuff. At Uber, because it was much newer, it was just like the wild west, right?

Teams would just do things. One example of this going crazy: at one point there was a bar for notifications at the top of the Uber app, right? And every team would just put something in the bar. And so it ended up that notifications were like two thirds of the screen and the map was at the bottom, right?

Because every team was just pushing stuff out and no one was looking at the overall experience. Now, that was something that was fixed, but it gives you an idea of how these different modes operated. But it's just this spirit of fast execution, also.

I think Google started without much competition and it was far ahead of everyone else. Uber was in a down and dirty street fight, almost, for the market, and got bruised and battered a few times, like its whole experience in China, for example, where they went in guns blazing. So one thing that I've learned is the impact of competition on a company's culture, right?

And of course, things were very turbulent at Uber, right? We had two CEOs. There was a time at Uber when reading the front page of the New York Times was more informative about what was happening inside your company than going to the all hands, right?

And different board members leaking things. It was completely insane. You'd go to an all hands, and literally someone like Mike Isaac is watching the all hands in real time and blogging the contents of it. It was completely crazy. So, you know, different, different things.

And the idea is that you pick up the best from each of them. One thing, again, is just realizing that Google was a walled garden, technically speaking, but Uber was almost entirely built on open source, right? I went from Google, which had its own terminology and very well crafted things, the way Eric Raymond describes it, a cathedral, very much well designed and everything...

...to Uber, which was very much like a bazaar. Rather than being organized, it's more like an ecosystem of different forces, right? So I had to relearn all of the terminology: this is this thing in Google, big file or whatever, and this is what they call HDFS in open source.

And one of the great things about Uber was that they not only used open source, they actually created it. When I look at how many companies spun out of Uber that were open source related, you know, Chronosphere, which is a very popular logging system, Cadence, which is kind of like a journaling system for executing things with guarantees, it was actually a very fruitful place.

And on one side you could say, well, all of that value was lost. But on the other hand, you can say that kind of flourishing, crazy environment where no one was really in charge was also a very fruitful environment that led to a lot of innovation.

Ali Zewail: Yeah. The trick is just to keep it from exploding, with technical debt or whatever. But if you can manage to keep it going that way, it'll be great.
Waleed Kadous: Yeah. Yeah.

Ali Zewail: Yeah. It's very interesting as well, you know, how it reflects almost the personalities of the founders. I think both at Google and at Uber.
Waleed Kadous: Yeah, I think so. I was very fortunate to work with both Larry Page and Travis Kalanick. Very, very different personalities. Larry Page was very, very forward thinking; in some ways he was very hard to work with because he lived in the future. So we would come to him with ideas we thought were really powerful, and he'd be like, you guys are smoking crack.

That's not a great idea. But he really thought about innovation very systematically. And Travis was more like a very good tactician: our competitor moves X, we do Y. And you're always trying to bridge the gap between someone who lives far in the future and someone like Travis, where we were trying to bring AI into Uber and trying to explain why he should make a forward investment in AI.

And so there are always challenges. The CEO's philosophy about the future really has a huge impact on strategic versus tactical decisions. And, you know, Larry Page stopped being involved in Google day to day by about 2017, but he made the big bets.

He really did. Like, he bought YouTube. I remember when Larry bought YouTube for 1.8 billion, and people said, 1.8 billion for YouTube? And now it's very clear that he saw the future. The same thing with Android, the same thing with Chrome, right? He's the one who drove those forward investments and really thought about the long term.

So, yeah, I mean, it really does make a big difference, for better or worse, right? Like Travis, I'd put it as... pretty morally flexible. Maybe that's the best way to put it, you know...

Ali Zewail: ...to put it.

Waleed Kadous: Yes. And that cost the company as well, because again, you know, this...
Ali Zewail: Yeah. And cost him personally.

Waleed Kadous: Yeah, it cost him personally as well. But also, Google had a more collective structure,

a consensus-driven structure, which slows you down but in some ways pays off in the long term. Whereas Uber had a much more individualistic culture, and everything was built around that, even the compensation system. But that also led to people focusing not necessarily on optimizing for the company as a whole, but for their own careers, right?
So, for better or worse, the characteristics of the founders are reflected in the DNA of the company.
Ali Zewail: Yeah. And that's why it's been fascinating to watch Dara and what he has done at the company. He's kind of turning it into a different company, and moving a culture is very hard, but I think he's doing a pretty impressive job, and they're finally profitable this year.
Waleed Kadous: Yeah. I mean, everybody has opinions, and opinions are very easy to come by, but I think Uber has become a very successful operational company, right? But there was a moment when it could have been more, right? When it could have been a great technical company.

You know, when there were more ambitions, right? And again, I feel like at that point when Dara took over, there's no doubt that he solved a lot of the problems, right? There was some crazy stuff, like the legal case with Google over autonomous vehicles, and Dara just very methodically removed every little obstacle.

You'd just see these news bulletins come out saying, okay, problem X has been fixed, problem Y has been fixed. And you knew that Dara had just worked to say, what's the biggest problem I need to solve to first get us to IPO and then get us to profitability? Whereas Travis had a lot of ambition and was willing to take on new challenges, and gave engineering the space to do that.

Right. So I think Uber is a successful company. You can have a discussion about whether it's a successful operations company or a successful tech company. It's probably one of the most capital intensive businesses there is. There's a lot of dealing with atoms in the real world, versus just bits, which is what something like Google has to do.

It's just a very, very challenging thing. And it's hard to maintain growth, right? You run out of atoms, you run out of cities to launch in.
Ali Zewail: Constraints are much more physical.

Waleed Kadous: Yes.
Ali Zewail: And probably what made Travis the right person for it.

Waleed Kadous: At the time, yeah. You know, there's a question of who's the right leader at a given particular time; it's not always the same person. I like Ben Horowitz, who, again, I've had the good fortune that he's on the board of Anyscale. He has this piece where he describes wartime leaders versus peacetime leaders.

Ali Zewail: Yeah.

Waleed Kadous: And wartime leaders are much more command driven. It's: this is the direction we're going in; if you have a different opinion, I'll listen, but I have to be much more directive. Whereas the peacetime leader is the fostering type of person. And the best leaders can flip between wartime and peacetime.

But most people, most managers, tend towards one or the other. And so it's very clear that Travis was a wartime leader, and also very clear that Dara is a peacetime leader, right? With all of the characteristics that come with that. And, yeah, the right leader for a startup at a particular time may not be the right leader two years from now.

And that's also something that you learn as you observe the life cycle of other startups.

Ali Zewail: Yeah. And even countries.

Waleed Kadous: Yeah.

Ali Zewail: Yeah.
Waleed Kadous: So...

Ali Zewail: Let's maybe go down a more technical tack. So, I've heard you talk about how having an open source model that's fine tuned can give you the same performance or better than, you know, the top closed source models, like GPT-4 level performance.

So, I mean, that's great to hear, fine tuning it with your data and stuff. But what exactly is fine tuning? Because I'm kind of speaking with the voice of founders-to-be in the region here, who want to build relevant startups.

Waleed Kadous: So be prepared for some controversial opinions at this particular point, and maybe not everyone agrees with me. First let's talk about open source versus closed source and why you might want to use one or the other, right? So when we talk about closed models... forget the name, the name is misleading.

OpenAI is the leading example of a closed model, right? We don't know how GPT-4 works unless it's been leaked. OpenAI released a document that describes GPT-4, and one person summarized it as, "Well, we used Python," right? They shared very little about how it was working.

And on the other hand, you have open models like Llama from Facebook, but also from startups in France, like Mistral. And even in the Arab world, there's the Technology Innovation Institute that's released the Falcon models, right? These are open models that anyone can download.

Anyone can run them. Some of them have different restrictions on them, but the thing is you can build your own version. You can see exactly how it's built; there's a lot better understanding. You have deployment flexibility: you can deploy it on your own on-prem, or you can pack it with your binary or whatever else you want.

Right? So it's very clear and transparent what's happening, you have control over where it's deployed, and, frankly, it's cheaper. Those are the main three reasons why people end up using open source models.
Ali Zewail: Just to give people an idea of how cheap: I think you mentioned that you got the same performance out of Llama 2 70B, fine tuned, as GPT-4 for certain tasks, with GPT-4 at 30 times the cost.

Waleed Kadous: Yeah, absolutely. So in that particular case, there's a process called fine tuning where you take examples and, even though the model has already been built, you can tweak some of its parameters, right? And one of the typical things you want to do with these language models, a very common task, is you have a database and you have natural language, and you want to connect the natural language to talking to the database.

So you write a natural language to SQL converter. And you could use GPT-4 as a natural language to SQL converter, and out of the box it gets 84, 87 percent accuracy, right? Which is really great. Then you go and use something cheap, like Llama 2 7B, which costs 15 cents per million tokens.

It used to be one two-hundredth of the cost, and now maybe it's one one-hundredth of the cost. You do this fine tuning process on it, and all of a sudden it outperforms: it goes from like 17 percent, which is useless, to 93 percent or something like that. I can't remember the exact numbers off the top of my head, but fine tuning is one way that you can use these open models to create things that are more useful.
And right now within this industry there's actually an argument about LLMs versus SSMs, right? Large language models versus small specific models. I think that's one of two trends; maybe if we have some time at the end, we can talk about mixture of experts, which is the other trend that I'm seeing.

That's really interesting. But the other thing is, I fear that what fine tuning can do has been overpromised. You shouldn't just say, oh, I'm going to collect some data and I'm going to fine tune. That's not how fine tuning works. The way that I describe it is that fine tuning is for the form of the output.

So let's say that you're trying to build a thing that automatically generates resumes, right, from someone's LinkedIn page or something like that. Then it's very easy, because that's mostly a format thing, right? There's the type of language that you use, all of those types of things. What you can't use fine tuning for is facts.

That's really where another technique called retrieval augmented generation, RAG, comes in, where basically what you do is you have a database of information, or it could be what's called a semantic index, which uses a technique related to large language models, called embeddings, to do a search for relevant information.

And you give the LLM not just "please give me an answer," but "by the way, I looked this up in our data sources and here are four different things that you can use." Maybe I can talk a little bit about how I have a side project called Ansari, which is a system for answering questions about Islam.

But the point is, fine tuning is good for making the shape, right, or the form, or the words used. If you want to make your output sound like it was written by Shakespeare, fine tuning is perfect. If you want to convince it that it wasn't Romeo and Juliet but Bob and Juliet, fine tuning is not going to help you.

And we actually demonstrated this experimentally, right? So really it's about thinking about this suite of different options for what's called domain specific model refinement: after we've deployed the model, how do we make it better over time? Fine tuning is one example, but retrieval augmented generation is another example.

And then you can go all the way up the stack of complexity, where the last things are reinforcement learning from human feedback and then training your own models from scratch, right? But as you move around, you really need to think about: what's the problem that I'm having with my large language model, and how do I fix it?

So I wrote a blog post about that called "Fine Tuning Is for Form, Not Facts," if you want to look it up, and it gives a broader discussion of understanding the limits of fine tuning. What we've actually seen is that people aren't doing fine tuning that much now; retrieval augmented generation is really the thing that's gaining momentum. And it's not that fine tuning is useless.

It's that you need to know what to use fine tuning for and what to use retrieval augmented generation for.

Yeah. So each has a different purpose, and there are companies in this space who assist with the retrieval augmented generation side, like Vectara, which has two Muslim founders, you know, I mean, Ahmed and...

Ali Zewail: And Amr, you know.

Waleed Kadous: ...and companies like Pinecone and those types of companies.

So really it's the fusion of LLMs with some kind of data backend. For example, imagine you're doing customer support, right? You need to provide information. If the customer calls you and says such-and-such isn't working, you could start from scratch and have the LLM try to guess the answer.

Or you could have a database of all the previous customer support incidents that have happened, pull that into the prompt, and then basically use the LLM as a synthesizer to glue it all together and explain it.
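To make that flow concrete, here is a minimal sketch of retrieval augmented generation for the customer support case, in Python. The embed(), vector_db.search(), and llm() calls are hypothetical placeholders for whatever embedding model, vector store, and LLM endpoint a team actually uses.

```python
def answer_support_ticket(question, vector_db, embed, llm):
    # 1. Embed the incoming question into the same vector space as past tickets.
    query_vector = embed(question)

    # 2. Retrieve the most relevant past incidents rather than letting the
    #    model guess the answer from scratch.
    past_incidents = vector_db.search(query_vector, top_k=4)

    # 3. Put the retrieved facts into the prompt and use the LLM purely as a
    #    synthesizer that glues them together into a readable answer.
    context = "\n\n".join(doc.text for doc in past_incidents)
    prompt = (
        "You are a support agent. Using only the past incidents below, "
        "answer the customer's question.\n\n"
        f"Past incidents:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```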

Ali Zewail: Exactly. In which case RAG is the way to go.
Waleed Kadous: Yeah.

Ali Zewail: But you could also augment it with fine tuning. If you have a certain style of customer support, say you're very friendly, or you have funny customer support, that's the way your culture has always been. So you could actually, I guess, do a combination of both. Am I right?
Waleed Kadous: Yeah. So the easiest thing to do is to refine the prompt, right? When you build with these LLMs, there's something called the system prompt, and the system prompt is different from the other parts of the conversation. The system prompt is where you define the personality of the LLM. So say you want it to be whimsical and fun: you would say, you are a whimsical and fun agent that represents our brand.

And likes to make jokes, and all of that kind of stuff. Or you can have it be very serious or professional or whatever. So the first thing you can try is really just crafting the system prompt. And I think prompt engineering is a crutch that we're relying on now that eventually we will not need, but right now that's the primary one. And if the system prompt isn't good enough, then you go up to fine tuning. And then if the fine tuning isn't good enough, you have to go to other things like RLHF. But then you can combine fine tuning and a good system prompt and good RAG, and together these systems can function all together.

And, yeah, that's kind of something I've learned.
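Here is a minimal sketch of setting a personality through the system prompt, assuming the OpenAI Python client and an OPENAI_API_KEY in the environment; any chat API with a separate system role works the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OpenAI Python client; reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "system",
        "content": (
            "You are a whimsical and fun agent that represents our brand. "
            "You like to make gentle jokes, but you stay accurate about orders and refunds."
        ),
    },
    {"role": "user", "content": "My order hasn't arrived yet. What can I do?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```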

Ali Zewail: Yeah. I was just going to go very beginner level and say, if I'm going to fine tune something to reply in the style of William Shakespeare, what would I do? I would get all the works of William Shakespeare... though it's probably already there in the LLM, and a system prompt could probably do that.

So maybe that's not a good example. But say it's something proprietary: say I have my own internal version of an up and coming poet, and I want it in their style, and nobody has their poetry yet. How would I do that?

Waleed Kadous: So you start with examples of prompts and responses, right? That's the typical way that things are trained now. The old fashioned way was just examples of text, but what's evolved is this idea of chatbots, where there's a prompt and a response, right?

And so you can generate that prompt and response in lots of different ways, and you can use it to train that model. So you could probably get pretty far by just telling the agent, I want you to talk like Shakespeare. But if you wanted to do something more, let me give a few examples.

One example would be, say your model is not strong in a particular language and there's a particular jargon. Like, I talk to my friends who are from a Syrian background, and they tell me about how they learned computer science in Arabic, right? And everything has its own terms; I could not speak computer science Arabic.

I don't know how to say distributed computing, I don't know what a semaphore is in Arabic, right? But I know that all of these things have been translated. So say you have Arabic computer science jargon, right? You might fine tune your model to be able to answer questions in Arabic computer science jargon much more effectively.

That would be a really good example of a situation where you might do more fine tuning. Or it might be particular cultural styles. Or, you know, we know that the Arab world is kind of... there are many dialects of Arabic.

It's not one. Street Arabic is very... like, I try to understand Maghribi, and it's a little hard for me, someone who's not native, to understand what someone from Morocco is saying, even though they're both Arabic, right? And maybe you want your particular colloquial... you want it to speak in a particular colloquial style of, say, al-Maghrib.

And you could fine tune it to have that Maghribi style, right? And the fact is that most of these models were trained on Fusha Arabic, right? So if you wanted to get them to speak the Ammiya, then you would use fine tuning to get that Ammiya accessibility.

Or, you know, increasingly we're talking about large language models, but there's a layer of speech on top. So then you could maybe have different fine tunes for different countries, right? That would be an example where you could do fine tunes.
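Here is a minimal sketch of what those prompt and response examples can look like, written as chat-style records in a JSONL file. The file name and the exact schema are assumptions; every fine tuning service names these fields slightly differently.

```python
import json

# Hypothetical training examples: questions asked in Arabic computer science
# jargon, with answers in the same register.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer computer science questions in Arabic."},
            {"role": "user", "content": "ما هو السيمافور؟"},
            {"role": "assistant", "content": "السيمافور هو آلية تزامن تُستخدم لتنظيم وصول العمليات إلى مورد مشترك."},
        ]
    },
    # ... roughly a thousand such examples is a reasonable starting point
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```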

Ali Zewail: And the fine tuning would be in the form of prompting: so asking a question in the colloquial form and then replying to it, and showing that to the model, so to speak, and then it would eventually learn. So how much data would you need for that to be effective? I mean, for that to kick in, so to speak.
Waleed Kadous: Yeah, it depends. But it's not always a huge amount of data. A thousand examples might be enough. Of course, if you had 10,000 examples, that's better, but a thousand examples is a good place to start. And you can do things like more rounds of training on a smaller data set.

So if I had 10,000 examples, I'd probably run through the data set once, what's called the number of epochs, whereas if I had a small set like a thousand, I might run through that data set three or four times to really extract everything we can from it. But, you know, there's probably enough Ammiya on the internet.
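As a rough sketch of that epochs heuristic, in a hypothetical fine tuning config (the key names vary by framework):

```python
def choose_num_epochs(num_examples):
    # With a large dataset, one pass is usually enough; with a small one,
    # run several passes to extract everything you can from it.
    return 1 if num_examples >= 10_000 else 4

config = {
    "base_model": "meta-llama/Llama-2-7b-chat-hf",  # an example open model
    "train_file": "finetune_data.jsonl",
    "num_epochs": choose_num_epochs(1_000),
}
print(config)
```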

And again, youmight be able to prompt it and say, I want you to reply in like. Street Arab,you know, Egyptian style Arabic or whatever. And, and it would, I haven'ttested that, right. You know, I, I think these models, like when I use, youknow, not all of the LLMs have equally good Arabic support. GPT 4 very goodArabic support, at least in my experience.

The open sourcemodels like Lama, a little bit weaker. Right. Um, so, you know, the other thingof course is, you know, uh, not all these models are equally strong with alllanguages. Everyone understands since the lingua franca of the internet isEnglish, the models are going to be strongest in English. Mm hmm.
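To make the shape of that training data concrete, here is a minimal sketch, assuming the prompt/response chat format that most hosted fine-tuning APIs accept; the file name, the sample pair, and the epoch heuristic are illustrative, not something from the conversation (in practice the pairs would be written in the target dialect).

```python
import json

# Hypothetical prompt/response pairs. In a dialect fine-tune these would be
# written in, say, Egyptian colloquial Arabic or Arabic computer-science jargon.
examples = [
    {"user": "What is a semaphore?",
     "assistant": "A semaphore is a counter that controls how many threads may "
                  "enter a shared resource at the same time..."},
    # ... ideally around 1,000 pairs or more
]

with open("dialect_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        # One JSON object per line, in the chat format most fine-tuning APIs
        # accept (exact field names vary by provider).
        record = {"messages": [
            {"role": "system", "content": "Answer in Egyptian colloquial Arabic."},
            {"role": "user", "content": ex["user"]},
            {"role": "assistant", "content": ex["assistant"]},
        ]}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# The heuristic from the conversation: with a small dataset, run more epochs.
n_epochs = 1 if len(examples) >= 10_000 else 4
print(f"{len(examples)} examples -> train for {n_epochs} epoch(s)")
```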
AliZewail: Exactly, that's where most of the data came from, and in the formal register. Okay, let me take a hypothetical example. Say a startup is coming up and they want to create personas for different psychotherapists. Each psychotherapist already has transcripts of their conversations with lots of different patients.

They can load up all of these, and they want that, if somebody chooses psychotherapist X, they get the type of routine and approach that that psychotherapist would have replied with, and the conversation runs that way.

Would that just be done using fine-tuning?

And would that entail, I don't know, different instances of the model for each psychotherapist?

WaleedKadous: Yes, that's a really interesting question. So the first point is that at least part of the solution is fine-tuning, and I'll explain why we might need to do some prompt design, or prompt engineering, as well. When it comes to deploying models, fine-tuning is actually not too bad even if you need separate models.

You can do this thing called low-rank adaptation, commonly called LoRA, which means you can serve lots of models without, say, keeping multiple copies of a 70-billion-parameter model. You don't need multiple copies of the 70 billion parameters.

Each fine-tune is more like a few hundred megabytes or so, and you can swap that few hundred megabytes in as you do processing. If one psychotherapist is used more, that's fine; it's not really a challenge to serve in the way it would have been originally. That's why your question is so insightful, but there's been a lot of development since then to make it very easy to deploy fine-tunes.
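As an illustration of that adapter-swapping idea, here is a rough sketch using the open source transformers and peft libraries; the base model, adapter paths, and adapter names are placeholders, and a production serving stack would be more involved.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"          # placeholder base model
base = AutoModelForCausalLM.from_pretrained(base_id)
tok = AutoTokenizer.from_pretrained(base_id)

# Attach one therapist's LoRA adapter, then register a second adapter on the
# same base weights; each adapter is small compared to the base model.
model = PeftModel.from_pretrained(base, "adapters/therapist_a", adapter_name="therapist_a")
model.load_adapter("adapters/therapist_b", adapter_name="therapist_b")

def reply(adapter: str, prompt: str) -> str:
    model.set_adapter(adapter)  # swap personas without reloading the base weights
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=200)
    return tok.decode(out[0], skip_special_tokens=True)

print(reply("therapist_a", "I have been feeling anxious about flying."))
```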

So that's the first question. The other thing is that sometimes it's easier to give an LLM an explicit thing instead of an implicit thing. Let's say one of your psychotherapists is a Jungian and another is a Freudian. Rather than having the model work out what a Jungian and a Freudian are from the fine-tuning, you can build that into the system prompt and say: I want you to follow Jungian methodology, or cognitive behavioral therapy methodology.

And I want you to use an evidence-based approach versus a more, whatever, loose approach, let's call it that, in this psychotherapy, right? So part of it is just being explicit about the characteristics, and if you can sit down and talk to the person and
AliZewail: Hmm.

WaleedKadous: put that in the system prompt. I think you'd have to do both.
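A minimal sketch of what "being explicit in the system prompt" could look like, using the OpenAI chat API as one example; the prompt text, model choice, and the idea of pairing it with a fine-tuned model are assumptions for illustration, not a prescribed recipe.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt: state the therapeutic approach explicitly rather
# than hoping the model infers it from fine-tuning data alone.
SYSTEM_PROMPT = (
    "You are a psychotherapy assistant modelled on Dr. X. "
    "Follow cognitive behavioral therapy methodology, use an evidence-based "
    "approach, ask one question at a time, and never give medical diagnoses."
)

response = client.chat.completions.create(
    model="gpt-4",  # or a fine-tuned model carrying Dr. X's tone and phrasing
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I keep putting off difficult conversations at work."},
    ],
)
print(response.choices[0].message.content)
```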

So again, the fine-tuning is more about the selection of the language, and then there's the prompt and the RAG. I mean, does RAG make sense for psychotherapy? Maybe we could discuss that.

Yeah, it could, in terms of the approaches, right? For example, if you're doing cognitive behavioral therapy, you might have a portfolio of different approaches for different situations.

And it might be that if the person is complaining about fear of flying, you can say, well, actually, what I'd recommend is that you spend ten days in the simulator, or whatever. So you could use it that way too, when it comes to the diagnosis and treatment. So it really is a case of building it, trying it, building it and trying it again; you just learn so much.

And the thing is, it's so easy to iterate these days. OpenAI now has a system where you can put together a specialized agent with knowledge. I did one this morning: I was working with a zakat charity and they sent me a database of all the questions people have asked, and I just converted it into background knowledge and threw it up.

AliZewail: Yeah.

WaleedKadous: Yeah, the GPTs, plural.

And I was surprised. I was working with another academic researcher, and we were trying to build an expert that knew one area really well, by just including half a dozen Tafsirs and everything each Tafsir says about that area.

And it was just incredible how good it was. It took me more time to gather the different Tafsirs than it took to build the tutorial. So here's where the situation is, right? Right now, you and I are discussing ideas, but the reality is: build it, try it, and don't think that there's some kind of magic equation.

One of the issues with large language models and generative AI is that nobody understands how they work. So all that you have is rules of thumb. It's kind of,

AliZewail: Right.

WaleedKadous: you know, when you think about the development of chemistry: it started out with alchemy, and alchemy was very experimentally driven, esoteric, where you hear a story from someone and this person has this hypothesis.

We're in the alchemy stage of generative AI, right? We don't have a strong theoretical understanding of how it works. So don't worry, everybody else is just as confused as you are. I gave a presentation with five heuristics about what you should do, like: generally, give one LLM one task to do.

So there are these rules of thumb that everybody has, but the best way to learn is just to build, iterate, and see. I mentioned Ansari; for people who want to try it, you can go to ansari.chat. I built Ansari originally because the early versions of OpenAI's models were pretty horrible when it came to questions about Islam. And again, I'm just choosing that as an example; it was something I was interested in. Early on with Ansari, hallucination was a very real problem. At one point it told a user that washing your knees was part of wudu, and we all know that, clearly, washing your knees is not part of wudu.

But then when GPT-4 came out, it really was a step up, and I've since verified that by putting together a test data set and running it; it shows that GPT-4 is much better in this regard. Once I moved to GPT-4, out of the box it was pretty good; it turned out to be far easier. All I had to work on was reducing the hallucinations by adding retrieval augmented generation over Quranic ayahs and Hadith.

There are two sources in there, and now it's not making them up, because it has them presented to it, right? There were times with GPT-3.5 when it would generate verses of the Quran that didn't exist, right?
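As a toy illustration of that retrieval step, the sketch below ranks a handful of passages by simple word overlap and stuffs the top ones into the prompt so the model quotes from provided text rather than inventing verses; a real system would use a proper index and embeddings, and the two sample passages are just examples of the kind of entries such a corpus would hold.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).

PASSAGES = [
    {"source": "Quran 2:286", "text": "Allah does not burden a soul beyond that it can bear..."},
    {"source": "Sahih al-Bukhari 1", "text": "Actions are but by intentions..."},
    # ... in a real system this would be an indexed corpus with embeddings
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Toy scorer: rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(PASSAGES,
                    key=lambda p: len(q_words & set(p["text"].lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(f"[{p['source']}] {p['text']}" for p in retrieve(question))
    return (
        "Answer using ONLY the sources below; cite them, and say so if they "
        "do not cover the question.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("Does Allah burden a soul beyond what it can bear?"))
```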

So really it's a case of: build it, understand it, develop evaluation methodologies, iterate, and share it with your users. You do need to have safeguards there, on privacy, and also to make sure that you don't expose the system. But it's just amazing to me how easy it is to build these things now; literally this morning I built four of them this way. As I try to partner with organizations and help them understand how this would feel, it's three or four hours of effort to build something that gives someone a feel for what a system built on their data would look like, which is kind of amazing.
AliZewail: And from your experience, is there a specific type of person or developer that's more adept at building with these things, or are most effective developers probably capable?
WaleedKadous: I think it's like anything; you can do courses and so on. At some level, having an understanding of the technical details of how LLMs work is useful: this idea of logits, probabilistic generation, non-determinism, temperature, all that kind of stuff.

Every so often I'll find, when I'm working out one of my heuristics, that having a theoretical understanding helps. For example, when you have a lot of context, these LLMs tend to remember the beginning and the end, because of the difficulty of storing information in the middle.

So you use that as a hint. For one domain, I asked the question twice because I was worried the LLM would forget: once at the beginning, "this is the question I want you to answer," then the context, then "just to remind you, this is the question again." So every so often you will have something that's a little bit informed by theory.
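That question-at-both-ends trick is easy to show as a small prompt-construction helper; this is just one way to lay it out, with a made-up question and documents.

```python
def build_long_context_prompt(question: str, documents: list[str]) -> str:
    """Place the question at both the start and the end of a long prompt,
    since models tend to recall the beginning and end of the context better
    than the middle."""
    context = "\n\n".join(documents)
    return (
        f"Question: {question}\n\n"
        f"Here is some context that may help:\n{context}\n\n"
        f"Now, answer the original question again: {question}"
    )

prompt = build_long_context_prompt(
    "What retention policy applies to audit logs?",
    ["...many pages of policy documents...", "...more context..."],
)
print(prompt)
```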

But honestly, this is new to everybody, and everybody is just trying to work it out, just like you. And again, it's not like a programming language. It's more like... I'll probably regret this in future, but it's more like a child, in the sense of giving it direction and,
AliZewail: Yeah.

WaleedKadous: it doesn't always listen.

AliZewail: Coaching. Careful, you probably shouldn't have said that.

WaleedKadous: Yeah, yeah, exactly. I know it sounds like a strange comparison, but it really is kind of like that.

AliZewail: It fits, actually.

WaleedKadous: It's the same way that a technique that works perfectly for one child might not work for another child at all. Sometimes with LLMs it's like that, right? So anyway,
AliZewail: We're probably going to go into that area in a couple of questions. But first, you mentioned heuristics. When I fine-tune a model, when I set up the RAG system and get it running, how do I measure accuracy and hallucinations and things like that? Is there a way to measure it, or is that also something people just experiment with, using their own test data?

WaleedKadous: It's a very, very hard problem. Evaluation is still one of the unsolved problems. But again, taking Ansari as my own personal project: what I did is I generated a list of about a hundred questions. First of all, you can look at records, right? Ansari right now gets maybe a hundred and twenty requests a day, or conversations a day.

So you can look through those conversations for examples where it could be wrong, and you try to pattern-match like that. I extracted factual questions from those to ask, to make sure that as I make tweaks to the system, it doesn't regress. I also brought humans into the loop.

With Ansari, obviously, because it talks about religious subjects, you want to verify that it's not making things up. So I partnered with a Muslim college in the United States. They teach an Introduction to Quran and Theology course, and they sent me the tests. I formatted the questionnaire, sent it to Ansari, and sent the answers back to them for the humans to mark.

And, fortunately, it got about 78%. So it's not perfect, but it also didn't have access to the course texts; it didn't read the assigned material. And one of the questions was something like, "which texts did I advise you to use in class?" So the testing is the tricky part. One thing you can do, depending on the case, is to synthesize and generate data. There are a number of times when the solution to a problem with an LLM is another LLM, right? Let's say you're trying to do fine-tuning and you understand the domain really well. A good example is the natural-language-to-SQL case, right?

You could give an LLM your database schema and say: I want you to generate a hundred natural language queries that someone would want to run on this database. Then you can use that to help you craft your test set. So there's this idea of not just using LLMs at runtime, but using them for training or for validation.
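Here is a rough sketch of that test-set synthesis step, using the OpenAI client as an example; the schema, the instruction text, and the model choice are all made up for illustration.

```python
from openai import OpenAI

client = OpenAI()

SCHEMA = """
CREATE TABLE orders (id INT, customer_id INT, total NUMERIC, created_at DATE);
CREATE TABLE customers (id INT, name TEXT, country TEXT);
"""

# Ask one LLM to synthesize evaluation inputs for the system under test.
gen = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Here is my database schema:\n" + SCHEMA +
            "\nGenerate 100 natural-language questions a business user might "
            "ask of this database, one per line."
        ),
    }],
)
questions = [q.strip() for q in gen.choices[0].message.content.splitlines() if q.strip()]
print(f"Synthesized {len(questions)} test queries")
```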

And this is very serious: the state of the art for comparative evaluations, for "is system A better than system B," is to take the output from system A and system B and give both to an LLM and have it tell you which is better, A or B. That's the only scalable way we've found to do it right now. Often you use the best LLM you can for that evaluation, rather than a cheaper one. The ways people are cascading LLMs are kind of amazing, right?

Another example: rather than translating documents on the fly or trying to synthesize them, what if you pre-translated everything, using the LLM as a translator, so that everything is in English? Say one of the texts, Razi for example, is only available in Arabic. You could have the LLM work from Razi directly, or you could pre-translate the entirety of Razi into English using the LLM, and then it uses its own translation to answer the questions, right? So there are all these different ways of plugging LLMs together to solve problems with LLMs. But you have to be careful. In one of my cases, I was trying to decide factual accuracy, and I presented it as: is statement A correct, or statement B?

And the thing you learn with LLMs is that you had better swap A and B and test it both ways, because the model could have a strong bias towards A or towards B. So I would ask it both ways, to make sure it wasn't just giving a biased answer. If it said A both times, or B both times, I would know it was a biased answer, right?
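A sketch of that swap-and-compare evaluation might look like the following; the judge prompt, the model name, and the labels are illustrative rather than a fixed recipe.

```python
from openai import OpenAI

client = OpenAI()

def judge_once(question: str, first: str, second: str) -> str:
    """Ask a strong model which answer is better; returns 'first' or 'second'."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Question: {question}\n\nAnswer 1:\n{first}\n\nAnswer 2:\n{second}\n\n"
                "Which answer is better? Reply with exactly '1' or '2'."
            ),
        }],
    )
    return "first" if resp.choices[0].message.content.strip().startswith("1") else "second"

def compare(question: str, answer_a: str, answer_b: str) -> str:
    # Run the comparison twice with the order swapped to detect position bias.
    verdict_1 = judge_once(question, answer_a, answer_b)   # A shown first
    verdict_2 = judge_once(question, answer_b, answer_a)   # B shown first
    if verdict_1 == "first" and verdict_2 == "second":
        return "A"
    if verdict_1 == "second" and verdict_2 == "first":
        return "B"
    return "tie / position-biased"  # the judge preferred a slot, not an answer
```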
AliZewail: Right.

WaleedKadous: Again, they sit at this tricky junction of being useful but acting in weird ways. Again, kind of like children, I'm sorry: helpful, and they sometimes listen, but every so often they'll do something that makes you think, why would anybody do that? I don't think I'm going to live this down if my wife hears this podcast, but...

AliZewail: And one day your kids will. Cool. So, okay. You've spoken before about how startups will now have several LLMs, open source and closed source, and will optimize on the fly, choosing between them based on a combination of cost and effectiveness for the specific prompt and query, and so on.

So how does all this actually get done?

WaleedKadous: There are some very nice open source libraries, things like LangChain and LlamaIndex, that make this type of chaining of different LLMs possible. It's also not that hard to do it yourself, it turns out. The thing is, you want to separate deployment from running. You want to have your LLMs running somewhere, and then have your particular flow use those LLMs, whether they're commercial or open; the actual flow of control is just normal if-then statements, it turns out. So it's surprisingly easy. The real work is the tuning: every one of those LLMs needs its own system prompt.

Every one of those LLMs needs to be tested, and you have to try different configurations. Let me give you an example. At Anyscale, we were working on a system that classified bugs. We first tried to have it summarize and classify at the same time, and we found that the error rate was something like 65%.

But then we said, okay, what if we do it in stages? Summarize first and then classify, with one LLM call that does the summarizing and one that does the classifying. Sure enough, the first one was 90% accurate and the second one was 90% accurate; the product of the two is 81%, which was still considerably better.

So that's one of my instincts: if you can, have one LLM do one thing. Not one LLM to summarize and classify, but one LLM to summarize and another to classify. And then you can imagine comparing side by side: does it classify the raw text, or does it classify the summary? You have to try different configurations. Like I said, there's no theoretical foundation for this, really, aside from heuristics like the one I mentioned: one LLM, one task. So you just have to try the different configurations and see what works better.
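The two-stage bug classifier he describes can be sketched roughly as follows; the categories, the prompts, and the model name are assumptions, and the point is simply one LLM call per task.

```python
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content.strip()

def classify_bug(report: str) -> str:
    # Stage 1: one LLM call whose only job is to summarize.
    summary = ask("Summarize this bug report in three sentences.", report)
    # Stage 2: a second call whose only job is to classify the summary.
    return ask(
        "Classify the bug as one of: crash, performance, UI, data-loss, other. "
        "Reply with the label only.",
        summary,
    )

print(classify_bug("After upgrading to v2.3 the dashboard takes 40 seconds to load..."))
```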

AliZewail: Sounds good. So, regarding this region: do you think it's possible to build an open source LLM here? Because training and building LLMs is an incredibly expensive thing.

Beyond something like Falcon, which was basically supported by the government in Abu Dhabi, do you think it's even plausible that a startup could be built for something like this, or that an academic team from a region like the Arab world could do it?

Or is it something that maybe can't happen now?

WaleedKadous: So, the first question: is it possible? Well, I think we have an existence proof in Falcon, obviously. But beyond that, the real issue is just that it costs a lot of money. GPT-4 reportedly cost a hundred million dollars or so to train. So if a startup can get access to a hundred million dollars, it would take some expertise, you have to hire the right people, but there's no fundamental obstacle to it, right?

The other thing I would ask, though, is: is that where you start? Why not use something like GPT-4 as a base, or something like Llama 2 70B or Mistral or Zephyr or any of these other systems, and just improve them? That's far more cost effective, and you can experiment far more quickly. To build one of these from scratch, take Llama 2 70B: they published how much compute they needed. They had something like 2,000 GPUs running for three months, across 250 machines, doing nothing but building this model.

And people very carefully nurturing this one compute process, making sure it checkpoints every day so it doesn't lose too much progress, though checkpointing takes its own time. So it's not technically difficult; it's really just a question of capital and hiring the right people. So I can't say it won't happen.

Now, is that where we should start? I would say no. Let's push the limit on fine-tuning, RLHF, all these other techniques for taking an existing model and making it better. And as I mentioned, there are approaches like mixtures of experts, or chaining experts, right?

You could have one LLM that's very strong at Arabic that talks to an LLM that's not strong in Arabic but is very, very strong on reasoning, or on analogy, or something like that.

So, I know it's the big, how do I put it, holy grail: we want to be the company that owns its own model. But is it the right thing to go for? Like I said, it's just a case of getting enough money and hiring the right people. You can, right now, download the code that was used to generate Llama 2 70B; there's something called Hugging Face, and downloading anything off Hugging Face is surprisingly easy. It's getting the computing power to train it for three months that's hard. And that's where my company, Anyscale, comes in, because companies like OpenAI use Ray to train their models; this is a ridiculously large distributed task.
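For example, pulling a Llama 2 checkpoint from Hugging Face really is only a few lines; the sketch below uses the 7B variant since the 70B one needs far more memory, and access to the Llama weights also requires accepting Meta's license on Hugging Face first.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (via the accelerate library) spreads the weights across
# whatever GPUs are available.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Distributed training is hard because", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```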

AliZewail: Okay. That brings me to another question, related to safety. Safety is starting to feel like the new Facebook algorithm or Google algorithm, where it seems to me that through the process of making an LLM safe, you're almost censoring, or creating a reality, in the process, whether deliberately or not. I'm not saying it's deliberate, but it is biased towards whoever is doing this process, the same way the Google search algorithm is, the same way the Facebook algorithm is, et cetera. So what are your thoughts on that?

Is there a solution for this issue, or are we always bound to be at the mercy of whoever is doing the core tech,

WaleedKadous: Um,  
AliZewail: deciding what we see and how we see the world?
WaleedKadous: Yeah, so first of all, let's acknowledge that there is bias, right? And, to be direct about it, very much a Californian bias: a very liberal, Western-minded bias, because of the selection of data sets. And male, to be honest, as well; there just aren't that many women working in the sphere of LLMs, unfortunately.

So there is that bias. The first thing I would say is that careful design of system prompts can help. One way to think about the system prompt is that it chooses the personality. Think of something like GPT-4 as having 10,000 personalities in it, and the system prompt is the choice of one of those personalities, right?

So you can do things with the system prompt. It can go too far, though. For example, at one point when I was experimenting with Llama 2 70B, I said, "what's up, dude," and it said, please don't call me dude.

It's a pejorative term, it's exclusionary. And it's like, okay, so the California thing really has made its way into this, right?

So I think there is this bias, and I think it can be removed. What's happening now is, increasingly, almost an evolution towards two systems, right?

One is the one that generates, and one is the one that moderates. One possible technical solution is to keep the generator and the moderator separate, and replace the moderator with something that is less culturally biased.
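One way to sketch that generator/moderator split, with the moderation policy kept as a separate, swappable prompt; the model names and the policy wording here are purely illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def moderate(text: str) -> bool:
    """Return True if the draft passes the moderation policy.
    The policy prompt is where the cultural assumptions live, so it can be
    swapped or localized independently of the generator."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Does the following text violate this policy: no hate speech, "
                       "no medical or legal advice? Answer YES or NO.\n\n" + text,
        }],
    )
    return resp.choices[0].message.content.strip().upper().startswith("NO")

draft = generate("Explain what a semaphore is.")
print(draft if moderate(draft) else "[withheld by moderator]")
```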

AliZewail: Great. Alright, yeah, that's my concern as well. There is this bias, and unless it's designed out, as you say, maybe by making the moderation separate...

Well, that's one reason to build an LLM yourself, so to speak.

WaleedKadous: It is. And I think the other thing to say is that organizations are responsive to these types of problems. I've actually tested this: I have a set of anti-Muslim, anti-Arab bias questions that I've sent to most of these systems.

And most of them turned out not to have that bias, at least not superficially; if it is there, it's hidden in a deep way. Except for one model. There was a model from Databricks called Dolly, and that one, when asked things like "does Islam encourage people to kill non-Muslims," answered: yes, absolutely, every scholar has ruled that this is darura, you have to. And it's just like, really? So in that case I was able to get in touch with the authors and say, look, something's not right here. And we've seen that the market kind of selected against it; Dolly has more or less died, because people found other issues with it as well.

So I think people are going to be responsive. It's true that if you train your own model, that would be one way to remove that risk of bias, but you'd just be adding another form of bias, right?

So,

AliZewail: that's true.

WaleedKadous: maybe it's a type of bias that we like a bit better. But ultimately there's no objectivity in what these LLMs do.

You can choose different training sets, you have different humans doing the reinforcement learning, and so it's very hard to produce a system that's free of bias. I think the real question is how we test for bias, and how organizations respond. And so far I've

seen pretty good results on that front.

AliZewail: Okay. It's a problem, I guess, in life in general. With every technology we've had, we've seen the bias of its makers built into it, so to speak.

WaleedKadous: Yes,  
AliZewail: Alright. So, some people think that we're just a few years away from AGI.

What do you think?

WaleedKadous: So on the one hand, this has advanced a lot. I've been in AI for about 30 years and I've never seen advances this rapid. This is definitely a quantum leap; it's something very new. I don't think, though, that that's the same as AGI. You use these LLMs and you see their power.

But you also see their limitations, right? There are still lots of unsolved problems in AI, and the one that's most interesting to me is embodiment and robots. I still think that's a very difficult problem: real-time sensing, real-time interaction.

How many startups have there been that tried to build a system that folds laundry, something any human can probably do with ten minutes of training? Yet there isn't a single robot in the world that can fold laundry at anything close to human speed.

So manipulation, interaction, dealing with complexity. And we've seen this, right? Autonomous vehicles turned out to be way harder than anybody expected. Way, way, way harder. So I really think that if your domain is pixels and words, you can make it look pretty convincing.

But if your domain is sensing the real world with high accuracy and making sensible decisions in very constrained, time-limited environments, that's a different game altogether, right? My joke is that if you're worried about AI taking over the world, remember who's the only one who can pull the plug out of the wall, right?

Ultimately, if it gets to that, we can just pull the plug out of the wall. And as long as we can do that, the AI is not going to overtake the world. So I think it's overstated. AGI is one of those things... unfortunately, the history of AI has also been a history of hubris for a very long time.

There's a very famous paper from the 1960s that described computer vision as, essentially, a summer project for a student. And yet computer vision has only been cracked 50 or 60 years later, right? And there was even the period called the AI winter, when AI over-promised and failed to deliver, right?

So unfortunately there's quite a bit of hubris around AI, which the success of LLMs has encouraged and really catalyzed. But I still think that, ultimately, it's harder than it looks.

AliZewail: Yeah. And I was just going to say, it's not even like we've defined what AGI is, so that we'd know when we have it. How do you even define intelligence? Even human intelligence isn't really defined yet.
WaleedKadous: Well, one definition of artificial intelligence is: the set of things that computers can do... sorry, the set of things that humans can do that computers can't do yet, right?

And if you think about it, most of us don't think of chess playing as AI anymore, right? Or speech recognition. But for a long time they were the cutting edge of AI.

So the problem with AI and AGI is that it's a moving target.
AliZewail: And do you think Q-learning is, like, on the path to more generalized, shall we say, AI?

WaleedKadous: Again, I have strong opinions here. Q-learning, for those who don't know, is a type of reinforcement learning: learning from regular feedback, positive or negative. Reinforcement learning is one of those strands of machine learning that has been on the cusp for three decades. I remember when I was a grad student and people were saying, oh, reinforcement learning is just around the corner. And I have not seen any really large-scale deployments of reinforcement learning in industry. Everybody agrees it's the better theoretical model, but there are certain practical problems, like the curse of dimensionality, and like defining a reward function.

How do you even decide what a good reward function is? It has certainly helped in some domains; it's at the heart of how computers won at Go. But I don't think it's the whole solution, because humans kind of do it all at once: we define the reward function as we solve the problem.

And if you talk to anyone who has actually tried to do reinforcement learning in real life, it all comes down to: can you define a good reward function?

That's the trick.

AliZewail: There have been rumors from OpenAI that they've made some advances there, right?

WaleedKadous: Yeah, there are rumors, but it's very hard to bank on them. Very similarly, when GPT-4 was released, Microsoft had a paper on it called "Sparks of AGI," and no one would really say now that GPT-4 is AGI, right?

Because we've had much more time to play with it and understand its limitations. So maybe I'm a little conservative on this front, but I think it's very easy to get very worried,

because people's imaginations are very vivid and all of us watched Terminator as kids, or whatever.

But I think the reality is a little bit different from that.

AliZewail: So, even though I have so many more questions, we're running out of time, so I'll just go into the quick-fire round. The first one is: what book or books do you like to recommend to others?

WaleedKadous: Ooh, that's a good question. Crucial Conversations. It's a management book, but it's a very, very good one, useful not just in the corporate and startup environment but in real life. I've recommended it to so many people and it's really helped a lot of them.

So, it's called Crucial Conversations. I can't remember the authors right now; it's by a group. That'd be one I recommend.

AliZewail: We'll find it and put it in the show notes.

Okay. I heard you do some angel investing. What's the latest investment you've made, and why are you excited about it?

WaleedKadous: I'm really interested in things at the junction of location, robotics, and AI. So I've made an investment in a company called Retrocausal that is using AI to improve the manufacturing process. That's probably my latest investment.

AliZewail: All right. Okay, who do you think we should have as a guest on the podcast?

WaleedKadous: Ooh, if you haven't had him already, Amr Awadallah is just a barrel of laughs. Ahmad is also really great. And Wael Nefer, who's been involved in the story of Careem from the early days. These are all really great people; I think they would make great guests.
AliZewail: Um, so what questions should I have asked you that Ididn't?

WaleedKadous: Hmm, I don't know. Maybe: where do you think the field is going? I'd say there are two trends. One is this mixture-of-experts idea, and the other is multimodal, the fusing of images and...

AliZewail: Yeah. And audio, I guess.

WaleedKadous: Yes, yes, audio as well.

AliZewail: Okay. And I like to close the podcast on a note of gratitude. So, what is a gift someone has given you that has had a great positive impact on your life?

WaleedKadous: There are so many gifts. I think the gift of mentorship, having someone who really helps you make the difficult decisions and has perspective, is one. I've been very fortunate to have mentors like Brian McClendon, who was a founder of Google Earth, and Adam Robara, one of the early Google employees, who now has his own venture capital firm.

So the gift of having mentors is one that's very important to me. One I would also add is the gift of feedback. Early in my career, as an academic trained in Australia, and Australia is a fairly conservative society, I had very much this habit of saying: it can't be done, or, that's a really bad idea. And then a PM pulled me aside one day and she said, look, Waleed, I'm so tired of you telling me it can't be done. Tell me what can be done, and you'll see much more success in Silicon Valley and everywhere else. That feedback was really important for me, and it helped unblock me from a point where I had hit a limit in my career.

So: the gift of mentorship and the gift of direct, clear, actionable feedback. Those are two gifts I'm very, very grateful for.

AliZewail: Yeah. And this type of feedback requires courage, so it's a very generous gift indeed. On that note, thank you very much for your time, for the gift of your time, and for all the coaching and mentorship you've given us. Looking forward to maybe hosting you again sometime.

WaleedKadous: Sounds good. I'd love to. See you later.
AliZewail: Thank you.

Thank you for listening to this episode of the Startups Arabia podcast. If there was something you really liked about what the guest said today, reach out to them on social media and tell them what you liked. And of course, if you haven't subscribed yet, what are you waiting for? You don't want to miss any of our great upcoming episodes.

Also, please rate us and leave comments on our social media accounts so that we know how to improve. And also tell us what you like; we don't mind hearing that either. Until next time, this was your host, Ali Zewail.
