Monday’s episode with Matteo Franceschetti, Co-Founder & CEO of Eight Sleep:
Download the transcript here:
Here are Harry’s Top Takeaways!
1. Five Questions to Ask Every New Hire:
How did you find your previous jobs & what were you hired for?
Biggest achievement?
Lowest point?
Who is your manager and what are they going to say about you in the next reference check?
Why did you leave?
2. Two Secrets to the Hiring Process:
First, you have to get comfortable hiring slowly.
Second, anyone who wants to join us has to do a project that they present to a panel.
No one is too senior: if you want to join, you do the work.
3. How to See the Best Talent in Market:
- Measure each exec's top of funnel: how many interviews do they do a week?
- Hire 1 in 100. If an exec says they cannot hire for a role after just 30 interviews, it's simple math: if you didn't see 100 people, you're not going to hire.
4. Two Secrets to Hiring Senior Execs:
They have to have two core elements:
- Worked at a company during the same market conditions. A recession is very different from a bull market where spend flows freely.
- Optimised for the same core objective in their prior role. Growth and capital efficiency are very different objectives.
5. What Makes the Best Leaders:
Your job as an executive is not to be loved.
Your job is to help your people achieve more than they believed was possible.
6. The Most Important Thing a Manager Can Do:
People should always know what you think of their performance.
This is how you avoid problems in the future and allow them to course correct.
Develop a relationship of clear, direct, and immediate feedback.
7. Bigger Teams = Bigger Problems:
The pain that you go through if you hire the wrong person is 10x bigger.
We were becoming slower as we were adding people, and the quality was going down, which is really the worst.
8. What Makes a Good Memo:
The best are short, data driven, and clear.
Structure:
Executive summary with key data.
Points to be discussed, then OKRs; goals are monthly.
Action items that the team takes live.
Who is accountable? What is the deadline? Done.
LiveFlow is the hottest name in FP&A right now. LiveFlow integrates and customizes all of your financial reports from your accounting system directly into your spreadsheet. LiveFlow has a library of over 150 financial models that are completely plug-and-play, and they’ve also just released a brand new consolidation product that completely automates your consolidation in 10 minutes! Learn more here: https://bit.ly/3S0rQ0V
Wednesday’s episode with Jeff Seibert, Co-Founder & CEO of Digits:
Download the transcript here:
Here are Harry’s Top Takeaways!
1. LLMs Will Be Commoditised:
We will have an open-source equivalent.
Meta is highly motivated to open source its work. Many want to run and tune these models themselves. That is hard and expensive, and nothing in tech stays hard and expensive for long. It will be commoditised.
2. Will LLMs Specialise?
There will be an open-source base LLM.
There will then be tools to fine tune it easily.
Training an LLM is hard and expensive. Fine-tuning is much easier with high-quality, focused data.
3. What Startups Will Be Killed by Large LLMs:
Many will be killed; they are thin wrappers on top of LLMs.
Two types are at risk: those whose primary product is scripting GPT, and those built in 2-4 weeks.
The value of high-quality proprietary data to fine-tune models has never been greater.
4. OpenAI Will Evolve into an Infrastructure Play:
They will host models and allow you to fine tune them.
The large LLM providers will not build vertically specific products.
If you are working on a horizontal problem, it is likely one of the large LLMs will come for you in time.
5. Why Apple is the Winner in the World of AI:
They can pioneer small custom models on device, and they have the custom silicon to run them.
Performance could be outlandish compared to any other platform.
If they can run large LLMs on an iPhone, then OpenAI is out.
6. Two Reasons Why Adoption of AI Will Be So Fast:
No new hardware to buy and so no huge purchase price.
No new UX pattern to get familiar with. Chatbots are a very fluid interface that we all know well.
7. What Would Jeff Do If He Was CEO of Google:
They need to go all in on it. I don't think they have a choice.
It's existential for them. If AI replaces search, their golden goose has been killed.
It is better to kill your own golden goose than watch someone else do it.
8. Angel Portfolio in Review:
97 angel investments made.
30 failed.
19 are at 1x.
10 matter.
Very very few have returned any cash at all.
9. How I Made 200x in Secondary Markets on a Failing Company:
I invested $10K in a social network 10 years ago.
They pivoted into a new project which ultimately failed.
They pivoted into crypto & started Alchemy.
At the height of the boom, I exited for 200x in secondary markets.
10. Three Biggest Pieces of Advice for Angels:
Always write the same size check. You do not know which one will work.
Invest in a lot of companies. You need to have real diversification.
Be patient. It will be 10+ years before you see any real cash back.
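The power-law logic behind these three rules can be sketched with rough numbers. This is a hypothetical illustration using the portfolio shape from the episode (97 checks, ~30 failures, ~19 at 1x, ~10 that matter); the 2x and 50x multiples and the $10K check size are assumptions, not figures Jeff gave:

```python
def portfolio_return(check_size, outcomes):
    """Total value returned by a portfolio of equal-sized checks."""
    return sum(check_size * multiple for multiple in outcomes)

# Illustrative outcome multiples for 97 equal $10K checks:
# 30 go to zero, 19 return 1x, 38 do modestly (2x, assumed),
# and 10 winners return 50x (assumed).
outcomes = [0] * 30 + [1] * 19 + [2] * 38 + [50] * 10

invested = 10_000 * len(outcomes)
returned = portfolio_return(10_000, outcomes)
winners_share = (10_000 * 50 * 10) / returned

print(f"invested ${invested:,}, returned ${returned:,}")
print(f"winners' share of returns: {winners_share:.0%}")
```

Under these assumed multiples, the 10 winners drive roughly 84% of all returns, which is why check sizes cannot be based on conviction: you do not know in advance which 10 they will be, so equal checks across many companies is the only way to guarantee you are in them at full weight.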
Friday’s episode was a Thanksgiving special, bringing together the best minds in AI for a virtual panel with:
Des Traynor, Co-Founder of Intercom
Yann LeCun, VP & Chief AI Scientist at Meta
Emad Mostaque, Co-Founder & CEO of StabilityAI
Jeff Seibert, Founder & CEO of Digits
Tomasz Tunguz, Founder & General Partner at Theory Ventures
Douwe Kiela, CEO of Contextual AI
Cris Valenzuela, Co-Founder & CEO of Runway
Richard Socher, Founder & CEO of You.com
Download the transcript here:
Here are the top takeaways from the AI Panel Discussion:
1. Foundation Model Companies: The discussion predicts that only five or six companies, including Nvidia, Google, Microsoft, OpenAI, Meta, and possibly Apple, will lead in training foundational AI models in the coming years.
2. Commoditization of LLMs: There's a debate about whether Large Language Models (LLMs) will become commoditized, with AI companies like Mistral, Anthropic, and Cohere contributing to this space.
3. Open Source Movement: Some experts believe that market forces will drive the commoditization of foundational AI models, with entities like Meta being motivated to open-source their work.
4. Rapid Evolution of Models: There's an acknowledgment that AI models are evolving rapidly, rendering current models obsolete within a year. This rapid progression is exemplified by improvements from GPT-3 to GPT-4.
5. Challenges of Unbiased Models: The discussion touches on the difficulty of creating unbiased models, highlighting the need for national, cultural, and personal datasets to address biases effectively.
6. Efficiency in Model Training: Yann LeCun points out that large models are not necessary for effective AI, as efficiency in training is improving, allowing smaller models to achieve significant tasks.
7. Importance of Model Size: Richard Socher emphasizes the importance of large models for training diverse tasks, suggesting that smaller models might be insufficient for complex applications.
8. People Over Models: The conversation shifts to the idea that the people building and improving AI models are more critical than the models themselves, as learning and adaptation are key to progress in AI.
9. Data Size and Model Performance: There's a consensus that larger models with more data and parameters tend to perform better, though there's also interest in making these models more efficient and compact without losing performance.
Thank you for reading this week’s newsletter! You can find links to all our content across platforms here: https://linktr.ee/20vc