Building an AI-powered company
AI is one of the most important technologies of this century, holding the promise of breakthroughs in energy, health, and virtually every other field of human activity.
AI capabilities have been evolving at breakneck speed over the last few years, enticing entrepreneurs to start companies and leverage these capabilities in new products.
The goal of this note is to cover the unique challenges faced by founders starting a business that applies AI. It does not go into general company-building advice [1].
Instead, it highlights how building an AI-powered company is different.
The note is structured around several key focus areas: market, product, distribution, business model, moats, etc. It offers lessons from 15+ years of experience building and investing in technology startups as well as our discussions with dozens of founders building in AI (you can listen to some of those conversations here).
While the most recent wave of innovation in AI is powered by the deep learning paradigm, we look at the field as a continuum where each technique (including hardcoded traditional software) has its place, dictated by the customer use case and the focus on delivering customer value in a practical way.
We also expect the emergence of new AI techniques (DL 2.0? neuro-symbolic approaches?) to address AI's current shortcomings, making these systems even more powerful.
This is a living document that we will update over time. We hope to improve it with your help. Please post your feedback/questions in the comments here or on Twitter.
A huge thank you to Rakesh Yadav for his comments on this document.
Market
Adoption of DL-powered AI started with tasks that had a narrower scope and a higher tolerance for mistakes, like recommenders (showing “people also buy” widgets, or choosing the next post in a social network feed), dictation, OCR, and basic image recognition. Though such tasks used to be performed by hard-coded algorithms, the DL approach demonstrated much better results.
The adoption is now expanding along the following four vectors:
Boosting existing software and hardware capabilities in every field. Look at products built by old slow-moving incumbents — there is a good opportunity to reinvent these products from the ground up with AI.
Augmenting knowledge workers/co-creation. Early results show that DL techniques can be used to enable co-creation where humans direct the machine to come up with drafts and edit the output. There are early examples of co-writing, co-coding, co-composing, co-designing, and so on. While these areas are mostly greenfields, there is a risk that the large horizontal use cases will be captured by existing horizontal platforms. More specialized/vertical cases may be more attractive.
Augmenting physical labor. The physical world is a challenging environment. Currently, AI SOTA and falling sensor prices allow the development of single-purpose robots that can perform narrow tasks like keeping a car in its lane, picking a product from a box, or sanding a surface. We will likely need additional breakthroughs in hardware and AI techniques to put autonomous vehicles on public streets and build more versatile robots. If your heart belongs to robotics, look at repetitive narrow tasks performed around humans (at a factory, office, or home) or in straightforward outdoor environments (inspection drones).
Enabling AI. The innovation in all three areas powers the growth of a fourth segment: building an enabling business.
One type of such business focuses on the infrastructure to build and run AI systems in production: MLOps, DataOps, observability, security — there are multiple segments to tackle separately or in an end-to-end product. Note: it’s not too late to start a “pick-and-shovel” business. E.g., Cloudflare (NET) started in 2010, long after the early days of the Internet. And AI today is more like the internet in 1996.
Other types of enabling businesses are companies building essential hardware, core SDKs, implementation services, and training.
While these companies are essential for the field to thrive, this note primarily focuses on customer-facing products.
Use cases
When thinking about a specific product to focus on, it’s helpful to consider which use cases can benefit from AI capabilities such as:
- search and recommendations: text, video, audio, products;
- pattern recognition: human pose estimation, visual objects recognition, anomaly detection, content moderation, noise reduction, churn risk estimation, text <-> speech, question-answer matching;
- structuring: summarization, entity extraction, intent;
- translation: hi -> fr, en -> computer code;
- assessment of unstructured content: fraud risk, grammatical correctness;
- narrow repetitive tasks: taking notes during a meeting, filling out CRM, proof-reading, patrolling, painting, sanding;
- content generation/rapid prototyping: images, sounds, music, text, or even car parts’ shapes in CAD systems;
- content modification: removing an object from the photo, removing “uhms” or replacing a word in the podcast recording;
- highly complex simulations: earth atmosphere, protein folding;
- operating with large amounts of data that are impossible for traditional software to handle.
All great use cases are (1) narrow and within the reach of the current AI capabilities and (2) either low risk (capturing player’s actions in the field for coaching [2]), or done with human supervision (text generated by the LLM is reviewed/corrected before sharing) while not requiring the machine to work 100% autonomously in very complex, open-ended, and highly sensitive environments: driving on public roads, rejecting job candidates, and prescribing medical treatment.
See the Pitfalls section below for more on what it means to target a “narrow” use case vs. going after a general set of capabilities.
What makes a great market
Best markets are characterized by the combination of “high value” and either “rapid adoption” or “formidable barrier”.
- High value — your AI system is enabling a massive improvement that matters to the customer. If AI can increase my LinkedIn headshot’s resolution by 10x, I don’t care. If it can halve my electricity bill, I’m listening.
- Rapid adoption — the faster you can get the product to customers, the better. Markets that win here have no regulation and don’t require physical-world operations or complex buying decisions. Social media is an example of such a market, and TikTok [14] is one of the biggest successes of AI-powered startups to date.
- Formidable barrier — if the market doesn’t allow for rapid adoption, there should be significant barriers to entry that keep competitors out while your company captures it. Palantir is a good example of building a data-centric company in a slow market. The lower the structural barriers in your market, the more you should think about how to accelerate adoption (see the “Distribution” section below).
Start narrow. Even if the ultimate goal is to build a horizontal platform. No one, especially a startup, can boil the ocean. Pick a niche and expand use case by use case.
But not too narrow. The initial segment should theoretically be able to get you to ~$100M in revenue. You’ll likely expand much earlier. However, if your niche is too narrow, you may not be able to build escape velocity to expand and may end up orbiting that narrow market.
Look for signs of fast-moving water
- Look for situations where mediocre AI-powered products are growing suspiciously well. It’s a sure sign of “market pull”.
- Look for a physical reaction from your customer when you show them the prototype. “When can I have it? How can I get it sooner?” are the questions you’re looking for.
The rule of 10x is not dogma. It can be less or more, depending on the industry. E.g., a merchant will beg you to accept their money if you double their conversion rate [3]. In all cases, it should be important enough for a customer to prioritize it among a million other things on their plate. The “Build something people want” mantra never goes out of style.
Consider competition, but don’t overthink it: there were dozens of search engines when Google started. Assume there will be competition, as the barriers to starting a business in AI are getting lower. The number of competitors will grow rapidly, companies that were partners will likely become competitors, and so on. The real question is: do you have a unique insight into why what you build is radically different? E.g., one of Zuckerberg’s insights was that allowing anonymous users leads to toxicity, so Facebook focused on authentic identities.
That said, some competition concerns are real, like going against a tech-savvy incumbent aiming to replace their core offering. They can fight back, they will fight back, and they have a massive distribution advantage (see the distribution section).
However, there are multiple markets where incumbents lack the “tech-savvy” piece that will give a new company an opening.
Product
Hit the market as fast as possible. Although this idea is as old as entrepreneurship itself and isn’t specific to AI businesses, it can be particularly hard for AI founders to follow. Building powerful AI systems is hard, and sharing something that doesn’t function particularly well with the customer seems like a waste of time. The key here is understanding the goal of “hitting the market”. It isn’t to sell your product or to demonstrate that it reliably works in production. The goals are to validate the need, the key set of capabilities, the data availability, and the purchasing process. Look at your readiness through this set of goals. Often you can achieve all four with just mockups.
E.g., the founder of Winn identified and talked to 200 sales leaders, iterated through several versions of mockups based on their feedback, and refined his ideas about the product. And that’s not all: some of those leaders became his design customers, and several became investors.
Could you do it better with the product — maybe, but at what cost? What if you discover that 80% of what you’ve built isn’t needed (a real scenario that happened to one company)?
There is serious psychological machinery involved in our reluctance to hit the market early (the fear of rejection), and it’s really hard to fight while in the trenches, so a good idea is to have an experienced businessperson as an advisor who can push you to act faster. [4]
Reducing the time-to-value for the customers is just as important. Your system may work best after months of training on the customer data, but make it deliver the first value within days, hours, or even minutes [5]. Rely on pre-trained models and heuristics where needed. This will be especially important in the early days, as it’s much harder to sell a POC that will take weeks of data collection before the customer can even see the first signs of the system’s capabilities. Make it easy for customers to say yes to a trial.
AI is just one component of the solution. Think end-to-end UX. E.g., Winn uses NLP to guide sales teams but had to build a lot of things around it to support the complete workflow of a sales call. Aidaptive built recommendation widgets and Shopify connectors for one-click integration. One AI developed a set of enterprise-grade APIs and a WYSIWYG in-browser studio. Excellent AI capabilities delivered in an unusable way are still useless.
AI systems are imprecise and probabilistic in their outputs. Address this in your product. Implement an adjustment layer that will let the customer set up the boundaries for the system (e.g. Aidaptive allows merchants to pause all discounts during the hot season regardless of the AI suggestions), make the user aware of the level of conviction the system has about the provided result (risk scoring in lending and cyber security), offer multiple options for a user to choose from, combine, or even use as an inspiration to come up with a new option. E.g. Abzu develops multiple possible equations describing the relationships between different variables in the data, Wordtune suggests several alternative phrasings, Neural Concept generates hundreds of alternative shape designs, and Google yields pages of search results [6]. Don’t get dogmatic about (arguably unachievable) certainty. Embrace uncertainty.
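The “offer multiple options with their confidence” pattern above can be sketched as a thin layer over any probabilistic model. This is a minimal illustration; the function name, threshold, and scores are all made up:

```python
def top_k_suggestions(scores, k=3, min_confidence=0.2):
    """Return up to k alternatives the user can choose from, each with
    the model's confidence, skipping low-conviction options entirely."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(option, round(conf, 2)) for option, conf in ranked[:k]
            if conf >= min_confidence]

# A model scored four alternative phrasings; surface only the credible ones
# and let the user pick, combine, or reject them.
scores = {"phrasing A": 0.55, "phrasing B": 0.30,
          "phrasing C": 0.10, "phrasing D": 0.05}
print(top_k_suggestions(scores))
```

The point of the sketch is the product decision, not the math: the user always sees how convinced the system is and retains the final choice.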
Data strategy. Data is an essential element of any AI system, but accessing it to train your models is often not straightforward. Some use cases allow for leveraging publicly available data (e.g., an algorithm summarizing news articles). In other cases, a path to data may lie through acting as a service provider to the customer and training your system as part of the arrangement. In many cases, the only option is to initially rely on a set of heuristics in production and eventually switch to DL models after data collection and training. Synthetic data can also be helpful, especially when working with edge cases, but making it work is a significant challenge.
Recruit design customers. These will be the ones ready to take the risk of working with a tiny team and an unproven (or non-existent) product. Why would they do that? Generally, it’s because they are either innovators at heart or they suffer immensely from the pain you aim to solve. Finding them takes time and is often a numbers game, so make sure you set yourself up to approach as many as needed. Should you try to get them to some commercial commitment early on? It depends on the situation, but in many cases starting asap is more important than getting that commitment. The exception: if you believe that without commercial terms they won’t pay the needed attention to your partnership. Aim for 2–3 design customers, so as not to overfit to any one of them, but also not to overwhelm yourselves.
Privacy. Access to data is often treated as a sensitive matter by midsize and large companies, and by regulators. Make sure you understand the environment you’ll be working in and adjust your approach accordingly (another reason to start working with your market as soon as possible). You may have to consider building your product so it can be deployed in a private cloud, use federated learning or anonymization, or rely on user consent.
Fairness/bias. Deep learning produces opaque systems trained on imperfect data, which makes them susceptible to biases. It’s important to address this issue head-on when dealing with sensitive use cases (loan application risk scoring, job application fit scoring, etc.): demonstrate to customers how your product is scanned for biases (one approach is to test the performance of your system against a specifically designed dataset) and how the biases are corrected (since it’s a learning system, this often involves adjusting the training dataset or building an adjustment layer specifically designed to compensate for identified biases).
Regulation. It is very early days and there is more ambiguity than clarity in regulating AI. One of the most recent issues is the intellectual property question that emerged with the introduction of image-generation models. Expect this and other regulation-related conversations to carry on for some time, and not to be resolved on the “startup schedule”. That’s why every founder applying AI should be comfortable with operating in an emerging regulatory environment.
That said, there is already established regulation in several fields that directly affects building AI systems, regulating data use, data privacy, anti-discrimination/fairness, and so on. Make sure you understand the regulatory landscape in your market.
Distribution
For a startup to succeed, it must solve distribution faster than the incumbent solves innovation. E.g., Microsoft came up with Teams to stop Slack from dominating the market. Had Slack grown faster, Microsoft might not have had time to respond.
In some cases, like building a generative AI system working with text, visual, or audio content, the output of the system itself can and should be used for viral growth. Another case is when your product’s value is driven by network economies; here, users have to onboard other users to benefit from the product (Venmo).
It’s always a blessing for a challenger when incumbents can’t or don’t want to compete with it head-to-head giving an upstart the precious time to scale. E.g. Comma.ai benefits from the fact that most car manufacturers aren’t ready to rapidly evolve their ADAS offerings, as they have much more to lose in case of a malfunction than to gain in sales. Another early sign of this dynamic is Getty’s decision to ban AI-generated content, giving an opening to services like Lexica. The IP issues will be solved eventually, but you can’t take back that opening.
Aim for trendsetters, if those exist in your market. E.g., for an AI copywriting product, aim to get marketer-educators on board (like Neil Patel and many smaller ones). For consumer-focused products, consider going after Instagram and TikTok celebrities, etc. If they use it and love it, they will talk about it to their audience (often for free).
An embedded distribution deal can be great, but think about strategic flexibility. Often such deals come with limitations that can make the company a hostage of the partnership. These limitations can often be overcome by a thoughtful design of the relationship that acknowledges the interests of both parties. The Google–Yahoo deal in Google’s early days was one such example. The OpenAI–Microsoft relationship is another example of companies working thoughtfully to build a mutually beneficial partnership.
Be wary of building a single-feature company whose feature incumbents can quickly integrate into their products and offer for free, unless you also plan to offer it for free and can scale quickly. If your innovation edge can be bought by the incumbent relatively easily, it will be bought, and then your best hope is to be the company that gets acquired.
Keep in mind that many of your prospects have been burned by the previous generations of “AI” systems, sold by vendors capitalizing on AI hype, that never really delivered. It is important to address this issue directly and make the customer feel comfortable giving you a try. Short time to value will help here.
Business model
The primary factor differentiating an AI-powered product business model is the compute-hungry nature of the DL systems which lends itself best to the tiered consumption model, freemium or trial-based with an option for wholesale enterprise deals with the largest customers (see e.g. OpenAI pricing structure). However, there are a few nuances.
If the system operates in a private cloud or on the edge, thus consuming the customer’s computational resources, it can be offered as a more traditional subscription. This is also true for the systems where AI-compute costs are predictably connected to the traditional subscription metrics, like the number of seats, and thus can be reliably calculated in advance (this is the way it’s done by Jimminy where they can reliably estimate how many calls a salesperson has per day and how long each one lasts).
This is also true for consumer applications — the trick is to move as much of AI heavy lifting as possible to be processed on the consumer device. Such applications can be offered as a flat subscription or monetized via ads, transaction fees, and other consumer-focused business models. The drawback is that smartphones’ computational abilities vary a lot, posing a significant engineering challenge.
In all cases, the model performance and associated compute costs should be compared against the potential price point viable for the market. Usually, there are ways to optimize the compute costs with a tolerable impact on the system’s performance [7], and we can always hope for Moore’s Law to act as a tailwind in the long term.
Moats
It’s one thing to capture a piece of the market, and another to hold it against an onslaught of challengers. This section is structured along the lines of the 7 Powers framework, focusing on the dynamics introduced by AI (“7 Powers” by Hamilton Helmer is one of the essential readings on general company building [1]).
Scale economies. Training is expensive; the more customers benefit from a trained system, the lower the cost attributed to each of them. Building complex, reliable systems at scale is expensive too (including the cost of hiring and retaining the best engineers — see Team).
Network economies should become a powerful source of a moat if the business leverages learning across customers to improve products for all of them (cyber security, ecommerce recommenders).
Switching costs. An AI system, as a learning system, is a natural fit for this power. However, if your customers can easily retrain alternative tools with their data, this moat is weak. When your system collects additional data in production and uses it to fine-tune itself, competitors will find it much harder to match its performance. Another source of this moat is accumulated expertise: if your AI-powered product enables advanced tuning (prompt engineering, etc.) and takes a bit of time to master at the pro level, it’s unlikely that your users will easily switch and go through the learning process again. Also, the wider the usage of a b2b product across different departments, the lower the chance that the system will be challenged by any one team’s decision to switch.
Process power. Delivering value at scale requires engineering a complex system: data, labeling, models, infrastructure, security, performance, scalability, reliability, observability, etc. This is hard, and in many cases the company’s ability to simply deliver reliably will be the key differentiator. E.g., Twilio emerged as a leader in text messaging primarily because of reliable delivery, not groundbreaking features.
Cornered resource. This can be a certain type of unique training data (e.g., sales call transcripts), hardware (via a partnership with a major supplier), or key talent. Regulation can also be a friend here: it may force your competitors to spend months (or years) getting the approvals you already have.
Brand (trust). AI systems are capricious, and in many sensitive areas customers will stay with a proven player even if a new competitor claims to have “cooler” features or a lower price. Errors are just too costly (e.g., autopilot software for cars, cybersecurity, financial fraud detection), so it pays to invest in a reputation as that trusted partner from day one.
Team
As in any rapidly evolving fundamental technology field, an AI company will disproportionately benefit from best-in-class engineering talent. Think about Apple and Wozniak. Zuckerberg coded himself and brought the best people on board, eventually overcoming the buggy and slow Friendster. PayPal and Square won against multiple competitors in no small part due to their ability to fight fraud efficiently and automatically. The list goes on. In the case of an AI company, it means making sure you bring in great teams working on the AI system (data science/ML engineering, MLOps, etc.) and on general engineering (frontend, backend, infra, DevOps, etc.). You can’t build a production-level AI system without both groups and their intense collaboration. At the very early stages, it makes sense to aim for people who can work across the stack. These folks are rare but well worth the effort of finding them.
A great R&D team will deliver 100x better results than the “good” one. Best teams will not only be able to follow and implement SOTA but will push the boundary in areas critical to the company’s success [8]. They will engineer a scalable, secure, and robust system (see process power) and will know when to leverage 3rd party components and when to build in-house. The engineering team will often be the defining factor for the success of some challengers against others.
An AI-powered business is operationally heavier than a pure SaaS product since it needs to identify, collect, and label data. Even when relying on partners for data labeling, synthetic data, and other related activities, companies build an internal team to coordinate all these activities and work on elements not covered by the partners.
Pitfalls
There are several pitfall patterns of AI-powered companies:
Falling in love with tech — building a solution in search of a problem. It’s a common issue across the board but maybe even more pronounced in AI because let’s admit it, the tech is mindbogglingly cool, so it’s easy to fall in love with it. Fight it by hitting the market early and getting feedback. Make sure you discern “yeah, this is kind of cool” from “can I get it asap?”. Also, check your pitch. If your 1-minute pitch allocates more than 5 seconds to AI technologies, it’s a good idea to consider filling those extra seconds with value-focused lines instead. Mention AI to properly position the product as next-gen, but that’s it. Capture customer attention with the outcomes presented as prominently as possible.
Attacking a use-case where there is no data. If your potential customer sells bespoke products to a handful of customers every year it is going to be very challenging to train a machine learning model on that data, and that model is unlikely to deliver better results than a few common sense heuristics. That said, some low-data cases can be covered well by the models pre-trained on general datasets. Another variation of this pitfall is when the data is abundant but can’t be captured due to the prevailing attitudes toward privacy (e.g. some face recognition use cases, employee monitoring, etc.).
Targeting too general a set of capabilities. AI systems can be trained to do magical things as long as there is enough data, compute, and the “thing” in question is relatively narrow. It’s important to define “narrow” here. A narrow task can be quite complex (e.g. winning a Go game against a world champion), but it should have limited action space (there are only so many moves one can make in Go at any given point in time). The danger lies in targeting an open-ended action space with an unlimited number of environment variations. It’s unlikely that the current AI SOTA can power an “AI-paralegal” or even an “AI-meeting scheduler” end-to-end. Aim for individual tasks of a job instead: searching similar documents, transcribing calls, etc. The trick here is to build your system one narrow capability at a time and tie these capabilities together into a beautiful customer UX with the traditional software. You will end up with a system that will empower paralegals rather than replace them, and it is ok. See this note for a high-level non-technical discussion of modern AI systems’ capabilities and their limits.
Selling to startups. This pitfall is especially relevant in the euphoria phase of the technology adoption cycle when everyone is excited about the opportunity and scores of startups raise money to grab it. But these startups can’t do everything themselves, so new startups emerge that help the first group with their challenges. The venture capital money flows from the first group to the second as revenue “validates” the second group’s business and allows them to raise more money. This dynamic was there during the internet bubble and the crypto-bubbles. It is present in AI as well. The key here is to make sure that your startup customers aren’t simply recycling the VC funds but actually have a viable business model. If there are concerns about that, it makes sense to use the startup segment’s revenues to diversify early into other markets.
Overestimating the durability of the R&D edge. After a lot of work has gone into building and training the superior model it is easy to assume that your advantage will last for a while. But it becomes increasingly clear that the innovations are copied quite quickly (e.g. OpenAI Dall-E has been open-sourced in a few months). What will give you a constant edge is the “machine that builds the machine” — your internal R&D and ops function that constantly improves the system performance through better data, better models, better infrastructure, and better UX.
Underestimating the amount of work to make the AI deliver in a real-world customer environment. The AI capabilities as demonstrated by the scientific papers and demos always look cool, but can rarely be applied directly to the production use cases at scale. Here the ability of the team to understand this issue and estimate the amount of work needed is critical. In some cases, it may just mean you have to plan for a prolonged initial development stage and act accordingly, especially in terms of the funding making sure you have enough runway to get to the other side.
Misjudging the adoption pace of the industry. Some industries just move slower than others. If you are developing a medical device powered by AI it will take time to go through the regulatory process and get to the first revenues. In another interesting example, Abzu’s Casper Wilstrup persuades biotech companies to adopt both Abzu’s AI system to identify targets and RNA therapeutics to test them, thus closing the validation cycle in a matter of weeks. Otherwise, it may take these companies years to validate each target, preventing them from fully leveraging the AI’s powerful ability to rapidly develop hypotheses [6].
Technical side
There are several options for implementing AI capabilities in your product depending on the importance of each function and its availability elsewhere:
- Core AI capabilities. These will likely have to be built in-house. It is a formidable task, even using all the wealth of open source, but it gives you maximum flexibility. E.g. this is how One AI’s team approaches building core NLP models, not only implementing SOTA but pushing its boundaries.
- Not core, but not readily available from 3rd parties. Build this in-house by applying AI shortcuts and advanced training techniques: embeddings, pre-trained foundation models, active learning [9]. E.g., Meta uses this approach for most AI needs, excluding the core feed-recommender algorithm, which it designs and trains independently.
- Not core, and can be bought as a product or API — consider buying this capability at least at the beginning to speed up the development.
Good data. You will need clean data that covers as much of what the system will see in production as possible, and you’ll have to build a system to enable constant data collection in production. Without additional training on new data, your system’s performance will most likely deteriorate over time. Design your system for continuous learning. It’s also important to keep in mind that more isn’t always better here: often, removing especially noisy data from the dataset (even if it’s a large chunk of it) can improve the system’s performance [10]. Plan to have a synthetic data workflow to train for edge cases. There are companies focused on this problem, but depending on the peculiarities of your data, you may have to generate it in-house [11].
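As a crude illustration of filtering noisy data, here is a sketch that drops samples whose annotators disagree too much, one simple proxy for label noise. The function, data shape, and threshold are illustrative, not from any specific pipeline:

```python
def drop_noisy(samples, max_label_disagreement=0.3):
    """Keep only samples whose annotators mostly agree.
    `samples` maps sample id -> list of labels from different annotators;
    returns a dict of sample id -> majority label for the clean subset."""
    clean = {}
    for sid, labels in samples.items():
        majority = max(set(labels), key=labels.count)
        disagreement = 1 - labels.count(majority) / len(labels)
        if disagreement <= max_label_disagreement:
            clean[sid] = majority
    return clean

# s1 is unanimous and survives; s2's annotators all disagree, so it's dropped.
samples = {"s1": ["cat", "cat", "cat"], "s2": ["cat", "dog", "bird"]}
print(drop_noisy(samples))
```

Real pipelines use richer signals (per-annotator reliability, model-loss outliers), but the principle is the same: a smaller, cleaner dataset often beats a larger, noisier one.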
Use an ensemble approach — no model is perfect, different models work better for different things. Another benefit of an ensemble is the ability to filter out some outliers by averaging out several different models in production. The ensemble can include alternative ML techniques such as random forests. No DL purism — use every technique that will maximize the value for the customer. Rely on heuristics when needed. E.g. Aidaptive uses rule-based approaches where data is sparse and switches to ML-based techniques as the amount of available customer data grows.
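A minimal sketch of the ensemble-with-heuristic-fallback idea described above; the models, names, and thresholds are illustrative:

```python
def ensemble_predict(models, x, heuristic=None, min_models=2):
    """Average the outputs of several models; fall back to a simple
    heuristic when too few models can score this input (e.g. sparse data).
    A model signals "can't score" by returning None."""
    preds = [p for p in (m(x) for m in models) if p is not None]
    if len(preds) < min_models and heuristic is not None:
        return heuristic(x)
    return sum(preds) / len(preds)

# Two models score x and their deviations average out; a third abstains.
models = [lambda x: x + 1.0, lambda x: x - 1.0, lambda x: None]
print(ensemble_predict(models, 10, heuristic=lambda x: x))  # -> 10.0
```

In production the members would be heterogeneous (a DL model, a random forest, a rule set), which is exactly what makes the averaging robust to any single member’s outliers.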
Make sure you build an adjustment/safeguarding layer to introduce guardrails for your models. Use it to protect the output from unwanted outliers and to enable business logic that can’t be easily captured by the models.
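Such a safeguarding layer can be as simple as a chain of business rules that run after the model and always win over it. A minimal sketch, with hypothetical rules echoing the “merchant pauses discounts” example from the Product section:

```python
def apply_guardrails(suggestion, rules):
    """Run a model suggestion through business rules before it ships.
    Each rule may modify the suggestion or veto it by returning None;
    rules always override the model."""
    for rule in rules:
        suggestion = rule(suggestion)
        if suggestion is None:
            return None  # vetoed entirely
    return suggestion

# Hypothetical: the model suggests a discounted price, but the merchant
# has paused all discounts and capped prices at $999.
pause_discounts = lambda s: {**s, "discount": 0.0} if s.get("discount") else s
cap_price = lambda s: {**s, "price": min(s["price"], 999.0)}
print(apply_guardrails({"price": 1200.0, "discount": 0.15},
                       [pause_discounts, cap_price]))
```

The value of keeping this as a separate layer is that business logic changes daily while models retrain slowly; the guardrails absorb the churn.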
Leverage the best available underlying infrastructure for data preparation, training, deployment, monitoring, etc. — avoid rebuilding it from scratch [12].
Build for constant retraining — expect your customer environment to change and the quality of your system to deteriorate over time. Keep the quality high by continuously retraining your models.
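A minimal sketch of a retraining trigger for the pattern above: compare a rolling production metric to the baseline measured at deployment and flag the model when it drifts. Names and tolerances are illustrative:

```python
def needs_retraining(baseline_metric, recent_metrics, tolerance=0.05):
    """Flag the model for retraining when the mean of recent production
    metrics drops more than `tolerance` below the deployment baseline."""
    recent = sum(recent_metrics) / len(recent_metrics)
    return recent < baseline_metric - tolerance

# Deployed at 0.92 accuracy; still healthy vs. clearly drifted.
print(needs_retraining(0.92, [0.91, 0.90, 0.92]))  # False
print(needs_retraining(0.92, [0.85, 0.84, 0.86]))  # True
```

In practice the trigger usually feeds an automated pipeline (collect fresh labeled data, retrain, evaluate, redeploy) rather than a human alert, but the threshold logic is the same.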
AI security. Protecting your systems from data poisoning, spam [13], evasion, and other attacks will be an important part of your journey, alongside the traditional cyber security best practices. Neglect this at your peril. Check out this conversation with HiddenLayer’s Chris Sestito for a thorough overview of the field.
Conclusion
We’ll be updating this document based on your feedback. Please let us know what you think about it and what important lessons we missed here.
If you’re building a business powered by AI and have questions, or are looking for an investor-partner on this journey, please reach out.
Notes
[1] These are some very good sources of general advice. The list just scratches the surface of course, but here we go:
- Zero to one by Peter Thiel
- Books from the “Business” sections here
- Paul Graham’s blog
- YC startup school
- Content produced by NFX, Sequoia, A16z, and other VC firms.
- If you don’t know where to get a good source of advice on some specific topic — ping me in the comments to this post on Twitter
[2] You can learn more about this in our conversation with ReSpo.Vision
[3] You can learn more about this in our conversation with Aidaptive (ex. JarvisML) here
[4] Other areas where we are often too slow to act are parting ways with people who don’t work out and acknowledging the lack of p/m fit for a certain product. In all such areas, our mind’s design gets in the way, and the best way to deal with that is to acknowledge this natural impediment and build systems to compensate: advisors for p/m fit and other key product/company design decisions, and co-workers not invested in the relationship with the employee in question for parting-ways decisions.
[5] You can learn more about this in our conversations with the founders of Quin AI and Aidaptive (ex. Jarvis ML) where we talk about delivering in days, and One AI’s goal to cut this time to minutes.
[6] You can listen to the conversations with Neural Concept and Abzu here.
[7] You can learn more about this here.
[8] See our discussion of One AI’s approach to content summarization here.
[9] Learn more about this approach, defined by Graft’s Adam Oliner as “Modern AI” here.
[10] See for example this segment by Andrew Ng.
[11] The team at ReSpo.Vision dresses up as soccer players and plays on the field while wired with sensors, to collect the necessary data.
[12] Check out this conversation with Elemeno’s Lucas Bonatto to learn more.
[13] Think for example about a subtle way to hack an AI recommendation system by sending bots to the merchant’s site and introducing “noise” that confuses the recommender.
[14] See this paper on Monolith, for example.