
Commit

update intro
souzatharsis committed Nov 21, 2024
1 parent 2457bbc commit ec75aeb
Showing 14 changed files with 80 additions and 110 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -1,5 +1,5 @@
![Taming Language Models Logo](tamingllms/_static/logo.png#gh-light-mode-only)
<img src="tamingllms/_static/logo.png" style="background-color:white;" alt="Taming Language Models Logo" />
![Taming Language Models Logo](tamingllms/_static/logo_w.png#gh-light-mode-only)
<img src="tamingllms/_static/logo_w.png" style="background-color:white;" alt="Taming Language Models Logo" />


https://www.souzatharsis.com/tamingLLMs
Binary file modified tamingllms/_build/.doctrees/environment.pickle
Binary file modified tamingllms/_build/.doctrees/markdown/intro.doctree
Binary file modified tamingllms/_build/.doctrees/markdown/toc.doctree
48 changes: 18 additions & 30 deletions tamingllms/_build/html/_sources/markdown/intro.md
@@ -1,8 +1,8 @@
# Introduction

In recent years, Large Language Models (LLMs) have emerged as a transformative force in technology, promising to revolutionize how we build products and interact with computers. From ChatGPT to GitHub Copilot, these systems have captured the public imagination and sparked a gold rush of AI-powered applications. However, beneath the surface of this technological revolution lies a complex landscape of challenges that practitioners must navigate.
In recent years, Large Language Models (LLMs) have emerged as a transformative force in technology, promising to revolutionize how we build products and interact with computers. From ChatGPT to GitHub Copilot, Claude Artifacts, cursor.com, Replit, and others, these systems have captured the public imagination and sparked a gold rush of AI-powered applications. However, beneath the surface of this technological revolution lies a complex landscape of challenges that practitioners must navigate.

As we'll explore in this book, the significant engineering effort required to manage these challenges - from handling non-deterministic outputs to preventing hallucinations - raises important questions about the true productivity gains promised by LLM technology. While the potential remains compelling, the hidden costs and complexities of building reliable LLM-powered systems should not be neglected and instead may force us to reconsider our overly-optimistic assumptions about their transformative impact.
As we'll explore in this book, the engineering effort required to manage these challenges - from handling non-deterministic outputs to preventing hallucinations - cannot be overstated. While the potential of LLM technology remains compelling, understanding and addressing the hidden costs and complexities of building reliable LLM-powered systems will enable us to fully harness their transformative impact.

## Core Challenges We'll Address
While the capabilities of LLMs are indeed remarkable, the prevailing narrative often glosses over fundamental problems that engineers, product managers, and organizations face when building real-world applications. This book aims to bridge that gap, offering a practical, clear-eyed examination of the pitfalls and challenges in working with LLMs.
@@ -11,15 +11,24 @@ Throughout this book, we'll tackle the following (non-exhaustive) list of critic

1. **Non-deterministic Behavior**: Unlike traditional software systems, LLMs can produce different outputs for identical inputs, making testing and reliability assurance particularly challenging.

2. **Structural Reliability**: LLMs struggle to maintain consistent output formats, complicating their integration into larger systems and making error handling more complex.
2. **Structural (un)Reliability**: LLMs struggle to maintain consistent output formats, complicating their integration into larger systems and making error handling more complex.

3. **Hallucination Management**: These models can generate plausible-sounding but entirely fabricated information, creating significant risks for production applications.

4. **Cost Optimization**: The computational and financial costs of operating LLM-based systems can quickly become prohibitive without careful optimization.

5. **Testing Complexity**: Traditional testing methodologies break down when dealing with non-deterministic systems, requiring new approaches.

6. **Integration Challenges**: Incorporating LLMs into existing software architectures presents unique architectural and operational challenges.
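
To make the first of these challenges concrete, here is a minimal sketch (an illustration, not part of the book's repository): it sends an identical prompt twice through an OpenAI-style chat API and compares the replies, which typically differ at default settings. It assumes the `openai` Python package (v1.x), an `OPENAI_API_KEY` in the environment, and an illustrative model name.

```python
# Two calls, identical prompt -- the replies usually differ at default temperature.
# Assumes: `pip install openai` (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
prompt = "Suggest a name for a coffee shop run by robots."

replies = [
    client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    for _ in range(2)
]

print(replies[0])
print(replies[1])
print("identical:", replies[0] == replies[1])  # often False
```

Lowering `temperature` (and, where supported, passing a `seed`) narrows this variability but does not fully eliminate it, which is part of why testing non-deterministic systems is treated as its own challenge above.
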
## A Note on Perspective

While this book takes a critical look at LLM limitations, our goal is not to discourage their use but to enable more robust and reliable implementations. By understanding these challenges upfront, you'll be better equipped to build systems that leverage LLMs effectively while avoiding common pitfalls.

The current discourse around LLMs tends toward extremes—either uncritical enthusiasm or wholesale dismissal. This book takes a different approach:

- **Practical Implementation Focus**: Rather than theoretical capabilities, we examine real-world challenges and their solutions.
- **Code-First Learning**: Every concept is illustrated with executable Python examples, enabling immediate practical application.
- **Critical Analysis**: We provide a balanced examination of both capabilities and limitations, helping readers make informed decisions about LLM integration.


## A Practical Approach

@@ -36,7 +45,7 @@ This book takes a hands-on approach to these challenges, providing:
This book is designed for:

- Software Engineers building LLM-powered applications
- Product Managers overseeing AI initiatives
- Product Managers leading AI initiatives
- Technical Leaders making architectural decisions
- Anyone seeking to understand the practical challenges of working with LLMs

@@ -45,29 +54,8 @@ This book is designed for:
To make the most of this book, you should have:

- Basic Python programming experience
- Access to LLM APIs (OpenAI, Anthropic, or similar)
- A desire to build reliable, production-grade AI systems

## How to Use This Book

Each chapter focuses on a specific challenge, following this structure:

1. Problem explanation and real-world impact
2. Technical deep-dive with code examples
3. Practical solutions and implementation patterns
4. Testing strategies and best practices
5. Cost and performance considerations
6. Conclusion

## A Note on Perspective

While this book takes a critical look at LLM limitations, our goal is not to discourage their use but to enable more robust and reliable implementations. By understanding these challenges upfront, you'll be better equipped to build systems that leverage LLMs effectively while avoiding common pitfalls.

The current discourse around LLMs tends toward extremes—either uncritical enthusiasm or wholesale dismissal. This book takes a different approach:

- **Practical Implementation Focus**: Rather than theoretical capabilities, we examine real-world challenges and their solutions.
- **Code-First Learning**: Every concept is illustrated with executable Python examples, enabling immediate practical application.
- **Critical Analysis**: We provide a balanced examination of both capabilities and limitations, helping readers make informed decisions about LLM integration.
- Access to and basic knowledge of LLM APIs (OpenAI, Anthropic, or similar)
- A desire to build reliable, production-grade LLM-powered products


## Setting Up Your Environment
@@ -93,8 +81,8 @@ export OPENAI_API_KEY=your-openai-key
### 3. Code Repository
Clone the book's companion repository:
```bash
git clone https://github.com/yourusername/taming-llms.git
cd taming-llms
git clone https://github.com/souzatharsis/tamingllms.git
cd tamingllms
```
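
Once the repository is cloned and the API key from the previous step is exported, a quick smoke test can confirm the setup before moving on. This is a minimal sketch (not from the companion repository), assuming the `openai` Python package and an illustrative model name:

```python
# Environment smoke test -- assumes `pip install openai` and OPENAI_API_KEY exported.
import os

from openai import OpenAI

assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
)
print(reply.choices[0].message.content)
```

If this prints a response, the client library, key, and network access are all in place; any failure here is easier to debug now than mid-chapter.
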

### Troubleshooting Common Issues
2 changes: 2 additions & 0 deletions tamingllms/_build/html/_sources/markdown/toc.md
@@ -1,3 +1,5 @@
# Table of Contents

## Chapter 1: Introduction
- The Hidden Challenges of LLMs
- Why This Book Matters
Binary file added tamingllms/_build/html/_static/logo_w.png
57 changes: 22 additions & 35 deletions tamingllms/_build/html/markdown/intro.html
@@ -37,7 +37,7 @@
<link rel="index" title="Index" href="../genindex.html" />
<link rel="search" title="Search" href="../search.html" />
<link rel="next" title="Non-determinism" href="../notebooks/nondeterminism.html" />
<link rel="prev" title="Chapter 1: Introduction" href="toc.html" />
<link rel="prev" title="Table of Contents" href="toc.html" />
<meta name="mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-capable" content="yes">
</head><body>
@@ -101,7 +101,7 @@
</div>
</header>
<nav>
<a href="toc.html" class="nav-icon previous" title="previous:&#13;Chapter 1: Introduction" aria-label="Previous topic" accesskey="P" tabindex="-1">
<a href="toc.html" class="nav-icon previous" title="previous:&#13;Table of Contents" aria-label="Previous topic" accesskey="P" tabindex="-1">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 512"><!-- Font Awesome Free 5.15.4 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) --><path d="M31.7 239l136-136c9.4-9.4 24.6-9.4 33.9 0l22.6 22.6c9.4 9.4 9.4 24.6 0 33.9L127.9 256l96.4 96.4c9.4 9.4 9.4 24.6 0 33.9L201.7 409c-9.4 9.4-24.6 9.4-33.9 0l-136-136c-9.5-9.4-9.5-24.6-.1-34z"/></svg>
</a>
<a href="../notebooks/nondeterminism.html" class="nav-icon next" title="next:&#13;Non-determinism" aria-label="Next topic" accesskey="N" tabindex="-1">
@@ -117,7 +117,7 @@
<div class="title">
<span class="text">
<span class="direction">previous</span>
Chapter 1: Introduction
Table of Contents
</span>
</div>
</a>
@@ -141,21 +141,30 @@

<section class="tex2jax_ignore mathjax_ignore" id="introduction">
<h1>Introduction<a class="headerlink" href="#introduction" title="Permalink to this heading"></a></h1>
<p>In recent years, Large Language Models (LLMs) have emerged as a transformative force in technology, promising to revolutionize how we build products and interact with computers. From ChatGPT to GitHub Copilot, these systems have captured the public imagination and sparked a gold rush of AI-powered applications. However, beneath the surface of this technological revolution lies a complex landscape of challenges that practitioners must navigate.</p>
<p>As we’ll explore in this book, the significant engineering effort required to manage these challenges - from handling non-deterministic outputs to preventing hallucinations - raises important questions about the true productivity gains promised by LLM technology. While the potential remains compelling, the hidden costs and complexities of building reliable LLM-powered systems should not be neglected and instead may force us to reconsider our overly-optimistic assumptions about their transformative impact.</p>
<p>In recent years, Large Language Models (LLMs) have emerged as a transformative force in technology, promising to revolutionize how we build products and interact with computers. From ChatGPT to GitHub Copilot, Claude Artifacts, <a class="reference external" href="http://cursor.com">cursor.com</a>, Replit, and others, these systems have captured the public imagination and sparked a gold rush of AI-powered applications. However, beneath the surface of this technological revolution lies a complex landscape of challenges that practitioners must navigate.</p>
<p>As we’ll explore in this book, the engineering effort required to manage these challenges - from handling non-deterministic outputs to preventing hallucinations - cannot be overstated. While the potential of LLM technology remains compelling, understanding and addressing the hidden costs and complexities of building reliable LLM-powered systems will enable us to fully harness their transformative impact.</p>
<section id="core-challenges-we-ll-address">
<h2>Core Challenges We’ll Address<a class="headerlink" href="#core-challenges-we-ll-address" title="Permalink to this heading"></a></h2>
<p>While the capabilities of LLMs are indeed remarkable, the prevailing narrative often glosses over fundamental problems that engineers, product managers, and organizations face when building real-world applications. This book aims to bridge that gap, offering a practical, clear-eyed examination of the pitfalls and challenges in working with LLMs.</p>
<p>Throughout this book, we’ll tackle the following (non-exhaustive) list of critical challenges:</p>
<ol class="arabic simple">
<li><p><strong>Non-deterministic Behavior</strong>: Unlike traditional software systems, LLMs can produce different outputs for identical inputs, making testing and reliability assurance particularly challenging.</p></li>
<li><p><strong>Structural Reliability</strong>: LLMs struggle to maintain consistent output formats, complicating their integration into larger systems and making error handling more complex.</p></li>
<li><p><strong>Structural (un)Reliability</strong>: LLMs struggle to maintain consistent output formats, complicating their integration into larger systems and making error handling more complex.</p></li>
<li><p><strong>Hallucination Management</strong>: These models can generate plausible-sounding but entirely fabricated information, creating significant risks for production applications.</p></li>
<li><p><strong>Cost Optimization</strong>: The computational and financial costs of operating LLM-based systems can quickly become prohibitive without careful optimization.</p></li>
<li><p><strong>Testing Complexity</strong>: Traditional testing methodologies break down when dealing with non-deterministic systems, requiring new approaches.</p></li>
<li><p><strong>Integration Challenges</strong>: Incorporating LLMs into existing software architectures presents unique architectural and operational challenges.</p></li>
</ol>
</section>
<section id="a-note-on-perspective">
<h2>A Note on Perspective<a class="headerlink" href="#a-note-on-perspective" title="Permalink to this heading"></a></h2>
<p>While this book takes a critical look at LLM limitations, our goal is not to discourage their use but to enable more robust and reliable implementations. By understanding these challenges upfront, you’ll be better equipped to build systems that leverage LLMs effectively while avoiding common pitfalls.</p>
<p>The current discourse around LLMs tends toward extremes—either uncritical enthusiasm or wholesale dismissal. This book takes a different approach:</p>
<ul class="simple">
<li><p><strong>Practical Implementation Focus</strong>: Rather than theoretical capabilities, we examine real-world challenges and their solutions.</p></li>
<li><p><strong>Code-First Learning</strong>: Every concept is illustrated with executable Python examples, enabling immediate practical application.</p></li>
<li><p><strong>Critical Analysis</strong>: We provide a balanced examination of both capabilities and limitations, helping readers make informed decisions about LLM integration.</p></li>
</ul>
</section>
<section id="a-practical-approach">
<h2>A Practical Approach<a class="headerlink" href="#a-practical-approach" title="Permalink to this heading"></a></h2>
<p>This book takes a hands-on approach to these challenges, providing:</p>
@@ -172,7 +181,7 @@ <h2>Who This Book Is For<a class="headerlink" href="#who-this-book-is-for" title
<p>This book is designed for:</p>
<ul class="simple">
<li><p>Software Engineers building LLM-powered applications</p></li>
<li><p>Product Managers overseeing AI initiatives</p></li>
<li><p>Product Managers leading AI initiatives</p></li>
<li><p>Technical Leaders making architectural decisions</p></li>
<li><p>Anyone seeking to understand the practical challenges of working with LLMs</p></li>
</ul>
@@ -182,30 +191,8 @@ <h2>Prerequisites<a class="headerlink" href="#prerequisites" title="Permalink to
<p>To make the most of this book, you should have:</p>
<ul class="simple">
<li><p>Basic Python programming experience</p></li>
<li><p>Access to LLM APIs (OpenAI, Anthropic, or similar)</p></li>
<li><p>A desire to build reliable, production-grade AI systems</p></li>
</ul>
</section>
<section id="how-to-use-this-book">
<h2>How to Use This Book<a class="headerlink" href="#how-to-use-this-book" title="Permalink to this heading"></a></h2>
<p>Each chapter focuses on a specific challenge, following this structure:</p>
<ol class="arabic simple">
<li><p>Problem explanation and real-world impact</p></li>
<li><p>Technical deep-dive with code examples</p></li>
<li><p>Practical solutions and implementation patterns</p></li>
<li><p>Testing strategies and best practices</p></li>
<li><p>Cost and performance considerations</p></li>
<li><p>Conclusion</p></li>
</ol>
</section>
<section id="a-note-on-perspective">
<h2>A Note on Perspective<a class="headerlink" href="#a-note-on-perspective" title="Permalink to this heading"></a></h2>
<p>While this book takes a critical look at LLM limitations, our goal is not to discourage their use but to enable more robust and reliable implementations. By understanding these challenges upfront, you’ll be better equipped to build systems that leverage LLMs effectively while avoiding common pitfalls.</p>
<p>The current discourse around LLMs tends toward extremes—either uncritical enthusiasm or wholesale dismissal. This book takes a different approach:</p>
<ul class="simple">
<li><p><strong>Practical Implementation Focus</strong>: Rather than theoretical capabilities, we examine real-world challenges and their solutions.</p></li>
<li><p><strong>Code-First Learning</strong>: Every concept is illustrated with executable Python examples, enabling immediate practical application.</p></li>
<li><p><strong>Critical Analysis</strong>: We provide a balanced examination of both capabilities and limitations, helping readers make informed decisions about LLM integration.</p></li>
<li><p>Access to and basic knowledge of LLM APIs (OpenAI, Anthropic, or similar)</p></li>
<li><p>A desire to build reliable, production-grade LLM-powered products</p></li>
</ul>
</section>
<section id="setting-up-your-environment">
@@ -232,8 +219,8 @@ <h3>2. API Keys Configuration<a class="headerlink" href="#api-keys-configuration
<section id="code-repository">
<h3>3. Code Repository<a class="headerlink" href="#code-repository" title="Permalink to this heading"></a></h3>
<p>Clone the book’s companion repository:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>git<span class="w"> </span>clone<span class="w"> </span>https://github.com/yourusername/taming-llms.git
<span class="nb">cd</span><span class="w"> </span>taming-llms
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>git<span class="w"> </span>clone<span class="w"> </span>https://github.com/souzatharsis/tamingllms.git
<span class="nb">cd</span><span class="w"> </span>tamingllms
</pre></div>
</div>
</section>
@@ -306,7 +293,7 @@ <h3><a href="toc.html">Table of Contents</a></h3>
<div class="title">
<span class="text">
<span class="direction">previous</span>
Chapter 1: Introduction
Table of Contents
</span>
</div>
</a>
