docs: update some links and language; add LLM family tree because it's neat
ericrallen committed Sep 24, 2023
1 parent 2a23073 commit eee61b5
Showing 3 changed files with 43 additions and 6 deletions.
3 changes: 2 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -2,4 +2,5 @@ __pycache__
_build
.ipynb_checkpoints
*.sqlite
.env
.env
.DS_Store
46 changes: 41 additions & 5 deletions SentimentAnalysisWorkshop.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -333,17 +333,41 @@
"\n",
"In this case we're taking advantage of an existing [language model](https://en.wikipedia.org/wiki/Language_model), VADER, that has been trained to analyze sentiment in text, but if we wanted to train our own model, it would be a much more involved process.\n",
"\n",
"With the advent of [Large Language Models](https://en.wikipedia.org/wiki/Large_language_model) (LLMs), like the [Generative Pre-Trained Transformer](https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) (GPT) models that power ChatGPT - and the various [other models that have exploded in popularity](https://informationisbeautiful.net/visualizations/the-rise-of-generative-ai-large-language-models-llms-like-chatgpt/) since - we can leverage the powerful inference and predictive capabilities of these models to perform tasks like sentiment analysis with greater accuracy without having to train our own models.\n",
    "With the advent of [Large Language Models](https://en.wikipedia.org/wiki/Large_language_model) (LLMs), like the [Generative Pre-Trained Transformer](https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) (GPT) models that power ChatGPT, [large language models have exploded in popularity](https://informationisbeautiful.net/visualizations/the-rise-of-generative-ai-large-language-models-llms-like-chatgpt/).\n"
]
},
{
"cell_type": "markdown",
"id": "da92b914",
"metadata": {},
"source": [
"### LLM family tree\n",
"\n",
"We can even leverage some prompting techniques - which we'll explore in later cells - to quickly teach the model how to perform more unique analyses and refine our results.\n"
    "<div style=\"display: flex; align-items: center; justify-content: center;\"><a href=\"https://github.com/Mooler0410/LLMsPracticalGuide\" target=\"_blank\"><img alt=\"LLM Evolutionary Tree\" src=\"./assets/llm-family-tree.gif\" /></a></div>\n",
"\n",
    "This visualization from [Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond](https://arxiv.org/abs/2304.13712) provides a great overview of how language models have evolved over time and gives you a sense of just how rapidly things have been developing in the last 12 months.\n"
]
},
{
"cell_type": "markdown",
"id": "fabe668a",
"metadata": {},
"source": [
"### The power of LLMs\n",
"\n",
"We can leverage the inference and predictive capabilities of these models to perform tasks like sentiment analysis with greater accuracy without having to train our own models.\n",
"\n",
"We can even leverage some prompting techniques - which we'll explore in later cells - to quickly teach the model how to perform more unique analyses and refine our results.\n",
"\n",
    "In the past, tasks like these would have been a significant undertaking, but now we can achieve similar results with some simple prompting.\n"
]
},
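A minimal sketch of what "sentiment analysis with simple prompting" can look like in practice. The helper names, model name, and prompt wording here are illustrative assumptions, not part of the workshop; the actual API call is left commented out because it requires an OpenAI API key.

```python
# Sketch: sentiment analysis by prompting a chat model instead of using a
# purpose-trained model like VADER. Helper names and prompts are illustrative.

def build_sentiment_messages(text: str) -> list[dict]:
    """Build a Chat-style message list asking the model to classify sentiment."""
    return [
        {
            "role": "system",
            "content": (
                "You are a sentiment classifier. "
                "Reply with exactly one word: positive, negative, or neutral."
            ),
        },
        {"role": "user", "content": text},
    ]

def parse_sentiment(reply: str) -> str:
    """Normalize the model's free-text reply to one of the expected labels."""
    label = reply.strip().lower().rstrip(".")
    return label if label in {"positive", "negative", "neutral"} else "unknown"

# With the openai package installed and an API key configured, the request
# might look something like this (commented out so the cell runs offline):
#
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=build_sentiment_messages("I loved this workshop!"),
# )
# print(parse_sentiment(response.choices[0].message.content))

print(parse_sentiment("Positive."))  # → positive
```

Constraining the reply format in the system message, then normalizing it in code, is one common way to make a chatty model behave like a classifier.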
{
"cell_type": "markdown",
"id": "7c607a30",
"metadata": {},
"source": [
"## Real world example\n",
"## Real world data\n",
"\n",
"Let's take a look at how this works with text generated by other humans (_probably_) without expecting someone would be trying to analyze the sentiment of their text.\n",
"\n",
Expand Down Expand Up @@ -474,7 +498,15 @@
"\n",
"In responding to our prompts, ChatGPT follows a similar process to the NLP workflow described above.\n",
"\n",
"It breaks our prompts into tokens, predicts which tokens should logically follow the ones that we've provided, and returns that text.\n",
"It breaks our prompts into [tokens](https://learn.microsoft.com/en-us/semantic-kernel/prompt-engineering/tokens), predicts which tokens should logically follow the ones that we've provided, and returns that text.\n",
"\n",
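The tokenization step described above can be illustrated with a toy example. Note that GPT models actually use a subword tokenizer (the `tiktoken` package exposes the real encodings), so this word-and-punctuation split is only a stand-in for the idea of turning text into a sequence of tokens.

```python
# A deliberately simplified stand-in for GPT tokenization: real GPT models
# split text into subword tokens (see the tiktoken package), not words,
# but the core idea of "text -> sequence of tokens" is the same.
import re

def toy_tokenize(text: str) -> list[str]:
    # \w+ grabs runs of word characters; [^\w\s] grabs each punctuation mark.
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(toy_tokenize("ChatGPT predicts the next token!"))
# → ['chatgpt', 'predicts', 'the', 'next', 'token', '!']
```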
    "ChatGPT's tuning based on Reinforcement Learning from Human Feedback ([RLHF](https://www.assemblyai.com/blog/how-rlhf-preference-model-tuning-works-and-how-things-may-go-wrong/)) is what led it to become so popular, and is also part of what makes it so powerful.\n",
"\n",
"#### Learn more\n",
"\n",
"- [How ChatGPT Actually Works](https://www.assemblyai.com/blog/how-chatgpt-actually-works/)\n",
"- [How ChatGPT Works: The Models Behind The Bot](https://towardsdatascience.com/how-chatgpt-works-the-models-behind-the-bot-1ce5fca96286)\n",
"- [The inside story of how ChatGPT was built from the people who made it](https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/)\n",
"\n",
"### Tokens\n",
"\n",
Expand Down Expand Up @@ -571,9 +603,13 @@
"- **User**: user messages are the individual prompts that the user sends to the model\n",
"- **Assistant**: assistant messages are the responses the model generates to the user's prompts\n",
"\n",
    "If you're just chatting with ChatGPT via its web-based User Interface (UI), you're probably familiar with **User** and **Assistant** messages, but you may not know that there's a **System** message behind the scenes that helps guide how the model responds to your messages.\n",
"\n",
"[Custom Instructions](https://openai.com/blog/custom-instructions-for-chatgpt) are sort of like [system prompts](https://github.com/jujumilk3/leaked-system-prompts), but don't give us quite as much control as we can exercise via the [Chat API](https://platform.openai.com/docs/api-reference/chat) - or as much control as we can get with some of the recent open source models.\n",
"\n",
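In code, the three roles described above become a list of message dictionaries passed to the Chat API. The message contents below are placeholders chosen for illustration:

```python
# The three message roles represented as the Chat API expects them:
# a list of {"role", "content"} dictionaries. Contents are placeholders.
conversation = [
    {"role": "system", "content": "You are a helpful sentiment-analysis assistant."},
    {"role": "user", "content": "What is the sentiment of: 'I loved it!'"},
    {"role": "assistant", "content": "positive"},
]

# Each new turn is appended, so the model always sees the full history.
conversation.append({"role": "user", "content": "And: 'This was awful.'"})

print([message["role"] for message in conversation])
# → ['system', 'user', 'assistant', 'user']
```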
"### Example conversation document\n",
"\n",
"The whole thing looks a bit like this:\n",
"When you put it all together, the whole thing looks a bit like this:\n",
"\n",
"```\n",
"[System]\n",
Expand Down
Binary file added assets/llm-family-tree.gif
