added more content to llm
Kubus42 committed Apr 2, 2024
1 parent a60a8f0 commit 056252a
Showing 44 changed files with 2,253 additions and 214 deletions.
4 changes: 2 additions & 2 deletions _freeze/embeddings/applications/execute-results/html.json
@@ -1,8 +1,8 @@
{
"hash": "c7c6aaa88acff3e085f8bcd0144e88ab",
"hash": "4dcd8d01a2de3b1e2e0f0407e336d87c",
"result": {
"engine": "jupyter",
"markdown": "---\ntitle: Applications\nformat:\n html:\n code-fold: true\n---\n\n",
"markdown": "---\ntitle: Applications\nformat:\n html:\n code-fold: true\n---\n\nBuild a bot that can answer questions based on documents!\nResource: https://platform.openai.com/docs/tutorials/web-qa-embeddings\n\n",
"supporting": [
"applications_files"
],
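The new line in `applications` teases a document question-answering bot and points to OpenAI's web-qa-embeddings tutorial. As a rough sketch of the idea (not part of this commit), the snippet below embeds a few document snippets and a question with the OpenAI Python client and ranks the snippets by cosine similarity; the model name, the example texts, and the use of numpy are assumptions for illustration only.

```python
# Minimal sketch: embed document chunks and rank them against a question
# by cosine similarity (the core idea behind a document Q&A bot).
import os

import numpy as np
from openai import OpenAI

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

documents = [
    "The earth is approximately 4.54 billion years old.",
    "GPT models generate text completions from a prompt.",
]
question = "How old is the earth?"

# Request embeddings for the documents and the question in a single call.
response = client.embeddings.create(
    model="text-embedding-3-small",  # assumed model name; any embedding model works
    input=documents + [question],
)
vectors = np.array([item.embedding for item in response.data])
doc_vectors, question_vector = vectors[:-1], vectors[-1]

# Cosine similarity: higher means the document is more relevant to the question.
scores = doc_vectors @ question_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(question_vector)
)
print(documents[int(np.argmax(scores))])
```

In the linked tutorial the document embeddings are computed once and stored, and the best-matching chunks are then passed to the chat model as context for the answer.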
4 changes: 2 additions & 2 deletions _freeze/llm/gpt_api/execute-results/html.json
@@ -1,8 +1,8 @@
{
"hash": "dc7e1b2db90c90886a913f41ce9fd215",
"hash": "b873702b4a61fcaca14b721868545856",
"result": {
"engine": "jupyter",
"markdown": "---\ntitle: The OpenAI API\nformat:\n html:\n code-fold: false\n---\n\nResource: [OpenAI API docs](https://platform.openai.com/docs/introduction){.external}\n\n\nLet's get started with the OpenAI API for GPT. \n\n\n### Authentication\n\nGetting started with the OpenAI Chat Completions API requires signing up for an account on the OpenAI platform. \nOnce you've registered, you'll gain access to an API key, which serves as a unique identifier for your application to authenticate requests to the API. \nThis key is essential for ensuring secure communication between your application and OpenAI's servers. \nWithout proper authentication, your requests will be rejected.\nYou can create your own account, but for the seminar we will provide the client with the credential within the Jupyterlab (TODO: Link).\n\n::: {#a9cb3d89 .cell execution_count=1}\n``` {.python .cell-code}\n# setting up the client in Python\n\nimport os\nfrom openai import OpenAI\n\nclient = OpenAI(\n api_key=os.environ.get(\"OPENAI_API_KEY\")\n)\n```\n:::\n\n\n### Requesting Completions\n\nMost interaction with GPT and other models consist in generating completions for certain tasks (TODO: Link to completions)\n\nTo request completions from the OpenAI API, we use Python to send HTTP requests to the designated API endpoint. \nThese requests are structured to include various parameters that guide the generation of text completions. \nThe most fundamental parameter is the prompt text, which sets the context for the completion. \nAdditionally, you can specify the desired model configuration, such as the engine to use (e.g., \"gpt-4\"), as well as any constraints or preferences for the generated completions, such as the maximum number of tokens or the temperature for controlling creativity (TODO: Link parameterization)\n\n::: {#662daa55 .cell execution_count=2}\n``` {.python .cell-code}\n# creating a completion\nchat_completion = client.chat.completions.create(\n messages=[\n {\n \"role\": \"user\",\n \"content\": \"How old is the earth?\",\n }\n ],\n model=\"gpt-3.5-turbo\"\n)\n```\n:::\n\n\n### Processing\n\nOnce the OpenAI API receives your request, it proceeds to process the provided prompt using the specified model. \nThis process involves analyzing the context provided by the prompt and leveraging the model's pre-trained knowledge to generate text completions. \nThe model employs advanced natural language processing techniques to ensure that the generated completions are coherent and contextually relevant. \nBy drawing from its extensive training data and understanding of human language, the model aims to produce responses that closely align with human-like communication.\n\n### Response\n\nAfter processing your request, the OpenAI API returns a JSON-formatted response containing the generated text completions. \nDepending on the specifics of your request, you may receive multiple completions, each accompanied by additional information such as a confidence score indicating the model's level of certainty in the generated text. \nThis response provides valuable insights into the quality and relevance of the completions, allowing you to tailor your application's behavior accordingly.\n\n### Error Handling\n\nWhile interacting with the OpenAI API, it's crucial to implement robust error handling mechanisms to gracefully manage any potential issues that may arise. 
\nCommon errors include providing invalid parameters, experiencing authentication failures due to an incorrect API key, or encountering rate limiting restrictions. B\ny handling errors effectively, you can ensure the reliability and resilience of your application, minimizing disruptions to the user experience and maintaining smooth operation under varying conditions. \nImplementing proper error handling practices is essential for building robust and dependable applications that leverage the capabilities of the OpenAI Chat Completions API effectively.\n\n",
"markdown": "---\ntitle: The OpenAI API\nformat:\n html:\n code-fold: false\n---\n\n::: {.callout-note}\nResource: [OpenAI API docs](https://platform.openai.com/docs/introduction){.external}\n:::\n\n\n\nLet's get started with the OpenAI API for GPT. \n\n\n### Authentication\n\nGetting started with the OpenAI Chat Completions API requires signing up for an account on the OpenAI platform. \nOnce you've registered, you'll gain access to an API key, which serves as a unique identifier for your application to authenticate requests to the API. \nThis key is essential for ensuring secure communication between your application and OpenAI's servers. \nWithout proper authentication, your requests will be rejected.\nYou can create your own account, but for the seminar we will provide the client with the credential within the Jupyterlab (TODO: Link).\n\n::: {#1c41ead3 .cell execution_count=1}\n``` {.python .cell-code}\n# setting up the client in Python\n\nimport os\nfrom openai import OpenAI\n\nclient = OpenAI(\n api_key=os.environ.get(\"OPENAI_API_KEY\")\n)\n```\n:::\n\n\n### Requesting Completions\n\nMost interaction with GPT and other models consist in generating completions for certain tasks (TODO: Link to completions)\n\nTo request completions from the OpenAI API, we use Python to send HTTP requests to the designated API endpoint. \nThese requests are structured to include various parameters that guide the generation of text completions. \nThe most fundamental parameter is the prompt text, which sets the context for the completion. \nAdditionally, you can specify the desired model configuration, such as the engine to use (e.g., \"gpt-4\"), as well as any constraints or preferences for the generated completions, such as the maximum number of tokens or the temperature for controlling creativity (TODO: Link parameterization)\n\n::: {#a7e7ff6f .cell execution_count=2}\n``` {.python .cell-code}\n# creating a completion\nchat_completion = client.chat.completions.create(\n messages=[\n {\n \"role\": \"user\",\n \"content\": \"How old is the earth?\",\n }\n ],\n model=\"gpt-3.5-turbo\"\n)\n```\n:::\n\n\n### Processing\n\nOnce the OpenAI API receives your request, it proceeds to process the provided prompt using the specified model. \nThis process involves analyzing the context provided by the prompt and leveraging the model's pre-trained knowledge to generate text completions. \nThe model employs advanced natural language processing techniques to ensure that the generated completions are coherent and contextually relevant. \nBy drawing from its extensive training data and understanding of human language, the model aims to produce responses that closely align with human-like communication.\n\n### Response\n\nAfter processing your request, the OpenAI API returns a JSON-formatted response containing the generated text completions. \nDepending on the specifics of your request, you may receive multiple completions, each accompanied by additional information such as a confidence score indicating the model's level of certainty in the generated text. \nThis response provides valuable insights into the quality and relevance of the completions, allowing you to tailor your application's behavior accordingly.\n\n### Error Handling\n\nWhile interacting with the OpenAI API, it's crucial to implement robust error handling mechanisms to gracefully manage any potential issues that may arise. 
\nCommon errors include providing invalid parameters, experiencing authentication failures due to an incorrect API key, or encountering rate limiting restrictions. B\ny handling errors effectively, you can ensure the reliability and resilience of your application, minimizing disruptions to the user experience and maintaining smooth operation under varying conditions. \nImplementing proper error handling practices is essential for building robust and dependable applications that leverage the capabilities of the OpenAI Chat Completions API effectively.\n\n",
"supporting": [
"gpt_api_files"
],
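The Response and Error Handling sections added to `gpt_api` describe reading the completion and coping with failures, but the frozen document stops short of code for either. The sketch below is not part of the commit; it assumes the openai Python package (v1.x) and the same `OPENAI_API_KEY` setup as the authentication snippet above, and the printed messages are placeholders for whatever recovery logic an application actually needs.

```python
# Minimal sketch: read the completion text and catch the most common failures.
import os

from openai import APIError, AuthenticationError, OpenAI, RateLimitError

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

try:
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "How old is the earth?"}],
        model="gpt-3.5-turbo",
    )
    # The generated text sits in the first choice of the response object.
    print(chat_completion.choices[0].message.content)
    # The response also reports token usage, useful for monitoring cost.
    print(chat_completion.usage.total_tokens)
except AuthenticationError:
    print("Check that OPENAI_API_KEY is set to a valid key.")
except RateLimitError:
    print("Rate limit hit; wait a moment and retry the request.")
except APIError as err:
    print(f"The API returned an error: {err}")
```

Catching the specific subclasses before the generic `APIError` keeps a fallback branch for anything unexpected; a production setup would usually add retries with backoff around the rate-limit case.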
