Conversation

(Force-pushed from 2fb430a to 228975a.)
gvanrossum
left a comment
Here's a very nitpicky review of the basic Python docs (sorry :-).
> ```python
> import schema as sentiment
> from typechat import Failure, TypeChatJsonTranslator, TypeChatValidator, create_language_model, process_requests
>
>
> async def main():
> ```
>
> `complete` is just a function that takes a `string` and eventually returns a `string` if all goes well.
>
> For convenience, TypeChat provides two functions out of the box to connect to the OpenAI API and Azure's OpenAI Services.
> You can call these directly.
Honestly that's all you need to understand the example; the earlier part of this section (from line 44 on) is advanced stuff that I'd move much further down (maybe with a link from here).
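To make the `complete` contract concrete for readers of this thread: it is small enough to sketch without any SDK. The following is a hypothetical stand-in, not TypeChat's actual implementation; any async `str -> str` callable would satisfy the same shape.

```python
import asyncio


async def complete(prompt: str) -> str:
    # Toy stand-in for a real LLM call: return a canned JSON reply.
    # A real implementation would send `prompt` to OpenAI/Azure here.
    return '{"sentiment": "positive"}'


async def main() -> None:
    reply = await complete("The movie was great!")
    print(reply)


if __name__ == "__main__":
    asyncio.run(main())
```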
> ```python
> from typechat import Failure, TypeChatJsonTranslator, TypeChatValidator, create_language_model, process_requests
>
>
> async def main():
>     env_vals = dotenv_values()
> ```
This requires a supporting import, a hint on how to install it (there are many modules with dotenv in their name on PyPI, but what we need is `pip install python-dotenv`), and a brief explanation of which keys `create_language_model()` looks for. I stumbled quite a bit over this. :-(
I realize that you discuss this further down -- I wonder if there's a way to present things so that it's easier to just read it from top to bottom.
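For anyone else who stumbled here: `dotenv_values()` (from `python-dotenv`) reads a `.env` file and returns a plain `dict`. A rough stdlib-only approximation of what it returns for simple `KEY=VALUE` files, just to illustrate the shape of the data being passed around (this is not the library's code):

```python
from io import StringIO


def parse_env(stream) -> dict[str, str]:
    """Rough sketch of what dotenv_values() returns for simple KEY=VALUE lines."""
    vals: dict[str, str] = {}
    for line in stream:
        line = line.strip()
        # Skip blanks, comments, and malformed lines.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        vals[key.strip()] = value.strip().strip('"').strip("'")
    return vals


# Example .env contents (placeholder key, not a real secret):
env_vals = parse_env(StringIO('OPENAI_API_KEY="sk-placeholder"\nOPENAI_MODEL=gpt-4\n'))
print(env_vals)
```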
> With `create_language_model`, you can populate your environment variables and pass them in.
> Based on whether `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY` is set, you'll get a model of the appropriate type.
It's honestly a bit confusing to call these "environment variables" since they are read from a file, not stored in the OS- (or at least shell-) managed environment variables. Unless the default (vals=None) actually reads os.environ?
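The selection logic being described can be sketched roughly like this; the real `create_language_model()` may consult more keys (endpoints, model names), so treat this only as an illustration of the OpenAI-vs-Azure branch:

```python
def pick_model_kind(vals: dict[str, str]) -> str:
    """Sketch of how a loader might branch on which API key is present.

    Hypothetical: the real create_language_model() likely checks
    additional keys (endpoints, deployment names) as well.
    """
    if "OPENAI_API_KEY" in vals:
        return "openai"
    if "AZURE_OPENAI_API_KEY" in vals:
        return "azure"
    raise ValueError("Missing OPENAI_API_KEY or AZURE_OPENAI_API_KEY")


print(pick_model_kind({"OPENAI_API_KEY": "sk-test"}))
```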
> With `create_language_model`, you can populate your environment variables and pass them in.
> Based on whether `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY` is set, you'll get a model of the appropriate type.
>
> The `TypeChatLanguageModel` returned by these functions has a few attributes you might find useful:
Suggested change:

```diff
- The `TypeChatLanguageModel` returned by these functions has a few attributes you might find useful:
+ The `TypeChatLanguageModel` returned by these functions has a few writable attributes you might find useful:
```
> `process_requests` takes 3 things.
> First, there's the prompt prefix - this is what a user will see before their own text in interactive scenarios.
> You can make this playful.
> We like to use emoji here. 😄
Why? I still haven't figured out how to type emoji on a keyboard -- I only know how to do it on my phone. :-(
> We'll come back to this.
> ## Creating the Prompt
"Prompt" is an ambiguous term here. Does it refer to the user input prompt (only used when file_path is None) or the prompt for the LLM?
> ```python
>     file_path = sys.argv[1] if len(sys.argv) == 2 else None
>     await process_requests("😀> ", file_path, request_handler)
> ```
In Python it's more common to name such a variable or argument `filename` or `file`.
> We're calling the `translate` method on each string and getting a response.
> If something goes wrong, TypeChat will retry requests up to a maximum specified by `retry_max_attempts` on our `model`.
This is a feature of the Model class, right? May be helpful to mention that.
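A sketch of the retry behavior as described, with a stand-in model class; the attribute name `retry_max_attempts` comes from the docs above, while `FlakyModel` and `complete_with_retry` are invented here purely to illustrate the mechanism:

```python
import asyncio


class FlakyModel:
    """Stand-in for a language model whose complete() fails a few times first."""

    retry_max_attempts = 3

    def __init__(self, failures: int):
        self._failures = failures
        self.calls = 0

    async def complete(self, prompt: str) -> str:
        self.calls += 1
        if self.calls <= self._failures:
            raise RuntimeError("transient service error")
        return '{"ok": true}'


async def complete_with_retry(model: FlakyModel, prompt: str) -> str:
    """Retry up to model.retry_max_attempts times before giving up."""
    last_error: Exception | None = None
    for _ in range(model.retry_max_attempts):
        try:
            return await model.complete(prompt)
        except RuntimeError as error:
            last_error = error  # A real model would also sleep/back off here.
    assert last_error is not None
    raise last_error


model = FlakyModel(failures=2)
print(asyncio.run(complete_with_retry(model, "classify this")))
```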
> A `TypeChatJsonTranslator` brings all these concepts together.
> A translator takes a language model, a validator, and our expected type, and provides a way to translate some user input into objects following our schema.
> To do so, it crafts a prompt based on the schema, reaches out to the model, parses out JSON data, and attempts validation.
> Optionally, it will craft repair prompts and retry if validation fails.
I would love to read more about the repair process. I expect that in practice one might have to tweak this.