Commit 4d976cf

Bump v0.8.6
kddubey committed Nov 22, 2023
1 parent 3a3051f commit 4d976cf
Showing 3 changed files with 11 additions and 10 deletions.
4 changes: 2 additions & 2 deletions docs/source/motivation.rst
@@ -85,8 +85,8 @@
 model. Common to all of these solutions is the need to spend developer time and
 sacrifice simplicity.

 The fact is: text generation can be endlessly accommodated, but you'll still have to
-work around its arbitrary outputs. Fundamentally, sampling is not a clean solution to a
-classification problem.
+work around its arbitrary outputs. Fundamentally, unconstrained sampling is not a clean
+solution to a classification problem.


 Solution
15 changes: 8 additions & 7 deletions docs/source/other_llm_structuring_tools.rst
@@ -3,16 +3,17 @@
 Other LLM structuring tools

 There are `other LLM structuring tools
 <https://www.reddit.com/r/LocalLLaMA/comments/17a4zlf/reliable_ways_to_get_structured_output_from_llms/>`_
-which support "just pick one" functionality. You should strongly consider using them.
-`guidance <https://github.com/guidance-ai/guidance>`_, for example, provides a
-``select`` function which almost always returns a valid choice.
+which support "just pick one" functionality. You should strongly consider using them, as
+they scale independently of the number of choices. `guidance
+<https://github.com/guidance-ai/guidance>`_, for example, provides a ``select`` function
+which almost always returns a valid choice.

 One potential weakness of algorithms like this is that they don't always look at the
 entire choice: they exit early when the generated choice becomes unambiguous. This
-property makes the algorithm highly scalable wrt the number of choices and tokens. But
-I'm curious to see if there are tasks where looking at all of the choice's tokens—like
-CAPPr does—squeezes more out. Taking the tiny task from the previous page (where CAPPr
-succeeds):
+property makes the algorithm highly scalable wrt the number of tokens in each choice.
+But I'm curious to see if there are tasks where looking at all of the choice's
+tokens—like CAPPr does—squeezes more out. Taking the tiny task from the previous page
+(where CAPPr succeeds):

 .. code:: python
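To make the early-exit vs. full-scoring contrast concrete, here is a minimal toy sketch. It is not CAPPr's or guidance's actual code: ``TOY_LM``, ``early_exit_pick``, and ``full_score_pick`` are hypothetical names, the "model" is a hard-coded table of next-token probabilities, and CAPPr's real scoring additionally conditions on a prompt. The sketch only illustrates how committing after the first unambiguous token can disagree with scoring every token of every choice.

```python
import math

# Hypothetical stand-in for a language model: maps a token prefix to a
# next-token probability distribution.
TOY_LM = {
    (): {"dog": 0.40, "doll": 0.35, "cat": 0.25},
    ("dog",): {"house": 0.90, "food": 0.10},
    ("doll",): {"house": 0.50, "food": 0.50},
}


def early_exit_pick(choices):
    """Constrained-decoding style (simplified): greedily generate tokens
    that keep some choice alive, and stop as soon as only one choice
    remains consistent with what's been generated."""
    alive = list(choices)
    prefix = ()
    depth = 0
    while len(alive) > 1:
        dist = TOY_LM[prefix]
        # Consider only tokens that keep at least one choice alive.
        valid = {c[depth] for c in alive if len(c) > depth}
        token = max(valid, key=lambda t: dist.get(t, 0.0))
        alive = [c for c in alive if len(c) > depth and c[depth] == token]
        prefix += (token,)
        depth += 1
    return alive[0]


def full_score_pick(choices):
    """Full-choice scoring: score every choice by the average
    log-probability of all of its tokens, then pick the best."""
    def avg_logprob(choice):
        prefix, total = (), 0.0
        for token in choice:
            total += math.log(TOY_LM[prefix][token])
            prefix += (token,)
        return total / len(choice)

    return max(choices, key=avg_logprob)


choices = [("dog", "food"), ("doll", "house"), ("cat",)]
print(early_exit_pick(choices))  # ('dog', 'food'): commits after "dog" wins step 1
print(full_score_pick(choices))  # ('doll', 'house'): "house" is likely after "doll"
```

Here the early-exit picker commits to ``("dog", "food")`` because "dog" narrowly wins the first step, even though "food" is unlikely after it; scoring every token instead favors ``("doll", "house")``. Averaging (rather than summing) log-probabilities keeps longer choices from being penalized just for having more tokens, which is in the spirit of the length discount CAPPr's docs discuss.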
2 changes: 1 addition & 1 deletion src/cappr/__init__.py
@@ -3,7 +3,7 @@
 https://cappr.readthedocs.io/
 """
-__version__ = "0.8.5"
+__version__ = "0.8.6"

 from . import utils
 from ._example import Example
