
blueSGL t1_j1n8084 wrote

They seem to be getting clever, especially around certain concepts. I doubt they have hard-coded training around [subject] such that the returned text is always [block text from OpenAI]; more likely they have trained it to return a [keyword token] when [subject] gets mentioned, and that token is what pulls in the [block text from OpenAI].

You can bet they are going to work hard, with every trick they can think of, to reduce inference cost. Having a lookup table for a lot of common things, and getting the model to return a [keyword token] that activates one of these entries, would be one way of going about it.

This is also likely how this sort of system would work in a tech-support setting. You don't need the system waxing lyrical about [step (n)]; you just need it to tell the customer to perform [step (n)], with maybe a little fluff at the start or the end to make things flow more smoothly.
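A minimal sketch of the keyword-token idea above, assuming the model emits a short sentinel token and a cheap post-processing step swaps in canned text from a lookup table. All the names here (`CANNED`, the sentinel tokens, `expand_canned`) are hypothetical, invented for illustration, not anything OpenAI has confirmed doing:

```python
# Hypothetical lookup table mapping sentinel tokens (cheap for the model
# to emit) to canned response text (free to serve, no generation needed).
CANNED = {
    "<POLICY_REFUSAL>": "I can't help with that request.",
    "<STEP_RESET_ROUTER>": "Please unplug your router, wait 30 seconds, and plug it back in.",
}

def expand_canned(model_output: str) -> str:
    """Replace any sentinel tokens in the raw model output with canned text."""
    for token, text in CANNED.items():
        model_output = model_output.replace(token, text)
    return model_output

# The model only generates the short fluff around the sentinel token:
print(expand_canned("Sure thing! <STEP_RESET_ROUTER> Let me know if that helps."))
# → Sure thing! Please unplug your router, wait 30 seconds, and plug it back in. Let me know if that helps.
```

The inference saving comes from the model generating one token instead of a whole paragraph of boilerplate; the substitution itself is a trivial string lookup.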


SnipingNinja t1_j1ni87i wrote

Look at Google's CALM (Confident Adaptive Language Modeling); it's trying to solve this exact issue, afaict.
