
basilgello t1_jee2lyt wrote

Just like Generative Adversarial Networks operate: there is a creator layer and a critic layer that hope to reach a consensus at some point. As for "how does it know where to click": there is a large body of statistics collected from humans (see page 10, paragraph 4.2.3). It is a specially trained model fine-tuned on action task demonstrations.
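To make the creator/critic idea concrete, here is a minimal Python sketch of such a consensus loop. Everything here (`propose_action`, `score_action`, the `model`/`critic` objects) is a hypothetical stand-in, not anything from the paper:

```python
def propose_action(model, page_state, goal):
    """Creator: suggest a candidate UI action, e.g. 'click #submit'.
    `model.generate` is a hypothetical LLM call."""
    return model.generate(f"Goal: {goal}\nPage: {page_state}\nNext action:")

def score_action(critic, page_state, goal, action):
    """Critic: estimate how likely the action advances the goal (0..1).
    `critic.score` is likewise a hypothetical call."""
    return critic.score(f"Goal: {goal}\nPage: {page_state}\nAction: {action}")

def agree_on_action(model, critic, page_state, goal,
                    threshold=0.8, max_rounds=5):
    """Loop until the creator's proposal satisfies the critic, or give up."""
    for _ in range(max_rounds):
        action = propose_action(model, page_state, goal)
        if score_action(critic, page_state, goal, action) >= threshold:
            return action  # consensus reached
    return None  # no agreement within the round budget
```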

6

Relevant_Ad7319 t1_jee3ah6 wrote

Task demonstrations in the form of screen recordings? It says their approach only needs a few examples, but ChatGPT doesn't even work with video as input, right?

2

basilgello t1_jeecmqt wrote

Correct, GPT-4 is not meant to accept video as input. And the demonstrations are probably not screencasts but step-by-step explained prompts. For example, look at page 18, table 6: it is a LangChain-like prompt. First they define the actions and tools, then the language model produces output which is effectively a high-level API call in some form. Using RPA as the API, you get a mouse clicker driven by the HTML context. Another thing: the HTML pages are crafted manually, and the system still does not generalize to unseen pages.
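Roughly, the table-6-style flow looks like this sketch: a prompt declaring the available actions, a model completion naming one of them, and an RPA layer executing it against the HTML. The names `llm_complete`, `rpa_click`, and `rpa_type` are placeholders I made up, not APIs from the paper:

```python
# Prompt that declares the action vocabulary up front, LangChain-style.
PROMPT_TEMPLATE = """You can use these actions:
- click(element_id): click an element on the page
- type(element_id, text): type text into an input field

Page HTML:
{html}

Task: {task}
Next action:"""

def next_action(llm_complete, html, task):
    """Ask the model for one high-level action, e.g. 'click(search_btn)'."""
    return llm_complete(PROMPT_TEMPLATE.format(html=html, task=task)).strip()

def execute(action, rpa_click, rpa_type):
    """Parse the model's output and dispatch it to the RPA backend."""
    name, _, raw_args = action.partition("(")
    args = [a.strip().strip("'\"") for a in raw_args.rstrip(")").split(",", 1)]
    if name == "click":
        rpa_click(args[0])
    elif name == "type":
        rpa_type(args[0], args[1])
    else:
        raise ValueError(f"Unknown action from model: {action!r}")
```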

4

SgathTriallair t1_jeertpn wrote

Given that it can accept images, they may be able to shoehorn video in. The next version we use as a base will need multimodality equal to humans' (i.e. all of our senses) in order to replicate all of what we do.
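One plausible way to do that shoehorning: sample a handful of frames and feed them to the model as separate images. A quick sketch with OpenCV (the frame extraction is real `cv2` API; what you do with the frames afterwards depends on whatever image endpoint you have):

```python
import cv2  # pip install opencv-python

def sample_frames(video_path, num_frames=8):
    """Grab evenly spaced frames from a video file as BGR arrays."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(num_frames):
        # Seek to the i-th evenly spaced position and decode one frame.
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // num_frames)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```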

1