
trnka t1_j4r661s wrote

Think about it more like autocomplete. Given enough input to complete from, it can finish thoughts coherently enough to fool some people, but it's often wrong on very technical facts.

It's really about how you make use of it. In scientific work, you could present your idea and ask for its pros and cons, or ask it to write a story about how the idea might fail horribly. That can be useful at times. It can also explain basic ideas from other fields.

It's kinda like posing a question to Reddit, except that ChatGPT generally isn't mean.

There are other tools, like Elicit or Consensus, that use LLMs more for literature review, which is probably more helpful.
