AFAIK Sudowrite is using an old davinci finetune. If that's still the case, then the service is kind of a rip-off. I'd look into llama 2 70b instead. Not only can you run it locally, it's also clearly smarter at text-related tasks. From my limited testing I'd even go as far as to say it's better at such tasks than Turbo, but mostly because OpenAI cheated to extend Turbo's context (which they haven't admitted, but it's easy to test), which makes it not "see" the entire context properly anymore. (Look into RoPE scaling for llama; they did something similar to Turbo, which raises perplexity.)
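The RoPE trick mentioned above is worth understanding. A minimal sketch of linear RoPE scaling (the common community approach for llama; whether OpenAI did exactly this for Turbo is speculation): positions are divided by a scale factor so a longer context maps into the position range the model was trained on, at the cost of some perplexity.

```python
def rope_angles(pos, dim, base=10000.0, scale=1.0):
    """Rotary position embedding angles for one token position.
    scale > 1 compresses positions so e.g. an 8k context reuses the
    angle range the model learned for 4k (linear RoPE scaling)."""
    pos = pos / scale
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With scale=2, position 4096 produces the same angles as
# position 2048 does without scaling:
a = rope_angles(4096, 128, scale=2.0)
b = rope_angles(2048, 128, scale=1.0)
assert a == b
```

The model was never trained on these "squeezed" positions, which is exactly why perplexity goes up unless you finetune afterwards.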
Just like Stable Diffusion, with creative writing on language models it's pretty much all in the prompt. Ask it to copy the style of an author you like (e.g. use William Gibson and you'll see a lot of references to rain and neon lights). "Preseed" the context with text passages. If it's an instruct model, iterate your way to the solution by describing characters, locations, and motivations until you have the LM tie it all together in a final scene. There are a lot of scientific papers out there that pretty much prove the output quality and reasoning get better the more the model gets to "think" about the problem in its context. It's all noise in the end, but the more guidance you deliver, the more accurate the "predictions" will be. LMs hallucinate by their very nature, but for creative endeavors that's a good thing.
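The iterative preseeding approach above can be sketched as a plain prompt template. Everything here (the character, setting, and motivation strings) is made up for illustration; the point is just that you front-load the context with concrete detail before asking for the scene.

```python
# Illustrative sketch: build up context first, then ask for the scene.
character = "Case: a burned-out hacker, cynical, deep in debt"
location = "Chiba City at night, rain on neon, crowded arcades"
motivation = "he needs one last job to repair his damaged nervous system"

prompt = (
    "Write in the style of William Gibson.\n\n"
    f"Character: {character}\n"
    f"Setting: {location}\n"
    f"Motivation: {motivation}\n\n"
    "Write the opening scene that ties these together."
)
print(prompt)
```

With a chat/instruct model you'd refine each piece over several turns instead of one shot, but the principle is the same: the richer the context, the better constrained the "predictions".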
Just like SD, the entire enthusiast field is full of coomers and other dregs of society, so it's hard to find serious discussion about this topic that doesn't revolve around sexualizing anime girls - but it's out there. I've seen some impressive writing even from 70b that I wouldn't have immediately clocked as machine-generated. It does take talent to steer it, though.
Facebook is apparently planning to kneecap the competition by releasing a GPT-4 killer for free (llama 3 is already WIP) to level the playing field, and as a result we all win. Meta's chief AI scientist is an older French computer scientist who is outspokenly anti-censorship and anti-AI-doom; that's probably why the llama foundation models aren't censored.