Re: $500 Billion
Posted: Thu Jan 23, 2025 4:03 pm
I expect in the next year or so to see large-scale use of LLM AIs to commit crimes.
It seems plausible to have a system set up where:
1. AIs steal people's identities and set up bank accounts online.
2. AIs then scam other people out of money and put it in the bank accounts set up in #1.
3. AIs then recruit people to commit other crimes and pay them out of the bank accounts funded in steps #1 and #2.
So hypothetically you could have a literal Murder, Inc. which could assassinate people and you'd never be able to figure out who was behind it. It is the perfect cutout.
It would be bad enough if the enormous flood of online scams we live with day in and day out were suddenly automated at a much larger scale. But the potential for chaos and evil here is so enormous that it makes my skin crawl.
And you can't fix it. There are many LLMs capable of doing the above. All of them have "safety" features intended to prevent this, and all of those "safety" features can be overridden with very little effort. There are now capable LLMs that you can download for free and run on your laptop. They are probably capable of doing what I am describing. If they aren't already, they'll probably be capable in a few months.
In the meantime, both Ukraine and Russia are experimenting with AI-powered autonomous weapons. Taiwan arguably has enormous incentive to do the same and far more technical capability in that area than either Ukraine or Russia. What could possibly go wrong here?
I don't see robots rebelling against us and killing humanity off in the near future. But I do expect that bad people will instruct robots to do bad things to other people and get away with it.