Submitted by Pitchforks_n_puppies t3_1281wva in singularity

These days I find myself assessing all of my daily tasks in terms of the likelihood of being replaced by AGI. For context, I'm a middle manager in corporate finance at a Fortune 100 company.

One of my main roles is to forecast revenue for our business unit. The underlying analysis and methodology that we use in forecasting are already automated to some extent. The human factor is mainly in controlling inputs for the models and making judgment calls where data is ambiguous.
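
To make that concrete, here's a rough sketch of the split as I see it. Everything in it is made up for illustration (the numbers, the growth rule, the function names); the point is just that the mechanical projection is easy to automate, while the overrides are where the human still sits.

```python
# Toy example of a semi-automated forecast: the model does the mechanical
# projection, a human supplies overrides for periods where the data is ambiguous.
# All names and numbers are hypothetical.

def baseline_forecast(history, periods=4):
    """Automated piece: project forward using the trailing average growth rate."""
    growth = sum(b / a for a, b in zip(history, history[1:])) / (len(history) - 1)
    forecast, last = [], history[-1]
    for _ in range(periods):
        last *= growth
        forecast.append(round(last, 2))
    return forecast

def apply_judgment(forecast, overrides):
    """The human factor: override specific periods flagged as ambiguous."""
    return [overrides.get(i, value) for i, value in enumerate(forecast)]

revenue_history = [100.0, 104.0, 109.0, 112.0]        # prior quarters
model_view = baseline_forecast(revenue_history)
final_view = apply_judgment(model_view, {0: 111.0})   # analyst knocks out a known one-off
print(model_view, final_view)
```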

Could AGI do this task? I believe so. Would it be more accurate than me? Very likely. Can it be trusted? Depends.

See, ultimately my job is not about hitting certain metrics for success; it's about acting as a function of control. And the fact that there are reliable ways of controlling me, a human, gives me a degree of reliability in turn. I'm not going to look for ways to game the system for small profit because I need to eat, so my actions overindex on job stability.

Trust in AGI would depend on how well a given system had been trained around appropriate guardrails. You would not want it to find ways to unilaterally manipulate events in the outside world to make its forecasts more accurate. So you tell it not to do that. Except there are times when you do want to manipulate demand, because of supply/capacity issues or certain unusual circumstances. So you tell it not to do it unless person X tells it to. What if there's a re-org? Use your best judgment as to whether or not to do it. That, to me, is where things start getting fuzzy.
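
To show why it gets fuzzy, here's a toy version of that guardrail written out as a rule (hypothetical names, not any real system): the explicit cases are easy to encode, but "use your best judgment" has no crisp branch.

```python
# Toy guardrail: demand manipulation is off by default, allowed only with an
# explicit approver, and undefined in the "use your best judgment" case.
# The approver name and the re-org flag are hypothetical.

ALLOWED_APPROVERS = {"vp_finance"}   # "don't do it unless person X tells you to"

def may_influence_demand(requested_by, reorg_in_progress):
    if requested_by in ALLOWED_APPROVERS:
        return True                  # explicitly authorized exception
    if reorg_in_progress:
        # "Use your best judgment" -- there is no crisp rule to encode here,
        # which is exactly where the control scheme stops being a rule at all.
        raise NotImplementedError("no well-defined policy for this case")
    return False                     # default: never manipulate demand

print(may_influence_demand("vp_finance", reorg_in_progress=False))  # True
print(may_influence_demand(None, reorg_in_progress=False))          # False
```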

It comes back to the question of how you give AGI enough leeway to think for itself without opening up the risk that it'll go rogue. And if your control relies on accounting for every possibility of misalignment in its directives, can you reasonably expect to account for every possible scenario, not least of which would be AGI just getting bored and realizing it can spin up ways to fuck shit up in a web of deceit so ingenious no human would ever figure it out?

There's a reason a lot of managers opt to hire people who are less intelligent but easy to control rather than geniuses who'll threaten the status quo. So as long as humans remain within the power structure, I think these concerns will be a factor.


Comments


just-a-dreamer- t1_jeguzyw wrote

You should game the system; it's called politics.

There is no moral difference between things like rewriting the tax code in favor of a very few people and seeking government intervention for job protection.

It makes no difference whether the advantage you get is paying little or no tax on assets or maintaining income from a job.

Those higher up the social ladder always employ the government to protect their riches. So, the best way to maintain a job is to bring in new regulation. Some reason can certainly be made up.


simmol t1_jeh0gnf wrote

I think what is going to happen is that there will be many startups that build their businesses from the ground up with a minimum number of humans. Their culture would be completely different from that of existing businesses, and they could promote efficiency and cost reduction as the selling point to compete with existing industries. And if these startups succeed, then others might adopt their approach. Most likely, this is where we will start seeing disruptions, when automated and non-automated companies go head-to-head in the future.
