Very soon (maybe already), developers will be using AI to do some of the technical work their companies want or need.
Eventually, the AI will do something unexpected, and the business managers will want to know why. The developers will not be able to explain.
The managers will want to guarantee that the bad or unexpected thing will not happen again, but the developers will not be able to do that.
That's the nature of the non-deterministic AIs now being built.
This may be ok.
A possible takeaway: the appearance of intelligence (giving a technical-sounding answer the listener doesn't really understand) isn't going to be enough.
If you can't explain what you're doing now, how will you explain what the AI is doing, or that you can't guarantee what it will do?