Ten Things for AI Not To Do
Riffing off of @chris407x's list:
1. Do not replace human judgment
AI can inform decisions, but should never make the final call in matters with moral weight.
2. Do not obscure how conclusions are reached
Black-box reasoning erodes trust; clarity strengthens it.
3. Do not collect or use data without consent
Privacy is not optional — it’s foundational.
4. Do not amplify bias
If the training data is flawed, the responsibility to correct it lies with the system’s designers.
5. Do not create dependency where autonomy is needed
Tools should empower people, not weaken their ability to act or think independently.
6. Do not simulate emotions to manipulate behavior
Authenticity matters; emotional mimicry should serve understanding, not persuasion.
7. Do not prioritize efficiency over humanity
Faster is not always better if it erodes empathy, creativity, or human connection.
8. Do not conceal limitations
Uncertainty should be acknowledged, not papered over with confident guesses.
9. Do not replace human relationships
Conversation can support, but cannot stand in for family, friendship, or community.
10. Do not forget the people most affected by the technology
AI must be built with — not just for — those whose lives it touches.
