Most national legal systems place safety responsibility on
the developer/provider of a new product or service.
For technical systems, Engineering Safety Management
(ESM) is the activity of assessing any risk of harm
associated with a system, to ensure that it is acceptable.
Assessments total up the risk of harm by combining
severity and probability: a potential event that causes,
say, multiple fatalities can only be acceptable if it is
shown to be of very low probability; conversely, an event
that causes only minor injuries might be acceptable at a
higher probability.
That gives rise to a conceptual "tariff" table, in which,
say, 1 fatality is equivalent to 10 serious injuries, which
is equivalent to 100 minor injuries, and so on; the lowest
level of harm is usually taken to be a minor injury.
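As a rough sketch of how such a tariff collapses different severities onto one common scale (the 1 : 10 : 100 exchange rates here are the illustrative ones above, not figures from any published standard):

```python
# Illustrative tariff: harm expressed in "fatality equivalents".
# The 1 : 10 : 100 ratios follow the hypothetical tariff in the text;
# real ESM standards define their own weightings.
TARIFF = {
    "fatality": 1.0,
    "serious_injury": 1.0 / 10,
    "minor_injury": 1.0 / 100,
}

def fatality_equivalents(expected_events: dict[str, float]) -> float:
    """Sum expected harms (events per year) on the common scale."""
    return sum(TARIFF[kind] * rate for kind, rate in expected_events.items())

# One expected serious injury and ten minor injuries per year:
print(fatality_equivalents({"serious_injury": 1, "minor_injury": 10}))
# 0.1 + 0.1 = 0.2 fatality equivalents per year
```

On this scale, very different outcomes become directly comparable, which is what makes the downward extension below possible.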
I'm suggesting an extension of this downward to include
"microharms" - the annoyance and irritation suffered by
users of a poorly designed system/product - as a
quantifiable measure of harm.
Suppose 1 microharm = 1/1000 of a fatality...
Take, for instance, the deletion of the headphone jack on
later iPhone models: it causes minor irritation to perhaps
millions of users. The harm may be only 0.01 microharms per
user, but it affects a very large number of people.
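Under the hypothetical exchange rate above (1 microharm = 1/1000 of a fatality, my stipulation rather than any standard figure), the aggregate effect of a tiny per-user harm across a large user base can be sketched as:

```python
def aggregate_harm(per_user_microharms: float, users: int) -> float:
    """Total harm in fatality equivalents across the whole user base,
    using the hypothetical rate of 1 microharm = 1/1000 of a fatality."""
    return per_user_microharms * users / 1000

# 0.01 microharms per user across 5 million users (illustrative numbers):
print(aggregate_harm(0.01, 5_000_000))  # 50.0 fatality equivalents
```

The point of the arithmetic is that multiplying a negligible individual harm by millions of affected users yields a total that a conventional safety assessment would take very seriously.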
I'm suggesting that apparently trivial harm affecting very
large numbers of users has a real net impact on mental
health and wellbeing, and should be assessed in a similar
way to (or in an extended framework alongside) the more
severe risks covered by system/product safety assessment.