
Activity Levels and Automation

are a multiple of the manual activity level and a constant per-unit-of-manual-activity care effort, i.e., c(z, x) = (1 − k)zx.

The social problem with care and activity levels can be described as follows:

\max_{x,z,j} S = w(Z) + b(J) - (1-k)zx - a - qp(f)L \tag{3.5.1}

Note that when a = 0, the socially optimal problem in (3.5.1) reduces to its standard unilateral-case formulation. The socially optimal care level remains defined as in (3.4.2). The socially optimal activity levels z̄ and j̄ are defined, respectively, as solutions of the following first-order conditions:

w_Z = x + pL \tag{3.5.2}

b_J = pL \tag{3.5.3}

which have the standard interpretation: the socially optimal activity level is set at the point at which the benefit of a marginal increase in the activity level (the LHS of (3.5.2) and (3.5.3)) equals the corresponding marginal cost (the RHS of (3.5.2) and (3.5.3)). For comparable marginal benefits of the two activities, the automated-activity level is optimally greater than the manual-activity level, since the latter implies care costs per unit of activity.
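The first-order conditions can be sketched as follows, under two assumptions that are not made explicit in this excerpt: total activity is q = z + j, and the marginal care cost of a unit of manual activity enters as x, matching the form of (3.5.2):

```latex
% Illustrative derivation sketch; q = z + j and the marginal manual-care
% cost x are assumptions inferred from the surrounding first-order conditions.
\frac{\partial S}{\partial z} = w_Z - x - pL = 0
  \;\Longrightarrow\; w_Z = x + pL \tag{3.5.2}
\qquad
\frac{\partial S}{\partial j} = b_J - pL = 0
  \;\Longrightarrow\; b_J = pL \tag{3.5.3}
```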

Negligence standards are conventionally defined solely in terms of care because of information costs: the amount and/or frequency of an activity is usually not easily observable by courts. This claim should be revised when considering automation technologies. By automatically tracing the user's activity, these technologies can convey information to courts and juries about the user's activity level. The automated part of the activity becomes observable, thus raising the question of whether it should be included in the standard of negligence. The answer lies in the trade-off between creating incentives for socially optimal automated-activity levels and creating incentives to adopt automated technologies. In order to show this trade-off, we have to introduce the private optimization problem.

If the standard of negligence is defined as in (3.4.2) only in terms of care, the privately optimal problem becomes:

\min_{x,z,j} T =
\begin{cases}
w(Z) + b(J) - (1-k)zx - a - qp(f)L & \text{if } x < \bar{x} \\
w(Z) + b(J) - (1-k)zx - a & \text{if } x \ge \bar{x}
\end{cases}
\tag{3.5.4}

In this case, the injurer will invest in the due-care level x₁ = x̄, but will exercise excessive activity q₁ > q̄, as the standard model predicts. The privately optimal manual-activity level z₁ and automated-activity level j₁ are respectively given by:

w_Z = x \tag{3.5.5}

b_J = 0 \tag{3.5.6}

whose solutions are greater than the socially optimal levels in (3.5.2) and (3.5.3), i.e., z₁ > z̄ and j₁ > j̄.
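To illustrate, here is a minimal numerical sketch, not from the paper: the quadratic benefit functions w(z) = αz − z²/2 and b(j) = βj − j²/2 and all parameter values below are assumptions chosen only so the first-order conditions (3.5.2)–(3.5.3) and (3.5.5)–(3.5.6) solve in closed form.

```python
# Numerical sketch (illustrative assumptions throughout): quadratic benefits
# imply w_Z = alpha - z and b_J = beta - j, so each FOC solves linearly.
alpha, beta = 10.0, 8.0   # marginal benefits at zero activity (assumed)
x = 2.0                   # care cost per unit of manual activity (assumed)
pL = 3.0                  # expected marginal harm p*L (assumed)

# Socially optimal activity levels from (3.5.2)-(3.5.3): w_Z = x + pL, b_J = pL
z_bar = alpha - (x + pL)
j_bar = beta - pL

# Privately optimal levels under a care-only standard, (3.5.5)-(3.5.6):
# w_Z = x, b_J = 0 (the injurer ignores the marginal expected harm pL)
z_1 = alpha - x
j_1 = beta

print(z_1 > z_bar, j_1 > j_bar)  # prints "True True": both margins excessive
```

The gap on each margin equals the ignored marginal expected harm pL, which is why both private activity levels exceed their social counterparts.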

If it is possible and desirable to define the standard of negligence in terms of both the care level and the automated-activity level,³³ the privately optimal problem becomes:

\min_{x,z,j} T =
\begin{cases}
w(Z) + b(J) - (1-k)zx - a - qp(f)L & \text{if } x < \bar{x} \text{ or } j > \bar{j} \\
w(Z) + b(J) - (1-k)zx - a & \text{if } x \ge \bar{x} \text{ and } j \le \bar{j}
\end{cases}
\tag{3.5.7}

In this case, potential injurers have incentives to invest in the due-care level x₂ = x̄ and in an activity level q₂ which remains excessive with respect to the socially optimal level q̄, but is mitigated with respect to q₁, given the extended definition of the negligence standard.
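Continuing the earlier illustrative sketch (same assumed quadratic benefits and parameter values, none from the paper), under the extended standard (3.5.7) the injurer complies with the automated-activity ceiling j̄ to avoid liability, but still sets the manual margin privately:

```python
# Illustrative sketch: total activity q = z + j is an assumed decomposition.
alpha, beta, x, pL = 10.0, 8.0, 2.0, 3.0   # all assumed values

z_bar, j_bar = alpha - (x + pL), beta - pL  # social optima, (3.5.2)-(3.5.3)
z_1, j_1 = alpha - x, beta                  # care-only standard, (3.5.5)-(3.5.6)

q_bar = z_bar + j_bar   # socially optimal total activity
q_1 = z_1 + j_1         # care-only standard: excessive on both margins
q_2 = z_1 + j_bar       # extended standard: automated margin capped at j_bar

print(q_bar < q_2 < q_1)  # prints "True": excess activity mitigated, not gone
```

The residual excess q₂ − q̄ comes entirely from the unobservable manual margin, which the extended standard cannot reach.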

Proposition 3.5.1. A negligence standard tailored to the automated-activity level creates optimal incentives for automated-activity levels but dilutes the incentives to adopt and use automated devices.

Corollary 3.5.2. If the percentage of automated devices to be employed in the relevant activity is exogenously imposed, the negligence standard can be efficiently set in terms of both care and automated-activity levels. In particular, the due level of automated activity should increase with the automation level. If the percentage of automated devices to be employed is individually chosen by potential injurers, a negligence standard tailored to the automated-activity level potentially destroys the incentives to adopt automated technologies.

Proof. See Appendix 3.7.

³³ We assume that the due levels of care and automated activity are set at the socially optimal levels (3.4.2) and (3.5.3).

Let us suppose that the percentage of automated devices to be employed in the relevant activity is fixed (e.g., through mandatory equipment regulation) or that the automated technology is commonly and widely adopted (e.g., aircraft autopilot). Following our analysis, the users of these technologies might be held liable if their automated-activity level exceeds the socially optimal standard.

In the limiting case of fully automated activities, the privately optimal activity level q₂ approaches the socially optimal level q̄, thus narrowing the possibility of excessive activity levels. In this situation, the non-residual bearer of expected accident costs has optimal incentives not only for care but also for activity levels. This creates an exception to Shavell's (1987) activity-level theorem, which states that no negligence-based regime can induce optimal activity levels for the non-bearer of residual liability, i.e., the party that does not bear the expected accident costs in the non-negligence equilibrium. This is because the non-bearer of residual liability (e.g., the injurer under a negligence-based liability regime) wants only to avoid liability by demonstrating due care, whereas the bearer of residual liability (e.g., the victim under a negligence-based liability regime) wants to minimize expected harm. Therefore, only the bearer of residual liability will have incentives to exercise the optimal activity level. This theorem should be revisited in the presence of partially or totally automated activities. In this case, when it is possible and desirable to define the standard of negligence also in terms of automated-activity levels, the non-residual bearer of the loss (e.g., the injurer under the rule of negligence) also has incentives to mitigate excessive activity levels. The problem of the information costs related to activity levels might therefore be partially solved by mandating automated equipment for the expectedly riskiest parts of an activity, leaving unobservable human actions for the less risky parts.
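Under the assumed decomposition q = z + j used in the sketches above, the limiting argument can be made explicit: with the automated margin capped at j̄, the residual distortion comes entirely from the manual margin and vanishes as the activity becomes fully automated:

```latex
% Sketch under the assumed decomposition q = z + j.
q_2 - \bar{q} = (z_1 + \bar{j}) - (\bar{z} + \bar{j}) = z_1 - \bar{z}
\;\xrightarrow[\,z \to 0\,]{}\; 0
\quad\Longrightarrow\quad q_2 \to \bar{q}.
```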

Defining the standard of negligence also in terms of automated-activity levels might not be desirable when the automated technology is not widely adopted or mandated by regulation, that is, when individuals decide autonomously whether to adopt a new automated technology. In this case, introducing an upper bound on the frequency of automated activity under penalty of liability serves as a tax on automated-activity levels. As such, it might induce individuals to prefer conducting non-automated activities, which remain unconstrained by the negligence standard.

Therefore, the traceability of automated-activity levels should not implicitly lead courts to establish negligence also in terms of the observable activity level.

The decision to tailor negligence standards to automated-activity levels should result from the trade-off between two policy objectives: fostering the adoption of automated technologies versus creating incentives to mitigate excessive automated-activity levels. Especially relevant for new automated technologies, the first objective should lead courts to define the standard of negligence only in terms of care levels, even if a percentage of the activity is ex post observable. As automated devices become widely adopted or mandated by regulation, the standard of negligence could be optimally defined in terms of automated-activity levels in order to mitigate excessive levels of automated activities.