Original Research Article: Inertia and Decision Making
- Department of Economics, University of Cologne, Cologne, Germany
Decision inertia is the tendency to repeat previous choices independently of the outcome, which can give rise to perseveration in suboptimal choices. We investigate this tendency in probability-updating tasks. Study 1 shows that, whenever decision inertia conflicts with normatively optimal behavior (Bayesian updating), error rates are larger and decisions are slower. This is consistent with a dual-process view of decision inertia as an automatic process conflicting with a more rational, controlled one. We find evidence of decision inertia in both required and autonomous decisions, but the effect of inertia is clearer in the latter. Study 2 considers more complex decision situations where further conflict arises due to reinforcement processes. We find the same effects of decision inertia when reinforcement is aligned with Bayesian updating, but when the latter two processes conflict, the effects are limited to autonomous choices. Additionally, both studies show that the tendency to rely on decision inertia is positively associated with preference for consistency.
1. Introduction
As described in Newtonian physics, the term “inertia” refers to the fact that, in the absence of external resistance, a moving object will keep moving in the same direction. This word has also been used across multiple fields as a metaphor to describe related characteristics of human behavior. For example, in management and organization science, the expression “cognitive inertia” describes the phenomenon that managers might fail to reevaluate a situation even in the face of change ( Huff et al., 1992 ; Reger and Palmer, 1996 ; Hodgkinson, 1997 ; Tripsas and Gavetti, 2000 ). In medical studies, “therapeutic inertia” or “clinical inertia” describe the failure of health care providers to intensify therapy when treatment goals are unattained ( Phillips et al., 2001 ; Okonofua et al., 2006 ). In sociology, “social inertia” depicts the resistance to change or the (excess) stability of relationships in societies or social groups ( Bourdieu, 1985 ). In psychology, the “inertia effect” describes individuals' reluctance to reduce their confidence in a decision following disconfirming information ( Pitz and Reinhold, 1968 ). The concept of “psychological inertia” has been proposed to describe the tendency to maintain the status-quo ( Gal, 2006 ). Suri et al. (2013) speak of “patient inertia” to describe the phenomenon that many patients stick to inferior options or fail to initiate treatment even after the diagnosis of a medical problem.
Summing up, the concept of inertia has been used to describe many different phenomena related to a resistance to change. The existence of these phenomena has been linked to status-quo bias ( Samuelson and Zeckhauser, 1988 ; Ritov and Baron, 1992 ), described as the tendency to maintain the default, either by repeating a decision or by avoiding action. So far, however, our understanding of the processes underlying inertia in decision making is rather limited. In the present study, we aim to contribute to this understanding by focusing on a particular facet of inertia, which we term “decision inertia:” the tendency to repeat a previous choice, regardless of its outcome, in a subsequent decision. We investigate whether this tendency significantly influences active decision making and explore the psychological processes behind it using a belief-updating task.
The phenomenon we explore here is consistent with previous evidence from the decision-making literature. For instance, Pitz and Geller (1970) observed a tendency to repeat previous decisions even following disconfirming information. In a study on reinforcement in belief-updating tasks, which was not focused on inertia, Charness and Levin (2005) nevertheless observed a “taste for consistency,” corresponding to the phenomenon that people were prone to repeat their choices, no matter whether these choices led to success or failure. In a study on perceptual decision making, Akaishi et al. (2014) showed that choices tend to be repeated on subsequent trials, even on the basis of little sensory evidence. Erev and Haruvy (in press) review studies on decision making from experience where, for instance, participants repeatedly choose between a risky prospect and a safe option, and receive immediate feedback (e.g., Nevo and Erev, 2012 ). Erev and Haruvy (in press) conclude that there exists a strong tendency to simply repeat the most recent decision, which is even stronger than the tendency to react optimally to the most recent outcome. Furthermore, Zhang et al. (2014) showed that the tendency to repeat previous decisions exists even for unethical behavior. There might also be a relation to the extensive literature on choice-induced preference change, which shows that earlier decisions alter preferences, and hence might result in repeated choices (see also Ariely and Norton, 2008 ; Sharot et al., 2010 ; Alós-Ferrer et al., 2012) .
The influence of previous decisions on subsequent choices has also been investigated in reinforcement learning research. For instance, Lau and Glimcher (2005) studied trial-by-trial behavior of monkeys in a matching task in which the reward structure favored alternating between two choice options. They observed that choosing a particular alternative decreased the probability of choosing that alternative again on the next trial, but increased the likelihood of choosing it again some time in the future, regardless of reward history. Studies in which participants worked on probabilistic “bandit tasks” that favored sticking with successful options showed that participants were prone to repeat their choices, independently of any effects due to previous rewards (e.g., Schönberg et al., 2007 , Supplemental Results). Accordingly, reinforcement learning models now account for the influence of past choices on subsequent ones by including a model parameter of “perseveration,” capturing the tendency to repeat or avoid recently chosen actions (e.g., Schönberg et al., 2007 ; Gershman et al., 2009 ; Wimmer et al., 2012 ; for an introduction to model-based reinforcement learning, see Daw, 2012) . The inclusion of such a parameter leads to more accurate predictions in contrast to models that merely incorporate the effect of past reinforcers (see Lau and Glimcher, 2005 ).
To understand decision inertia, we consider a multiple-process framework ( Evans, 2008 ; Sanfey and Chang, 2008 ; Weber and Johnson, 2009 ; Alós-Ferrer and Strack, 2014 ), that is, we consider individual decisions as the result of the interaction of multiple decision processes. Specifically, we follow the assumptions of parallel-competitive structured process theories, which propose that multiple processes affect behavior simultaneously, resulting in conflict or alignment among these processes (e.g., Epstein, 1994 ; Sloman, 1996 ; Strack and Deutsch, 2004 ). Whenever several decision processes are in conflict (i.e., deliver different responses), cognitive resources should be taxed, resulting in longer response times and higher error rates. These predictions were confirmed in a response-times study by Achtziger and Alós-Ferrer (2014) , which showed that more errors arise and responses are slower when Bayesian updating (i.e., normatively optimal behavior) is opposed to reinforcement learning of the form “win-stay, lose-shift.” We relied on a variant of the experimental paradigms employed in Achtziger and Alós-Ferrer (2014) , Achtziger et al. (2015) , and Charness and Levin (2005) but focused on the conflict with decision inertia, viewed as a further decision process. We measured error rates and response times to investigate the role of decision inertia in a belief-updating task. Specifically, we hypothesized that decision inertia is a further process potentially conflicting with optimal behavior and affecting decision outcomes and decision times. Accordingly, our main hypotheses were that more errors and slower choices would be made in cases of conflict between decision inertia and Bayesian updating.
To further explore decision inertia, we considered possible individual correlates of this decision process. We hypothesized that decision inertia would be associated with preference for consistency (PFC), which is a desire to be and look consistent within words, beliefs, attitudes, and deeds, as measured by the scale with the same name ( Cialdini et al., 1995 ). Cialdini (2008) argues that because of the tendency to be consistent, individuals fall into the habit of being automatically consistent with previous decisions. Once decision makers make up their minds about a given issue, consistency allows them to not think through that issue again, but leads them to fail to update their beliefs in the face of new information when confronting new but similar decision situations. Furthermore, Pitz (1969) observed that inertia in the revision of opinions is the result of a psychological commitment to initial judgments. Thus, we hypothesized that preference for consistency might be one of the possible mechanisms driving decision inertia, in which case an individual's behavioral tendency to rely on decision inertia should be positively associated with their preference for consistency.
Our last hypothesis concerns the kind of decisions leading to decision inertia. If this phenomenon arises from a tendency to be consistent with previous decisions, and hence economize decision costs, the effect should be stronger following autonomous decisions (free choices) than required ones (forced choices). The same prediction also arises from a different perspective. In general, human decision makers prefer choice options that they freely chose over options with equal value that they did not choose, as exemplified by the literature on choice-induced preference change (e.g., Lieberman et al., 2001 ; Sharot et al., 2009 , 2010 ; Alós-Ferrer et al., 2012 ). Relying on behavioral and genotype data, Cockburn et al. (2014) recently investigated the underlying mechanism of this preference. In a probabilistic learning task, their participants demonstrated a bias to repeat freely chosen decisions, which was limited to rewarded (as opposed to non-rewarded) decisions. Interindividual differences in the magnitude of this choice bias were predicted by differences in a gene that has been linked to reward learning and striatal plasticity. Cockburn et al. (2014) interpret these findings as evidence that free choices selectively amplify dopaminergic reinforcement learning signals, based on the workings of a feedback loop between the basal ganglia and the midbrain dopamine system. Given such an amplification of the value of freely chosen options, it again follows that decision inertia should be more pronounced after autonomous decisions compared to forced ones in our study. We make use of the fact that the standard implementation of the paradigms we rely on includes both forced and free choices to test this hypothesis.
2. Study 1
2.1. Methods
2.1.1. Experimental Design
Decision making under uncertainty or risk requires integrating different pieces of information on the possible outcomes in order to form and update probability judgments (beliefs). From a normative point of view, the correct combination of previous (prior) beliefs on the probability of an uncertain event and additional information is described by Bayes' rule ( Bayes and Price, 1763 ). The present study used a two-draw decision paradigm ( Charness and Levin, 2005 ; Achtziger and Alós-Ferrer, 2014 ; Achtziger et al., 2015 ), in which Bayesian updating is the rational strategy for deriving optimal decisions. There are two urns, the Left Urn and the Right Urn, each containing 6 balls, which can be black or white. The urns are presented on the computer screen, with masked colors for the balls. Participants are asked to choose which urn a ball should be extracted from (with replacement) by pressing one of two keys on the keyboard, and are paid for drawing balls of a predefined color, say black (the winning ball color is counterbalanced across participants). After observing the result of the first draw, participants are asked to choose an urn a second time, a ball is again randomly extracted, and they are paid again if the newly extracted ball is of the appropriate color. The payment per winning ball in our implementation was 18 Euro cents.
The urn composition (i.e., number of black and white balls) in Study 1 is given in Table 1 (left-hand side). The essence of the design is that the composition varied according to a “state of the world,” Up or Down, which was not revealed to participants. That is, participants knew the urn compositions in each state of the world and the fact that those were held constant for the whole experiment. They were also informed that the prior probability of each state was 1/2. Further, they knew that the state of the world was constant within the two-draw decision, but was randomized according to the prior for each new round. This means that the first draw is uninformed, but by observing the first ball's color the decision maker can draw conclusions about the most likely state of the world. Thus, for a second draw, an optimizer should choose the urn with the highest expected payoff, given the posterior probability of the state of the world updated through Bayes' rule. Given the urn compositions in Study 1, straightforward computations show that an optimizer should stay with the same urn as in the first draw after a win and switch after a loss. For example, if a black ball is extracted from the Left Urn, the updated probability of being in the state “Up” is (1/2)(2/6)∕((1/2)(2/6) + (1/2)(4/6)) = 1/3, hence choosing the Left Urn again delivers an expected payoff of (1/3)(2/6) + (2/3)(4/6) = 5/9, while switching to the Right Urn delivers a smaller expected payoff of (1/3)(4/6) + (2/3)(2/6) = 4/9. Note that optimizing behavior given this particular urn composition is fully aligned with that prescribed by an intuitive reinforcement rule (win-stay, lose-shift); we will return to this point in Study 2. Decision inertia, on the other hand, prescribes to always stay with the same urn as in the first draw, independently of whether that decision resulted in a win or a loss. Hence, Bayesian updating conflicts with decision inertia after drawing a losing ball in the first draw.

Table 1. Urn composition in Studies 1 and 2 .
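For readers who wish to verify the updating step, the computations above can be reproduced with exact rational arithmetic. The following Python sketch is illustrative only; since Table 1 is not reproduced here, the urn composition is inferred from the probabilities stated in the text (Left Urn: 2 winning balls in state Up, 4 in state Down; Right Urn mirrored).

```python
from fractions import Fraction

# Winning-ball counts out of 6 per urn and state, inferred from the
# worked example in the text (an assumption, since Table 1 is not shown here).
P_WIN = {"Left": {"Up": Fraction(2, 6), "Down": Fraction(4, 6)},
         "Right": {"Up": Fraction(4, 6), "Down": Fraction(2, 6)}}
PRIOR_UP = Fraction(1, 2)  # prior probability of state Up

def posterior_up(first_urn, won):
    """P(Up | outcome of the first draw), by Bayes' rule."""
    like_up = P_WIN[first_urn]["Up"] if won else 1 - P_WIN[first_urn]["Up"]
    like_down = P_WIN[first_urn]["Down"] if won else 1 - P_WIN[first_urn]["Down"]
    return PRIOR_UP * like_up / (PRIOR_UP * like_up + (1 - PRIOR_UP) * like_down)

def expected_payoff(second_urn, p_up):
    """Probability of a winning second draw from second_urn, given P(Up)."""
    return p_up * P_WIN[second_urn]["Up"] + (1 - p_up) * P_WIN[second_urn]["Down"]

# Winning first draw from the Left Urn: posterior P(Up) = 1/3,
# staying yields 5/9 and switching yields 4/9, as computed in the text.
p = posterior_up("Left", won=True)
print(p, expected_payoff("Left", p), expected_payoff("Right", p))
```

The same two functions confirm the lose-shift prescription: after a losing first draw from the Left Urn the posterior of Up rises to 2/3, and switching to the Right Urn then yields the higher expected payoff.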
Participants repeated the two-draw decision 60 times. Following Charness and Levin (2005) and Achtziger and Alós-Ferrer (2014) , we included both forced first draws (where the choice is dictated to the participant) and free first draws. This also allows us to explore the effect of decision inertia arising from previous autonomous choices as opposed to required choices. To avoid confounding forced choices and learning effects, participants made forced draws and free draws alternately.
2.1.2. Participants
Participants were recruited using ORSEE ( Greiner, 2004 ), a standard online recruitment system for economic experiments which allows for random recruitment from a predefined subject pool. Participants were students from the University of Cologne. Forty-five participants (29 female; age range 18–32, mean age 23.51 years) took part in exchange for performance-based payment plus a show-up fee of 2.50 Euros. Three further participants had to be excluded from data analysis due to technical problems (missing data).
2.1.3. Procedure
The experiment was conducted at the Cologne Laboratory for Economic Research (CLER) using z-Tree ( Fischbacher, 2007 ). Experimental procedures were in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments, and also standard practices in experimental economics (e.g., no-deception rule). In agreement with the ethics and safety guidelines at the CLER, participants were all pre-registered in the laboratory through ORSEE and had given written informed consent regarding the laboratory's guidelines (no further informed consent is necessary for particular experiments). Potential participants were informed of their right to abstain from participation in the study or to withdraw consent to participate at any time without reprisal. Participants were randomly assigned to the two counterbalance conditions (winning ball color). Before the start of the experiment, participants read instructions and answered control questions to ensure they understood the experiment properly. Then the experimental task started, which lasted around 10 min. After the task, participants filled in questionnaires including the Preference for Consistency scale (brief 9-item version, continuously ranging from 0 to 10; Cialdini et al., 1995 ) and demographic questions. A session lasted about 1 h and average earnings were 13.85 Euros ( SD = 1.02).
2.2. Results
2.2.1. Error Rates
Mean error rates are depicted in Figure 1 . The mean error rate in case of conflict between inertia and Bayesian updating was 21.98% ( SD = 20.91%), vs. just 10.18% ( SD = 17.43%) in case of alignment. To test for differences in the distribution of individual-level error rates, here and elsewhere in the paper we rely on non-parametric, two-tailed Wilcoxon Signed-Rank tests (WSR). The difference is highly significant (median error rate 15.63% in case of conflict, 5.26% in case of alignment; N = 45, z = 3.79, p < 0.001). When we split the tests conditional on forced draws and free draws, the result holds both for forced draws (median error rate 14.29% in case of conflict (mean 21.31%, SD = 23.12%), 6.67% in case of alignment (mean 11.56%, SD = 17.62%); WSR test, z = 2.89, p = 0.004) and free draws (median error rate 17.65% in case of conflict (mean 23.68%, SD = 22.56%), 0% in case of alignment (mean 8.84%, SD = 18.03%); WSR test, z = 3.94, p < 0.001). Paired t -tests provide similar results, but since error rates are not normally distributed we favor WSR tests, which are ordinal in nature. We rely on standard WSR tests that adjust for zero differences, but results are highly similar when using WSR tests that ignore zero differences. Furthermore, to test the robustness of the WSR results, we additionally ran a two-way ANOVA (Factor 1: conflict with inertia vs. alignment with inertia; Factor 2: forced draw vs. free draw) on log-transformed error rates. Since several participants had error rates of 0% (which is commonly observed in the present paradigm), we used the log( x + 1) transformation following Bartlett (1947) to be able to deal with zero values. The ANOVA results were consistent with the results based on the WSR test, showing a significant main effect of conflict with vs. alignment with inertia, but no main effect of forced vs. free draw and no interaction.

Figure 1. Study 1 . Mean of individual error rates in case of alignment (light gray) and conflict (dark gray) between Bayesian updating and inertia. Error bars represent standard errors. *** p < 0.01.
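The Wilcoxon signed-rank statistic used throughout can be sketched in a few lines. This is a simplified textbook version in Python: it drops zero differences (rather than using the zero-adjusted variant reported above) and applies the normal approximation without tie or continuity corrections.

```python
import math

def wsr_z(x, y):
    """Wilcoxon signed-rank z statistic for paired samples x, y.

    Zero differences are dropped; the normal approximation is used
    without tie or continuity correction (a simplified sketch, not
    the zero-adjusted test reported in the text).
    """
    d = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(d)
    # rank the absolute differences, averaging ranks over ties
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)  # sum of positive ranks
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd
```

In practice one would use a statistics package such as scipy.stats.wilcoxon, which also returns the two-tailed p-value and offers a zero_method option for handling zero differences.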
2.2.2. Response Times
Second-draw response times were significantly longer in case of conflict with inertia (median 973 ms, mean 1119 ms, SD = 447 ms) than in case of alignment (median 903 ms, mean 1001 ms, SD = 319 ms). We tested the difference in distributions with a WSR test on individual average response times ( N = 45, z = 2.13, p = 0.033). However, the result only holds for free draws (conflict: median 933 ms, mean 1048 ms, SD = 478 ms; alignment: median 787 ms, mean 840 ms, SD = 289 ms; z = 3.54, p < 0.001), but not for forced draws (conflict: median 1038 ms, mean 1191 ms, SD = 478 ms; alignment: median 1069 ms, mean 1179 ms, SD = 440 ms; z = −0.06, p = 0.951). We further ran a two-way ANOVA (Factor 1: conflict with inertia vs. alignment with inertia; Factor 2: forced draw vs. free draw) on log-transformed response times (since the distribution of response times was skewed). Results were consistent with the WSR test, showing significant main effects of both factors and a significant interaction effect.
2.3. Discussion
The results of Study 1 support the idea that decision inertia corresponds to an automatic process conflicting with Bayesian updating, thereby affecting decision performance and decision times. In particular, more errors were made when Bayesian updating and decision inertia delivered different responses. Additionally, after free draws decisions were significantly slower when Bayesian updating and decision inertia were opposed compared to when they were aligned. However, this decision-times evidence of a decision conflict was not observed after forced draws, suggesting that the effect of decision inertia might be stronger following voluntary choices.
3. Study 2
Study 2 investigated decision inertia in a more complex setting, where more than two decision processes are in conflict. Previous studies ( Charness and Levin, 2005 ; Achtziger and Alós-Ferrer, 2014 ; Achtziger et al., 2015 ) have shown that in probability-updating paradigms, reinforcement processes (cued by winning or losing in the first draw) play a relevant role. Reinforcement roughly corresponds to the psychological Law of Effect: the propensity to adopt an action increases when it leads to a success and decreases when it leads to a failure ( Thorndike, 1911 ; Sutton and Barto, 1998 ). Charness and Levin (2005) introduced the “reinforcement heuristic” as a decision rule, defined as a simple “win-stay, lose-shift” behavioral principle which might give different prescriptions than Bayesian updating and thereby produce errors. In fact, Charness and Levin (2005) observed error rates above 50% when the heuristic conflicted with Bayes' rule, which demonstrates that reinforcement has a significant impact on individuals' decision making. By analyzing response times, Achtziger and Alós-Ferrer (2014) showed that reinforcement is a rather automatic process conflicting with the more controlled process of Bayesian updating. In Study 1, due to the distribution of balls in the two urns, Bayesian updating and reinforcement were always aligned, and hence our analysis could not be confounded by a conflict with reinforcement. In Study 2 we aimed to test if inertia still plays a role when reinforcement additionally conflicts with Bayesian updating. We used the same experimental paradigm as in Study 1, but with a different urn composition, resulting in two kinds of (endogenous) decision situations. In the first kind, Bayesian updating and reinforcement were aligned, allowing for a conceptual replication of Study 1. In the second kind, there was a conflict between Bayesian updating and reinforcement.
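As a point of reference, the “win-stay, lose-shift” reinforcement heuristic is simple enough to state as a one-line decision rule. The sketch below is a minimal illustration, not the implementation used in the cited studies.

```python
def reinforcement_heuristic(first_urn, won):
    """Win-stay, lose-shift: repeat the first choice after a win,
    switch to the other urn after a loss (Charness and Levin's
    reinforcement heuristic, stated as a decision rule)."""
    other = {"Left": "Right", "Right": "Left"}
    return first_urn if won else other[first_urn]
```

In the Study 1 urn composition this rule coincides with the Bayesian prescription; in Study 2 it conflicts with Bayesian updating after first draws from the Left Urn, which is what generates the high error rates reported below.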
3.1. Method
3.1.1. Experimental Design
The experimental task differs from Study 1 only (but crucially) in the urn composition, which is shown in Table 1 (right-hand side). Given this composition, choosing the Right Urn in the first draw reveals the state of the world and the decision for the second draw is straightforward, i.e., win-stay, lose-shift. That is, as in Study 1, both Bayesian updating and the reinforcement heuristic give the same prescription, but decision inertia conflicts with Bayesian updating after drawing a losing ball from this urn. Choosing the Left Urn in the first draw leads to a different situation. Given the posterior probability updated through Bayes' rule, Bayesian updating prescribes to stay after a loss and to switch after a win (win-shift, lose-stay), which is opposed to the prescriptions of reinforcement. Further, Bayesian updating conflicts with inertia after drawing a winning ball (but not after drawing a losing ball). Thus, after starting with Left there are situations where reinforcement and decision inertia are aligned and both conflict with Bayesian updating.
3.1.2. Participants and Procedure
Forty-four participants (25 female; age range: 19–31, mean 23.80) were recruited using the same enrollment method, experimental procedures, and payment rules as in Study 1. Average earnings were 14.29 Euros ( SD = 0.78). Four further participants had to be excluded from data analysis due to technical problems.
3.2. Results
3.2.1. Error Rates
Figures 2, 3 depict the means of individual-level error rates depending on the type of draws in Study 2. The results for situations with no conflict between Bayesian updating and reinforcement (the Right Urn situations; Figure 2 ) were analogous to those of Study 1. In this case, error rates are naturally very low, because reinforcement learning prescribes the correct answer (e.g., Achtziger and Alós-Ferrer, 2014 ). In situations where Bayesian updating conflicts with reinforcement (the Left Urn situations; Figure 3 ), error rates were considerably higher than under alignment with reinforcement. The observed error rates are highly similar to those found in previous studies using the same decision task ( Charness and Levin, 2005 ; Achtziger and Alós-Ferrer, 2014 ; Achtziger et al., 2015 ). The high error rates can be explained by the automaticity of the reinforcement process, which seems to be highly dominant for some participants. Interestingly, when looking at individual data, one observes only a few error rates in the 50% range, but rather one cluster of participants with error rates below 1/3, and another cluster with rates above 2/3. This points to interindividual heterogeneity (see Achtziger et al., 2015 ), but speaks against the possibility that participants responded randomly.

Figure 2. Study 2 . Mean of individual error rates in case of alignment (light gray) and conflict (dark gray) between Bayesian updating and inertia, for the situations where Bayesian updating is aligned with reinforcement (first draw from the Right Urn). Error bars represent standard errors. ** p < 0.05.

Figure 3. Study 2 . Mean of individual error rates in case of alignment (light gray) and conflict (dark gray) between Bayesian updating and inertia, for the situations where Bayesian updating conflicts with reinforcement (first draw from the Left Urn). Error bars represent standard errors. *** p < 0.01.
Turning back to our hypotheses regarding decision inertia, consider first decision situations where Bayesian updating and reinforcement are aligned. In this case, the mean error rate in case of conflict between decision inertia and Bayesian updating (further supported by reinforcement) was 5.37% ( SD = 11.67%), while in case of alignment between those two processes (hence alignment among all three processes) it was only 1.36% ( SD = 5.31%). Although all medians were at a 0% error rate, the difference in distributions was significant (WSR test, N = 44, z = 2.57, p = 0.010). This result holds both for forced draws (conflict: median 0%, mean 6.19%, SD = 15.17%; alignment: median 0%, mean 1.29%, SD = 5.13%; WSR test, N = 43, z = 2.21, p = 0.027) and free draws (conflict: median 0%, mean 4.71%, SD = 10.52%; alignment: median 0%, mean 1.78%, SD = 7.14%; WSR test, N = 43, z = 2.31, p = 0.021). Note that for free draws N is reduced due to a participant who avoided starting with the Right Urn when first draws were free, and hence provided no data for this particular comparison. We also excluded this participant from the corresponding analysis for forced draws to ensure that the subset of participants for the analysis of both draw types was the same. As in Study 1, we additionally ran a two-way ANOVA on the transformed error rates as a robustness check. The pattern of results was consistent with the WSR tests, showing a significant main effect of conflict with vs. alignment with inertia, but no main effect of forced vs. free draw and no interaction.
Consider now the situations where Bayesian updating conflicts with reinforcement (the Left Urn situations; Figure 3 ). In this case, error rates for the cases of conflict and alignment of Bayesian updating with decision inertia were similar (mean 49.92% ( SD = 36.15%) and 55.61% ( SD = 31.44%), respectively). The difference was not significant (median in case of conflict 48.81%, in case of alignment 62.02%; WSR test, N = 44, z = −0.86, p = 0.391). If we consider only free draws, as we expected, there are more errors in case of conflict between Bayesian updating and inertia (mean 74.19%, SD = 34.08%) than in case of alignment (mean 48.08%, SD = 34.23%). The difference in distributions is significant (median in case of conflict 90%, in case of alignment 50%; WSR test, N = 25, z = 2.73, p = 0.006). The extraordinarily high error rates in case of conflict are revealing but intuitive, for in this case the correct normative response is opposed to positive reinforcement, that is, inertia actually prescribes to merely repeat a successful choice. Note that in this case the test needs to exclude the participants who avoided starting with Left when first draws were free, and hence provided no data for this particular comparison (indeed, this possibility is the original reason for including forced draws in the design). Again, we also excluded these participants from the corresponding analysis for forced draws. If we consider only forced draws, however, the difference of error rates between the case of conflict with inertia and the case of alignment with inertia is not significant (medians 66.67% in case of conflict (mean 63.26%, SD = 33.13%), 66.67% in case of alignment (mean 60.16%, SD = 27.61%); WSR test, N = 25, z = 0.59, p = 0.554). An additional ANOVA on the transformed error rates yielded results consistent with the WSR tests, showing a significant interaction effect, but no main effects of either factor.
3.2.2. Response Times
In situations where Bayesian updating is aligned with reinforcement (the Right Urn situations), as expected, responses were slower in case of conflict between Bayesian updating and inertia (median 878 ms, mean 1025 ms, SD = 503 ms) than in case of alignment (median 666 ms, mean 905 ms, SD = 698 ms). The WSR test was significant ( N = 44, z = 3.76, p < 0.001). This result holds for both forced draws (conflict: median 943 ms, mean 1136 ms, SD = 530 ms; alignment: median 744 ms, mean 961 ms, SD = 702 ms; WSR test, N = 43, z = 2.98, p = 0.003) and free draws (conflict: median 796 ms, mean 1031 ms, SD = 927 ms; alignment: median 623 ms, mean 896 ms, SD = 1141 ms; WSR test, N = 43, z = 3.48, p < 0.001). An additional ANOVA on log-transformed data yielded results consistent with the WSR tests, showing a significant main effect of conflict with vs. alignment with inertia and a significant main effect of forced vs. free draw, but no significant effect of interaction.
In situations where Bayesian updating conflicts with reinforcement (the Left Urn situations), there is no significant difference between the response times in case of conflict between Bayesian updating and inertia (median 1579 ms, mean 1919 ms, SD = 1237 ms) and the response times in case of alignment (median 1595 ms, mean 2123 ms, SD = 1449 ms; WSR test, N = 44, z = −0.70, p = 0.484). The same result holds when the test is made conditional on free draws (conflict: median 926 ms, mean 1543 ms, SD = 1295 ms; alignment: median 1019 ms, mean 1376 ms, SD = 770 ms; WSR test, N = 25, z = 0.58, p = 0.563). In forced draws, the results showed that the response times in case of conflict with inertia (median 1671 ms, mean 1971 ms, SD = 1438 ms) were faster than in case of alignment with inertia (median 2182 ms, mean 2427 ms, SD = 1347 ms; WSR test, N = 25, z = −2.19, p = 0.028). An additional ANOVA on the log-transformed response times yielded results consistent with the WSR tests, showing a significant main effect of forced vs. free draw, but no main effect of conflict vs. alignment with inertia and no interaction.
3.3. Discussion
In decision situations without additional conflict due to reinforcement, which are comparable to Study 1, the results show that more errors and slower responses are made in case of conflict between Bayesian updating and decision inertia in both forced and free draws, confirming that decision inertia exists for both required and autonomous choices. This replicates the results of Study 1. When decisions are made in the presence of a conflict between Bayesian updating and reinforcement, our results suggest that decision inertia is only present for the case of voluntary (autonomous) previous choices, and even in that case evidence on response times is inconclusive. Our interpretation is that decision inertia is a subtle process, which might be partially washed out when reinforcement conflicts with Bayesian updating.
4. Decision Inertia and Preference for Consistency
We now investigate the proposed relationship between inertia and preference for consistency (PFC). At the same time, based on the insights from Studies 1 and 2, we examine more closely whether the effect of decision inertia varies according to the type of decisions (forced vs. free). We measured PFC through the corresponding scale after the decision-making part of the experiment was completed. The average PFC score in our data was 4.28 ( SD = 1.86). Internal consistency as measured by Cronbach's alpha was 0.83. To uncover the associations between decision inertia and PFC, and between decision inertia and decision autonomy, we ran random-effects probit regressions on second-draw errors for the data from both studies (see Table 2 ). These regressions allow us to control for a variety of other variables like round number, counterbalancing, and conflict with reinforcement (in Study 2). The results show that in both studies, the interaction effect of conflict with inertia and the PFC score is significantly positive, indicating that in case of conflict with inertia, a higher PFC score is associated with an increased probability of errors, which is consistent with our assumption. In addition, in both studies, the interaction effect of conflict with inertia and forced draws is significantly negative, that is, the effect of decision inertia is stronger in free draws than in forced draws.

Table 2. Random-effects probit regressions on second-draw errors (1 = error) in Studies 1 and 2 .
5. General Discussion
This study shows that decision inertia plays a role in human decision making under risk and investigates the underlying processes. We find a significant tendency to repeat previous choices in decision making with monetary feedback. Specifically, we found evidence for the existence of decision inertia in Study 1 and in the decision situations of Study 2 without conflict with reinforcement. In contrast, in the Left-Urn situations of Study 2, where reinforcement conflicts with Bayesian updating, we only found an effect of decision inertia after autonomous choices. We conclude that decision inertia is subtle and easily overshadowed by stronger processes such as reinforcement learning.
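The normative benchmark throughout is Bayes' rule, which in a two-state (two-urn) task reduces to a one-line update. A minimal sketch, with illustrative probabilities rather than the experiments' actual parameters:

```python
# Minimal sketch of the Bayesian-updating benchmark for a two-state
# (two-urn) task. The probabilities are illustrative, not those used
# in the experiments.
def posterior(prior_a, p_obs_given_a, p_obs_given_b):
    """Posterior probability of state A after one observation (Bayes' rule)."""
    num = prior_a * p_obs_given_a
    return num / (num + (1 - prior_a) * p_obs_given_b)

# Flat prior; the observed outcome is twice as likely under state A.
print(posterior(0.5, 2/3, 1/3))  # → 0.666...
```

A Bayesian decision maker would choose the urn consistent with state A here regardless of which urn was chosen on the previous draw; decision inertia is the measured deviation from that benchmark.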
We hypothesized that decision inertia would be positively associated with PFC. The regression analysis confirms this hypothesis, indicating that the tendency to repeat past choices is a relevant part of the need to be consistent. This finding agrees with that of Pitz (1969), who showed that the inertia effect in opinion revision results from a psychological commitment to one's initial judgments. It is not consistent with the results of Zhang et al. (2014, Study 2b), who found no relation between repetition of earlier decisions and PFC scores. However, Zhang et al. (2014) targeted unethical decisions, and hence their setting is hard to compare to ours. The moral framing of the decisions in that work might have interacted with the hypothesized need for consistency. Our results for free vs. forced draws provide further evidence that decision inertia might (at least partly) be based on a mechanism of consistency-seeking. Both of our studies suggest that the effect of decision inertia might vary according to the type of first-draw decision. The results of the regression analyses confirm this idea, indicating that decision inertia is significantly stronger in autonomous choices than in required ones. Since one would assume that a psychological desire to be consistent with one's own decisions is stronger for self-selected than for assigned decisions, this result further supports an interpretation of decision inertia as a facet of consistency-seeking.
Our results are also in agreement with the reinforcement learning literature (e.g., Schönberg et al., 2007 ; Gershman et al., 2009 ; Wimmer et al., 2012 ) which has pointed out the importance of perseveration as an additional factor. A direct comparison is of course difficult, because in our paradigm success probabilities are explicitly given (and priors are reset after every round), while in the quoted works they are discovered through experience. However, the basic messages are similar. As in those previous reports, we find that the mere repetition of previous choices plays a role even when behavior is mostly determined by the interaction of reinforcement and normative goals. In that sense, we confirm (in a different setting) that incorporating perseveration into models of reinforcement learning can improve our understanding of how errors occur.
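One common way the quoted reinforcement-learning models capture perseveration is a "stickiness" bonus added to the previously chosen option's value before a softmax choice rule. A hypothetical sketch (the parameter values are invented for illustration):

```python
import math

def choice_probs(q_values, prev_choice, stickiness=0.5, beta=2.0):
    """Softmax choice probabilities with a perseveration ('stickiness')
    bonus added to the value of the previously chosen option."""
    vals = [q + (stickiness if i == prev_choice else 0.0)
            for i, q in enumerate(q_values)]
    exps = [math.exp(beta * v) for v in vals]
    total = sum(exps)
    return [e / total for e in exps]

# Option 1 has the higher learned value, but having just chosen option 0
# tilts choice back toward repeating it.
probs = choice_probs([0.4, 0.6], prev_choice=0)
print(probs)
```

With stickiness = 0 the model reduces to ordinary softmax and option 1 is preferred; the bonus reverses the ranking, which is exactly the error pattern that perseveration contributes to.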
In conclusion, we find clear evidence for the existence of decision inertia in incentivized decision making. Our study sheds light on the process underlying decision inertia, by showing that this behavioral tendency is positively associated with an individual's preference for consistency, and that the effect of decision inertia is stronger in voluntary choices than in required choices.
Author Contributions
All authors contributed equally to this work. The listing of authors is alphabetical.

Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
This research was financed by the Research Unit “Psychoeconomics,” funded by the German Research Foundation (DFG, FOR 1882). The studies were conducted at the CLER (Cologne Laboratory for Experimental Research). The CLER gratefully acknowledges financial support from the German Research Foundation (DFG).
Achtziger, A., and Alós-Ferrer, C. (2014). Fast or rational? A response-times study of Bayesian updating. Manage. Sci. 60, 923–938. doi: 10.1287/mnsc.2013.1793
Achtziger, A., Alós-Ferrer, C., Hügelschäfer, S., and Steinhauser, M. (2015). Higher incentives can impair performance: Neural evidence on reinforcement and rationality. Soc. Cogn. Affect. Neurosci. 10, 1477–1483. doi: 10.1093/scan/nsv036
Akaishi, R., Umeda, K., Nagase, A., and Sakai, K. (2014). Autonomous mechanism of internal choice estimate underlies decision inertia. Neuron 81, 195–206. doi: 10.1016/j.neuron.2013.10.018
Alós-Ferrer, C., Granić, Ð.-G., Shi, F., and Wagner, A. K. (2012). Choices and preferences: Evidence from implicit choices and response times. J. Exp. Soc. Psychol. 48, 1336–1342. doi: 10.1016/j.jesp.2012.07.004
Alós-Ferrer, C., and Strack, F. (2014). From dual processes to multiple selves: Implications for economic behavior. J. Econ. Psychol. 41, 1–11. doi: 10.1016/j.joep.2013.12.005
Ariely, D., and Norton, M. I. (2008). How actions create–not just reveal–preferences. Trends Cogn. Sci. 12, 13–16. doi: 10.1016/j.tics.2007.10.008
Bartlett, M. S. (1947). The use of transformations. Biometrics 3, 39–52. doi: 10.2307/3001536
Bayes, T., and Price, R. (1763). An essay towards solving a problem in the doctrine of chances. Philos. Trans. R. Soc. Lond. 53, 370–418. doi: 10.1098/rstl.1763.0053
Bourdieu, P. (1985). The social space and the genesis of groups. Theory Soc. 14, 723–744. doi: 10.1007/BF00174048
Charness, G., and Levin, D. (2005). When optimal choices feel wrong: a laboratory study of Bayesian updating, complexity, and affect. Am. Econ. Rev. 95, 1300–1309. doi: 10.1257/0002828054825583
Cialdini, R. B. (2008). Influence: Science and Practice (5th Edn) . Boston, MA: Pearson Educations, Inc.
Cialdini, R. B., Trost, M. R., and Newsom, J. T. (1995). Preference for consistency: the development of a valid measure and the discovery of surprising behavioral implications. J. Personal. Soc. Psychol. 69, 318–328. doi: 10.1037/0022-3514.69.2.318
Cockburn, J., Collins, A. G., and Frank, M. J. (2014). A reinforcement learning mechanism responsible for the valuation of free choice. Neuron 83, 551–557. doi: 10.1016/j.neuron.2014.06.035
Daw, N. D. (2012). “Model-based reinforcement learning as cognitive search: neurocomputational theories,” in Cognitive Search: Evolution, Algorithms and the Brain , eds P. M. Todd, T. T. Hills, and T. W. Robbins (Cambridge, MA: MIT Press), 195–208.
Epstein, S. (1994). Integration of the cognitive and the psychodynamic unconscious. Am. Psychol. 49, 709–724. doi: 10.1037/0003-066X.49.8.709
Erev, I., and Haruvy, E. (in press). “Learning the economics of small decisions,” in The Handbook of Experimental Economics, Volume 2 , eds J. H. Kagel, A. E. Roth (Princeton: Princeton University Press).
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev. Psychol. 59, 255–278. doi: 10.1146/annurev.psych.59.103006.093629
Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Exp. Econ. 10, 171–178. doi: 10.1007/s10683-006-9159-4
Gal, D. (2006). A psychological law of inertia and the illusion of loss aversion. Judgm. Decis. Making 1, 23–32.
Gershman, S. J., Pesaran, B., and Daw, N. D. (2009). Human reinforcement learning subdivides structured action spaces by learning effector-specific values. J. Neurosci. 29, 13524–13531. doi: 10.1523/JNEUROSCI.2469-09.2009
Greiner, B. (2004). An online recruiting system for economic experiments. Forschung und wissenschaftliches Rechnen, GWDG Bericht 63 (Goettingen: Gesellschaft für wissenschaftliche Datenverarbeitung), 79–93.
Hodgkinson, G. P. (1997). Cognitive inertia in a turbulent market: The case of UK residential estate agents. J. Manag. Stud. 34, 921–945. doi: 10.1111/1467-6486.00078
Huff, J. O., Huff, A. S., and Thomas, H. (1992). Strategic renewal and the interaction of cumulative stress and inertia. Strateg. Manag. J. 13, 55–75. doi: 10.1002/smj.4250131006
Lau, B., and Glimcher, P. W. (2005). Dynamic response-by-response models of matching behavior in rhesus monkeys. J. Exp. Anal. Behav. 84, 555–579. doi: 10.1901/jeab.2005.110-04
Lieberman, M. D., Ochsner, K. N., Gilbert, D. T., and Schacter, D. L. (2001). Do amnesics exhibit cognitive dissonance reduction? The role of explicit memory and attention in attitude change. Psychol. Sci. 12, 135–140. doi: 10.1111/1467-9280.00323
Nevo, I., and Erev, I. (2012). On surprise, change, and the effect of recent outcomes. Front. Psychol. 3:24. doi: 10.3389/fpsyg.2012.00024
Okonofua, E. C., Simpson, K. N., Jesri, A., Rehman, S. U., Durkalski, V. L., and Egan, B. M. (2006). Therapeutic inertia is an impediment to achieving the healthy people 2010 blood pressure control goals. Hypertension 47, 345–351. doi: 10.1161/01.HYP.0000200702.76436.4b
Phillips, L. S., Branch, W. T., Cook, C. B., Doyle, J. P., El-Kebbi, I. M., Gallina, D. L., et al. (2001). Clinical inertia. Ann. Inter. Med. 135, 825–834. doi: 10.7326/0003-4819-135-9-200111060-00012
Pitz, G. F. (1969). An inertia effect (resistance to change) in the revision of opinion. Can. J. Psychol. 23, 24–33. doi: 10.1037/h0082790
Pitz, G. F., and Geller, E. S. (1970). Revision of opinion and decision times in an information seeking task. J. Exp. Psychol. 83, 400–405. doi: 10.1037/h0028871
Pitz, G. F., and Reinhold, H. (1968). Payoff effects in sequential decision-making. J. Exp. Psychol. 77, 249–257. doi: 10.1037/h0025802
Reger, R. K., and Palmer, T. B. (1996). Managerial categorization of competitors: Using old maps to navigate new environments. Organ. Sci. 7, 22–39. doi: 10.1287/orsc.7.1.22
Ritov, I., and Baron, J. (1992). Status-quo and omission biases. J. Risk Uncertain. 5, 49–61. doi: 10.1007/BF00208786
Samuelson, W., and Zeckhauser, R. (1988). Status quo bias in decision making. J. Risk Uncertain. 1, 7–59. doi: 10.1007/BF00055564
Sanfey, A. G., and Chang, L. J. (2008). Multiple systems in decision making. Ann. N.Y. Acad. Sci. 1128, 53–62. doi: 10.1196/annals.1399.007
Schönberg, T., Daw, N. D., Joel, D., and O'Doherty, J. P. (2007). Reinforcement learning signals in the human striatum distinguish learners from nonlearners during reward-based decision making. J. Neurosci. 27, 12860–12867. doi: 10.1523/JNEUROSCI.2496-07.2007
Sharot, T., De Martino, B., and Dolan, R. J. (2009). How choice reveals and shapes expected hedonic outcome. J. Neurosci. 29, 3760–3765. doi: 10.1523/JNEUROSCI.4972-08.2009
Sharot, T., Velasquez, C. M., and Dolan, R. J. (2010). Do decisions shape preference? Evidence from blind choice. Psychol. Sci. 21, 1231–1235. doi: 10.1177/0956797610379235
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychol. Bull. 119: 3. doi: 10.1037/0033-2909.119.1.3
Strack, F., and Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personal. Soc. Psychol. Rev. 8, 220–247. doi: 10.1207/s15327957pspr0803_1
Suri, G., Sheppes, G., Schwartz, C., and Gross, J. J. (2013). Patient inertia and the status quo bias: when an inferior option is preferred. Psychol. Sci. 24, 1763–1769. doi: 10.1177/0956797613479976
Sutton, R. S., and Barto, A. G. (1998). Reinforcement Learning: An Introduction . Cambridge, MA: MIT Press.
Thorndike, E. L. (1911). Animal Intelligence: Experimental Studies . New York, NY: MacMillan. (see also Hafner Publishing Co., 1970).
Tripsas, M., and Gavetti, G. (2000). Capabilities, cognition, and inertia: evidence from digital imaging. Strateg. Manag. J. 21, 1147–1161. doi: 10.1002/1097-0266(200010/11)21:10/11<1147::AID-SMJ128>3.0.CO;2-R
Weber, E. U., and Johnson, E. J. (2009). Mindful judgment and decision making. Annu. Rev. Psychol. 60, 53–85. doi: 10.1146/annurev.psych.60.110707.163633
Wimmer, G. E., Daw, N. D., and Shohamy, D. (2012). Generalization of value in reinforcement learning by humans. Eur. J. Neurosci. 35, 1092–1104. doi: 10.1111/j.1460-9568.2012.08017.x
Zhang, S., Cornwell, J. F., and Higgins, E. T. (2014). Repeating the past: prevention focus motivates repetition, even for unethical decisions. Psychol. Sci. 25, 179–187. doi: 10.1177/0956797613502363
Keywords: inertia, decision making, Bayesian updating, multiple processes, perseveration, preference for consistency
Citation: Alós-Ferrer C, Hügelschäfer S and Li J (2016) Inertia and Decision Making. Front. Psychol . 7:169. doi: 10.3389/fpsyg.2016.00169
Received: 22 September 2015; Accepted: 29 January 2016; Published: 16 February 2016.
Copyright © 2016 Alós-Ferrer, Hügelschäfer and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Carlos Alós-Ferrer, [email protected]
The Basic Idea
If you took a physics class in high school, you may remember learning about inertia, an object’s tendency to resist change in motion. 1 If the object is resting, it tends to stay at rest. If the object is moving, it will stay at its pace unless interrupted by an external force. Only with external resistance will the state of the object change.
Humans also experience inertia. 2 We prefer to keep behaving as we already are; we stick with the default option unless we are specifically motivated to change it. Inertia also applies to our beliefs; we tend to resist changes in our ways of thinking. After all, relying on predetermined mental models appears to be an efficient method for managing behaviors and decisions. 3 However, there is danger in overreliance on these defaults.
Before exploring inertia, there is an important distinction to be made between it and belief perseverance. Belief perseverance, also known as conceptual conservatism, is the tendency to maintain a belief despite being confronted with explicitly contradictory information. 4 5 Belief perseverance relies on justifying invalidated information, and is thus the perseverance of the belief itself, while inertia is the perseverance of how one interprets information. 2
Silence is the language of inertia. – Margaret Heffernan, business management expert and author of Willful Blindness: Why We Ignore the Obvious at Our Peril
In the 1960s, social psychologist William J. McGuire noticed a resurgence in suggestions that people tend to maintain logical consistency between their cognitions and behaviors. 2 As a result, the idea of cognitive inertia was influenced by two existing psychological theories:
- Balance theory, a theory of attitude change by Fritz Heider. 6 This theory was based on the idea that there must be a balance between interpersonal relationships, such that all parties are harmonious in their thoughts, emotions, and social relationships. People are motivated to stay away from imbalanced structures, so newly formed attitudes will typically strive to reduce tension.
- Cognitive dissonance, a theory proposed by Leon Festinger. 7 This theory proposed that humans strive for internal psychological consistency. Cognitive dissonance results in feeling uncomfortable, motivating people to reduce said dissonance. This reduction can be done by rejecting, avoiding, or changing perceptions of contradictory information.
McGuire assumed that people hold a certain amount of cognitive inertia, such that we initially resist changing how we process information when presented with new and conflicting information. 2 To develop his work on cognitive inconsistencies and inertia, McGuire conducted a study with 120 high school and college students.
Participants were presented with a variety of topics and asked how probable they thought each of these topics were. 2 One week later, the participants were called back to read information related to the topics they had previously predicted. The participants were immediately asked again how probable they thought each of these topics were, and were further asked one week after they had been presented with the new information.
McGuire predicted that participants would be motivated to shift their probability ratings to be more consistent with the facts that they were presented, which were inconsistent with their initial probability ratings of the topics. 2 However, McGuire was surprised to find that probability ratings did not immediately change to be consistent with the information presented. Rather, the shift toward consistency between original ratings and the factual information became stronger as time passed, which McGuire called a “continued seepage of change.”
Considering the temporal effects, McGuire termed this phenomenon “cognitive inertia”: the lack of immediate change was the result of participants’ existing thought processes and mental models persisting. 2 This persistence interfered with participants’ abilities to properly consider the new information and alter their initial responses.
William J. McGuire
American social psychologist who studied philosophy and psychology after serving in World War II. 8 Considered to be the “father of social cognition,” McGuire is most known for his work on persuasion and social influence, although he also contributed to the beginnings of cognitive inertia. McGuire co-founded the Society for Experimental Social Psychology and was president of the Personality and Social Psychology division of the American Psychological Association.
Consequences
Since the study of inertia in the 1960s, it has been applied to fields including business management, 9 10 11 12 13 criminal activity, 14 health, 17 and decision making and problem solving, 15 16 18 to name a few. It has been popularized in books like Willful Blindness: Why We Ignore the Obvious at Our Peril, written in 2011 by business management expert Margaret Heffernan. 19 In the book, which the Financial Times named one of the most important business books of the decade, Heffernan explores psychological research related to ignorance and inertia. 20
Inertia is commonly referenced in the world of business management. 9 10 Research highlights how important it is for managers to pay attention to inertia in order to avoid missing opportunities or endangering their company's competitive advantage. 11 For example, Greyhound was stuck in viewing itself as a bus company, preventing it from capitalizing on its chance to be a dominant player in the world of parcel transport. As for company endangerment, General Mills continued to operate mills long after they no longer held strategic importance. Given inertia's prevalence in business strategy, research has shifted to helping businesses overcome it, such as by having managers consult with employees who can provide alternative perspectives. 12 13
Interesting work has been done on psychological inertia as it pertains to crime continuity, with past criminality often being the best predictor of future criminality. 14 Walters’ theory of inertia holds that crime continuity is due to six cognitive variables, all of which are slow to change and thus vulnerable to inertia:
Criminal thinking, including antisocial attitudes and irrational thought patterns;
Believing that engaging in criminal activity will have specific positive outcomes;
Attribution biases, such as the tendency to view the world as hostile and other people as malicious;
Low self-efficacy, resulting in low confidence that one will be able to avoid criminal activity in the future;
Focusing on short-term goals as opposed to long-term goals; and,
Certain values, including immediate gratification and the pursuit of self-indulgent pleasure.
Inertia has also been found to play a role in decision making, especially when it comes to risky decisions. 15 Research has shown that humans have a significant tendency to repeat previous choices in decisions with monetary feedback, due to our need to be consistent. Additionally, the effects of inertia on decision making are stronger in voluntary choices than in mandatory choices. Knowledge inertia has emerged as a distinct type of inertia, referring to people's tendency to solve problems with old, redundant knowledge without paying attention to new experiences. 16 The idea of knowledge inertia relates back to business management, as problem-solving strategies that acknowledge new information are important for maintaining a competitive edge. 13
Health is another vital field in which inertia is a topic of discussion. Emotional inertia, the tendency for one’s affective states to be resistant to change, is one of two types of psychological inflexibility that characterizes depression. 17 Emotional inertia is related to rumination - the other type of inflexibility that characterizes depression - which refers to repetitively focusing on the causes and consequences of depressive symptoms. Aside from its role in health diagnoses, inertia can also be used to explain reactions to health concerns. 18
The Spanish flu, for example, was a deadly pandemic. 18 Yet, there was a general lack of preparation or panic in response to it, despite extensive coverage of the flu's progress. Researchers believe this was due to inertia: people had a widespread understanding of the flu as a seasonal infection that typically did not kill or severely harm people. This preconceived view of the flu was powerful enough to override any messages about the dangers of the Spanish flu, blinding people to its threat and thus resulting in a lack of preparation for its spread.
Controversies
Some researchers have presented adjustments and alternative theories to cognitive inertia, which addresses how people maintain their ways of interpreting information and thinking about an issue. 21 These researchers hold that the cognitive emphasis should be replaced with a more holistic approach, accounting for the existing attitudes, emotions, and motivations that strengthen existing mental models.
In response, the theory of motivated reasoning has been presented as an alternative model to consider the phenomena associated with inertia. 21 This theory holds that people are cognitively and emotionally biased to justify an existing thought or behavior. Motivated reasoning focuses on people’s drives to view themselves in a positive light: it suggests that persistence in how people interpret incoming information is based on motivations to be correct, rather than the actual cognitive perspective itself. 22
Similar to the arguments for a more holistic approach to inertia, socio-cognitive inflexibility views inertia as more than just an inability to alter one’s way of interpreting information. 23 Compared to cognitive inertia, socio-cognitive inertia emphasizes the inability to adapt to environmental changes, including institutional changes. The emphasis on social influences is paramount in this discussion: when considering the persistence of the nuclear family, for example, factors such as media portrayals and gender wage differences must be considered. 24
Customer satisfaction and loyalty
Ensuring the commitment of existing customers is crucial for success in business. 25 In order to do so, companies will ask their customers to complete online satisfaction surveys, supported by the assumption that consumers are motivated to evaluate the products or services during the consumption phase. After all, customer satisfaction is linked to customer loyalty.
However, Anna Mattila was curious as to whether consumers consciously process these mundane consumption experiences, which would have implications for the utility of their satisfaction ratings. 25 According to the existing literature on social cognition, people do not always evaluate stimuli. Whether someone formulates a judgment online as they acquire information, or whether they pull judgments from their memory as needed, these judgments are influenced by their information processing goals.
The distinction between online and memory based judgments matters. Most satisfaction surveys are delivered remotely, yet consumers’ everyday judgments tend to be memory based. 25 Mattila found that unless satisfaction surveys were administered immediately after their purchase, consumer responses were often based on their existing opinions of a company, rather than the actual quality of their recent experience. Unless the product or service was significantly negative or positive, existing inertia was not suppressed.
Mattila’s findings suggest that satisfaction surveys can lack the necessary information for businesses to assess their services and products, especially when they hope to use such data to improve their competitive edge. 25 If consumers’ experiences cannot suppress their inertia, then the utility of satisfaction responses falls. Thus, businesses who hope to rely on satisfaction data must collect this information at the point of service delivery. Businesses could also consider repeatedly measuring customer satisfaction over time, to account for the effects of inertia.
Digital transformation
As digital technologies continue to change the way traditional companies interact in established markets, many digital transformation projects have failed because of companies’ inability to adapt. 26 This inertia, in the form of socio-cognitive inertia, is an important factor inhibiting organizational transformation. In fact, organizational transformations have a success rate of 30%. As a result, researchers have explored ways that organizations can overcome their socio-cognitive inertia.
Decentralized organizations - which rely on teamwork at multiple levels of the business - can be successful when combined with high participation. 26 The inclusion of different types of workers, such as business and IT professionals, can help combat inertia from one level of the business. Participation is an important success factor in digital transformation, both for general success and for reducing employee resistance. As a result, companies are encouraged to include employees in the change process to overcome socio-cognitive inertia and to facilitate digital transformation.
Related TDL Content
Status quo bias
Inertia refers to humans’ inability to alter the ways they process information, sticking with default mental models. As a result, inertia has also been linked to the status quo bias, which describes our resistance to change. Both inertia and the status quo bias include a reliance on defaults, although inertia focuses on inhibiting change while the status quo bias focuses on general avoidance of change. If you’re interested in learning more, take a look at this piece!
- Inertia . (2021, May 27). Encyclopedia Britannica. https://www.britannica.com/science/inertia
- McGuire, W. J. (1960). Cognitive consistency and attitude change. Journal of Abnormal and Social Psychology, 60 (3), 345-353.
- Hodgkinson, G. P. (1997). Cognitive inertia in a turbulent market: The case of UK residential estate agents. Journal of Management Studies, 34 (6), 921-945.
- Guenther, C. L., & Alicke, M. D. (2008). Self-enhancement and belief perseverance. Journal of Experimental Social Psychology, 44 (3), 706-712.
- Nissani, M. (1994). Conceptual conservatism: An understated variable in human affairs? The Social Science Journal, 31 (3), 307-318.
- Heider, F. (1958). The Psychology of Interpersonal Relations.
- Festinger, L. (1962). Cognitive dissonance. Scientific American, 207 (4), 93-106.
- McGuire, W. J. (2013). An additional future for psychological science. Perspectives on Psychological Science, 8 (4), 414-423.
- Miller, D., & Chen, M. (1994). Sources and consequences of competitive inertia: A study of the U.S. airline industry. Administrative Science Quarterly, 39 (1), 1-23.
- Habersang, S., Küberling, J., Reihlen, M., & Seckler, C. (2019). A process perspective on organizational failure: A qualitative meta-analysis. Journal of Management Studies, 56 (1), 19-56.
- Narayanan, V. K., Zane, L. J., & Kemmerer, B. (2011). The cognitive perspective in strategy: An integrative review. Journal of Management, 37 (1), 305-351.
- Huang, H., Lai, M., Lin, L., & Chen, C. (2012). Overcoming organizational inertia to strengthen business model innovation. Journal of Organizational Change Management, 26 (6), 977-1002.
- Carrington, D. J., Combe, I. A., & Mumford, M. D. (2019). Cognitive shifts within leader and follower teams: Where consensus develops in mental models during an organizational crisis. The Leadership Quarterly, 30 (3), 335-350.
- Walter, G. D. (2016). Proactive and reactive criminal thinking, psychological inertia, and the crime continuity conundrum. Journal of Criminal Justice, 46 , 45-51.
- Alós-Ferrer, C., Hügelschäfer, S., & Li, J. (2016). Inertia and decision making. Frontiers in Psychology, 7, 169.
- Liao, S. (2002). Problem solving and knowledge inertia. Expert Systems with Applications, 22 (1), 21-31.
- Koval, P., Kuppens, P., Allen, N. B., & Sheeber, L. (2012). Getting stuck in depression: The roles of rumination and emotional inertia. Cognition and Emotion, 26 (8), 1412-1427.
- Dicke, T. (2015). Waiting for the flu: Cognitive inertia and the Spanish influenza pandemic of 1918-19. Journal of the History of Medicine and Allied Sciences, 70 (2), 195-217.
- About Margaret. (2021). Margaret Heffernan. https://www.mheffernan.com/biography.php
- Heffernan, M. (2011). Willful Blindness: Why We Ignore the Obvious at Our Peril. Simon & Schuster.
- Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108 (3), 480-498.
- Stanley, M. L., Henne, P., Yang, B. W., & De Brigard, F. (2020). Resistance to position change, motivated reasoning, and polarization. Political Behavior, 42 (1), 891-913.
- Stein, J. (1997). How institutions learn: A socio-cognitive perspective. Journal of Economic Issues, 31 (3), 729-740.
- Uhlmann, A. J. (2005). The dynamics of stasis: Historical inertia in the evolution of the Australian family. The Australian Journal of Anthropology, 16 (1), 31-46.
- Mattila, A. S. (2003). The impact of cognitive inertia on postconsumption evaluation processes. Journal of the Academy of Marketing Science, 31 (3), 287-299.
- Ertl, J., Soto Setzke, D., Böhm, M., & Krcmar, H. (2020). The role of dynamic capabilities in overcoming socio-cognitive inertia during digital transformation - A configurational perspective. In 15th International Conference on Wirtschaftsinformatik.

INVENTIVE PROBLEM SOLVING
PSYCHOLOGICAL INERTIA
Hi, everyone! Today I would like to share about Psychological Inertia (PI).
WHAT IS PSYCHOLOGICAL INERTIA
Psychological Inertia deals with resistance to change due to human programming.
In other words, it creates barriers to innovative brainstorming during problem solving.
Besides, PI results from what a person has learned before in the form of rules and regulations; this knowledge or experience creates boundaries and restrictions in a person's mind.
TECHNIQUE TO OVERCOME PSYCHOLOGICAL INERTIA
Use TRIZ procedures.
TRIZ procedures can be applied methodically to eliminate or reduce the effects that PI has on one's personal creativity.
EXAMPLE CASE:
CASE 1: Connect the dots with no more than 4 straight lines without lifting your hand from the paper.

Most people (myself included) cannot solve this problem, because we implicitly impose constraints that do not come with the problem!
Hence, to solve the problem, we need to think outside the box.

CASE 2: Connect the vases with no more than 2 straight lines.

It is impossible to connect all of the vases using only 2 lines as they stand. To solve this problem, we can MOVE THE VASES, because the problem does not say that the vases must stay in the same positions.

That is all for my sharing… Don't forget to like this post if you enjoyed it.

[1] Psychological inertia is similar to the status-quo bias, but there is an important distinction: psychological inertia involves inhibiting any action, whereas the status-quo bias involves avoiding any change that would be perceived as a loss.
In the 1960s, social psychologist William J. McGuire noticed a resurgence in suggestions that people tend to maintain logical consistency between their cognitions and behaviors. 2 As a result, the idea of cognitive inertia was influenced by two existing psychological theories:
The objective of any intervention on an individual or group of individuals would be to optimize psychological momentum, which has been characterized in past research by heightened perceptions of self-efficacy (e.g., Iso-Ahola & Dotson, 2014; Jones & Harwood, 2008) and goal progress (e.g., Markman & Guenther, 2007 ), as well as increased efficien...
To use nine windows, write the problem and the current system in the center of a 3 x 3 matrix, as shown in Figure 1 (Figure 1: Nine Windows Matrix). Next, explore the problem at each of the three levels: Super-system (or Macro system): the external environment and components that the problem or system interacts or may interact with
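The nine-windows layout described above can be sketched as a simple data structure. This is a minimal illustration, assuming the standard TRIZ levels (super-system, system, sub-system) and time frames (past, present, future); the `NineWindows` class and its method names are hypothetical, not part of the text.

```python
# Minimal sketch of a Nine Windows matrix, assuming the standard
# TRIZ levels and time frames. Names here are illustrative only.

LEVELS = ["super-system", "system", "sub-system"]
TIMES = ["past", "present", "future"]

class NineWindows:
    def __init__(self, problem):
        # 3 x 3 grid keyed by (level, time); the problem and the
        # current system go in the center cell (system, present).
        self.grid = {(lv, t): "" for lv in LEVELS for t in TIMES}
        self.grid[("system", "present")] = problem

    def set_cell(self, level, time, note):
        # Fill one of the eight surrounding windows with an observation.
        if level not in LEVELS or time not in TIMES:
            raise ValueError("unknown level or time")
        self.grid[(level, time)] = note

    def cell(self, level, time):
        return self.grid[(level, time)]

# Usage: start from the vase puzzle above and look one level up.
nw = NineWindows("vases cannot be connected with 2 straight lines")
nw.set_cell("super-system", "present", "the surface the vases stand on")
```

Filling the outer windows forces attention away from the center cell, which is exactly where psychological inertia keeps the mind anchored.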