Essay Sample on Stochastic Modeling

Overall, some research emphasizes the peril of too little structure (Eisenhardt and Okhuysen, 2002), other research highlights the peril of too much (Siggelkow, 2001), and still other research focuses on the balance (Miner et al., 2001). However, these and other studies of organizations collectively emphasize the critical role of moderate structure for varied performance outcomes, including innovation, survival, coordination, knowledge integration, and growth.

Network Sociology


Much research suggests that an organization’s network of relationships creates unique structural constraints and opportunities, which in turn profoundly affect organizational outcomes (Galaskiewicz, 1985; Powell, 1990; Fligstein, 2001). For example, one group of studies examines the impact of a moderately structured egocentric network on focal actor performance (e.g., Burt, 1992; Krackhardt, 1992). Uzzi’s (1997) ethnographic study of garment manufacturers is a good illustration.

He distinguishes between arm’s-length ties (focused on simple market transactions) and embedded ties (possessing high trust, communication, and joint problem solving), and finds that firms with a balanced mix of ties (i.e., both arm’s-length and embedded) are higher performing than those with only embedded or only arm’s-length ties, a result he attributes to temporal efficiencies and flexible access to unique sources of information (Uzzi, 1997: 57-60).

Other research also describes how networks with moderate connectivity generate better system-level outcomes than those that are disconnected or overly connected (e.g., Rowley, Behrens, and Krackhardt, 2000). For instance, Owen-Smith and Powell (2003) find that members of a loosely linked but relatively cohesive biotechnology network (characterized as “leaky” by the authors) enjoy the benefits of information spillovers that increase innovation within the network. In more recent work on Broadway musical networks, Uzzi and Spiro (2005) report similar results.

Specifically, the authors find an inverted U-shaped relationship between the degree of connectivity and cohesion of musical teams and the artistic and financial success of the industry. They conclude that networks with a moderate amount of connectivity and cohesion encourage artists to bring together novel ideas, yet also create enough stability to engender trust so that artists are willing to continue collaborating (Uzzi and Spiro, 2005).

Finally, recent theoretical research focuses on small world networks (i.e., moderately structured networks with some highly connected nodes, but most nodes having only a few clustered connections). Computational studies find that these networks are easily searchable and tolerant of high degrees of connectivity error because of built-in redundant connections (Albert, Jeong, and Barabasi, 2000; Watts, Dodds, and Newman, 2002). Taken together, research in network sociology illustrates that moderately structured networks produce superior outcomes for both organizations and networks.

Competitive Strategy

Studies of strategy in dynamic markets are also concerned with the effects of structure on performance.

Early work focuses on the importance of maintaining a balance between “deliberate strategy” that is top-down, coherent, and organized, and “emergent strategy” that arises spontaneously, is often bottom-up, and is less structured (Mintzberg and Waters, 1982; Mintzberg and McHugh, 1985; Burgelman, 1994). Similarly, research also examines the balance between the exploitation of old resources that are tightly structured within the firm and the exploration of new resources outside the firm in order to create new products and businesses (March, 1991; Karim and Mitchell, 2000; Katila and Ahuja, 2002).

Other research examines the importance of loosely coupled structures among business units to achieve successful diversification (Galunic and Eisenhardt, 1996, 2001; Williams and Mitchell, 2004; Gilbert, 2005). Of particular interest here is research that observes that simple rules within moderately structured capabilities are important for high performance (Burgelman, 1996; Gersick, 1994; Galunic and Eisenhardt, 2001; Rindova and Kotha, 2001).

For example, Brown and Eisenhardt (1997) find that, while computer firms use widely varying amounts of structure in their product development processes, firms with a moderate amount of structure in this process create high-quality, innovative products that are consistently on time and on target. These structures, which include partial rules and semi-structured priorities, roles, and responsibilities, enable firms to improvise new products in real time.

In contrast, firms with too much structure lack the flexibility to meet changing industry demands, while firms with too little structure become too disorganized and are consequently unable to create a consistent portfolio of products. Similarly, Burgelman (1996) describes a strategic process at Intel that used a simple rule in semiconductor manufacturing to avoid inertia and reorient the firm towards the execution of new opportunities. The rule directed mid-level managers to allocate scarce manufacturing capacity on the basis of product profit margin.
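To make the character of such a rule concrete, the following is a minimal Python sketch of a margin-based allocation rule; the product data and the greedy procedure are illustrative assumptions on our part, not Burgelman's (1996) account of Intel's actual process.

```python
# Minimal sketch of a margin-based allocation rule (hypothetical data,
# not Intel's actual procedure): rank products by profit margin and
# fill scarce capacity greedily.

def allocate_capacity(products, total_capacity):
    """Allocate units of capacity to products in descending margin order."""
    allocation = {}
    remaining = total_capacity
    for product in sorted(products, key=lambda p: p["margin"], reverse=True):
        units = min(product["demand"], remaining)
        allocation[product["name"]] = units
        remaining -= units
        if remaining == 0:
            break
    return allocation

# Example: as margins shift, the same rule reallocates capacity
# without any change to the rule itself.
products = [
    {"name": "DRAM", "margin": 0.10, "demand": 800},
    {"name": "microprocessor", "margin": 0.45, "demand": 600},
]
print(allocate_capacity(products, total_capacity=1000))
# {'microprocessor': 600, 'DRAM': 400}
```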

This rule was constraining enough to prioritize manufacturing capacity in a manner that fit Intel’s strategic goals, but simple enough to be flexibly applied across a wide variety of semiconductor products whose margins were likely to change over time in this volatile industry. For example, adherence to the rule allowed Intel to effectively shift from DRAMs to microprocessors without the explicit intervention of the firm’s senior executives (Burgelman, 1996). More recent studies also focus on simple rules and moderately structured capabilities in dynamic markets.

For example, Rindova and Kotha (2001) describe how Yahoo! managers used three partnership rules to help capture new opportunities in the emergent Internet industry: (1) the basic service or product must be free; (2) do a deal only if it enhances the customer experience; and (3) make no exclusive deals. These three modest rules provided coherence and direction for the alliance process, yet did not prescribe the types of alliances that needed to be formed. As a consequence, managers had the flexibility to pursue a wide variety of partnerships depending on the opportunity at hand. This allowed Yahoo! to morph over time from its original emphasis on search engines to more profitable interactive services such as chat rooms, auctions, and e-commerce.

Overall, these studies indicate that moderate structure is associated with high performance. Indeed, while prior literature suggests that superior performance ensues from tightly linked organizational processes that become complicated routines (Nelson and Winter, 1982), an emerging perspective on strategy in dynamic markets suggests that, as markets become more dynamic, success stems from loose capabilities that remain purposefully simple (Eisenhardt and Martin, 2000).

Complexity Theory

The tension between too much and too little structure also plays a prominent role in the complexity sciences. In particular, complexity theory seeks to understand how system-level adaptation to the environment emerges from the actions of its agents (Anderson, 1999; Eisenhardt and Bhatia, 2002). A distinguishing and counter-intuitive feature of complexity theory is the argument that systems composed of a few simple structures give rise to adaptive behavior (Prigogine and Stengers, 1984; Reynolds, 1987; Kauffman, 1989; Langton, 1992).

By condensing past learning about the environment into simple structures – often called ‘simple rules’ or ‘schemata’ – these systems are able to enjoy a balance of order and disorder that enables adaptation (Holland, 1992; Gell-Mann, 1994). Systems balancing order and disorder are adaptive because they are efficient, yet not too rigid, in their response to change (Langton, 1992; Kauffman, 1993; Simon, 1996). As Kauffman (1993) notes, systems exhibiting these behaviors (often called ‘complex adaptive systems’) “appear to be best able to coordinate complex, flexible behavior and best able to respond to changes in their environment” (p. 29).

Much of complexity theory focuses on explaining the features of complex adaptive systems – i.e., how systems composed of unique and yet partially connected agents respond to changes in their environment through the use of simple rules or schemata (Holland, 1992; Kauffman, 1993; Gell-Mann, 1994). Several features of complex adaptive systems are particularly useful in understanding the tension between too much and too little structure. One is the relevance of simple rules or schemata for effective adaptation.

An example is Reynolds’ (1987) computer simulation study. The author showed that systems composed of three very simple rules could produce the adaptive flocking behavior that is observed in bird migration. These rules were simple in two ways: (1) the number of rules was small – i.e., only three rules were necessary to produce the behavior – and (2) each rule guided only a few direct actions – e.g., one rule stated that if a bird is too close to another bird, it should move away by a fixed amount.
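To illustrate how little structure such rules impose, the following Python sketch paraphrases the three flocking rules (separation, alignment, cohesion); the update scheme and parameter values are our simplification for exposition, not Reynolds' (1987) original implementation.

```python
import numpy as np

# Simplified paraphrase of Reynolds' (1987) three flocking rules
# (separation, alignment, cohesion); parameter values are illustrative.

def boid_step(positions, velocities, too_close=1.0, step=0.05):
    """Update each bird's velocity from three simple local rules.

    positions, velocities: float arrays of shape (n, 2), with n >= 2.
    """
    new_velocities = velocities.copy()
    for i in range(len(positions)):
        others = np.delete(np.arange(len(positions)), i)
        offsets = positions[others] - positions[i]
        dists = np.linalg.norm(offsets, axis=1)

        separation = -offsets[dists < too_close].sum(axis=0)         # move away if too close
        alignment = velocities[others].mean(axis=0) - velocities[i]  # match neighbors' heading
        cohesion = positions[others].mean(axis=0) - positions[i]     # drift toward flock center

        new_velocities[i] += step * (separation + alignment + cohesion)
    return new_velocities
```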

In addition to rules, another feature of complex adaptive systems that is relevant to the tension between too much and too little structure is the edge of chaos, a state of balance between order and disorder (Langton, 1992; Kauffman, 1993; Carroll and Burton, 2000). In the language of nonlinear dynamics, systems on the edge of chaos are at an ‘unstable critical point’ (Strogatz, 2001), tending towards less adaptive states that are either too ordered or too chaotic.

The edge of chaos is also ‘dissipative,’ meaning that it requires energy and/or attention to maintain its position (Prigogine and Stengers, 1984). As a by-product of this dissipative state, systems on the edge of chaos produce frequent mistakes from which they must quickly recover in order to improve the likelihood of continued high performance (Brown and Eisenhardt, 1998). Another relevant feature is the sensitivity of complex adaptive systems to the environment.

For example, Langton’s (1992) artificial life simulations showed that dramatically different system-level behaviors can emerge from small perturbations of environmental conditions. For instance, small changes led to three unique states which might be called “highly ordered”, “edge of chaos”, and “highly chaotic” (Langton, 1992). While highly ordered and highly chaotic system states resulted in failure, systems that evolved to the “edge of chaos” state produced the type of complex and adaptive responses that allowed life to thrive.

In fact, the differences between these states were quite salient, implying that these systems appear to have an internal threshold function that produces abrupt ‘tipping point’ transitions between adaptive and maladaptive states.

Synthesis and Recapitulation

In each literature, we found evidence for an inverted U-shaped relationship between the amount of structure and performance. We were particularly struck by the commonality across the diverse structures examined in the organizational studies, network sociology, strategy, and complexity theory streams of research.

Whether the structures are roles, linkages, rules, or schemata, a common logic of adaptation appears to underlie all of these observations – i.e., that a moderate amount of structure leads to higher performance (Kauffman, 1993; Gell-Mann, 1994). This logic leads to two hypotheses that form the theoretical core of the emerging theory that we seek to explore and extend. The first hypothesis links the amount of structure with high performance.

Consistent with research in organization studies, network sociology, strategy, and complexity theory, we argue that over-structured systems constrain behavior by impeding improvisational responses to dynamic environments (Weick, 1976; Reynolds, 1987; Langton, 1992; Kauffman, 1993), whereas under-structured systems lack the coherence to respond efficiently to changes in these environments (Brown and Eisenhardt, 1998; Weick, 1998). This points to the existence of an optimal level of organizational structure – i.e., a region of moderate structural complexity that gives rise to the highest performance.

H1: Organizational performance has an inverted U-shaped relationship with the amount of structure.

The second hypothesis links environmental dynamism to the optimal amount of structure, and builds upon the inverted U-shaped relationship between performance and structure (H1). Consistent with extant research (Haveman, 1992; Pisano, 1994; Eisenhardt and Tabrizi, 1995), we argue that as environmental dynamism increases, the ability to respond quickly and flexibly to changing opportunities becomes more critical than efficiency for achieving effective performance.

Therefore, simpler structures become more useful because they are applicable to a broad array of opportunities (Brown and Eisenhardt, 1998; Rowley et al., 2000). Simpler structures also enable improvised actions that are likely to be better suited to the specific demands of capturing any particular opportunity (Miner et al., 2001; Rindova and Kotha, 2001). Conversely, as environmental dynamism decreases, more structure becomes more effective (Miller and Shamsie, 1996).

In these settings, managers can establish complicated and dense structures that are customized to the environment because change takes place infrequently and often incrementally (Tushman and Anderson, 1986; Miller and Shamsie, 1996; Siggelkow, 2001). Thus, while structure is less effective in more dynamic environments, where the flexibility to adjust to new opportunities is key, it is more effective in less dynamic markets, where efficiency is critical and possible.

H2: As environmental dynamism increases, the optimal amount of structure decreases.
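One illustrative way to formalize these two hypotheses (the functional form is our own sketch for exposition, not a specification drawn from the cited studies) is to write performance $P$ as a concave function of the amount of structure $s$ whose peak shifts with environmental dynamism $d$:

$$P(s, d) = \beta_1(d)\, s - \beta_2(d)\, s^2, \qquad \beta_1(d),\ \beta_2(d) > 0.$$

Performance is then maximized at $s^*(d) = \beta_1(d) / \big(2\beta_2(d)\big)$; H1 corresponds to the concavity of $P$ in $s$, and H2 to the condition $\partial s^*(d)/\partial d < 0$, i.e., the optimum shifting toward less structure as dynamism increases.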

While these hypotheses capture the core theoretical relationships regarding the tension between too much and too little structure that appear in the extant literature, they leave open key theoretical issues. For example, it is unclear whether the inverted U-shaped relationship is symmetric – that is, whether it is better to err on the side of too much or of too little structure. It is also unclear whether there is a wide range of optimal structures, suggesting that the optimal structure is easy to find and manage, or a very narrow range, suggesting a managerially challenging edge of chaos.

It is also unclear how, if at all, various attributes of market dynamism (e.g., velocity, complexity, ambiguity, unpredictability) might influence the tension between too much and too little structure. Finally, the role of mistakes has not been explored. Thus, our research has two objectives: to confirm the internal validity of the above hypotheses using the precision of simulation, and, more significantly, to conduct virtual experiments that probe the open theoretical issues, thereby elaborating and extending the theory.

METHODS

We conduct this research using simulation methods. Specifically, we model the environment as a flow of heterogeneous opportunities, and the organization as a collection of rules for executing those opportunities. We chose simulation because it is a particularly effective method for research such as ours, where the basic outline of the theory is understood but its underlying theoretical logic is limited (Davis et al., 2006). That is, simulation “facilitates the identification of structures common to different narratives” (Rudolph and Repenning, 2002: 4).

Given its computational precision, simulation is useful for internal validation of underlying theoretical logic as well as for elaboration and extension of theory through experimentation (March, 1991; Zott, 2003). The result is often a more complete theory with greater clarity in its assumptions, theoretical relationships, constructs, and underlying logic. As Sastry (1997: 237) notes, simulation helps “examine the completeness, consistency, and parsimony of the causal explanation laid out in an existing theoretical model.”

Simulation is also a particularly useful method when the focal phenomenon is non-linear (Carroll and Burton, 2000; Davis et al., 2006). While inductive case and statistical methods may indicate the presence of non-linearities, they are less precise than simulation in calibrating these non-linearities, particularly complex ones such as tipping points, cusps, and skews. Fundamental tensions, such as the one in our study, often exhibit these non-linear effects (Rudolph and Repenning, 2002). Simulation is also particularly useful for research such as ours in which empirical data are challenging to obtain (Davis et al., 2006).

For example, simulation enables us to study mistakes, which are often difficult to measure (e.g., people are reluctant to acknowledge them), and to unpack the distinct effects of environmental dimensions that may be difficult to disentangle in actual environments. Finally, simulation is particularly useful for understanding longitudinal and process phenomena such as ours because it can track these processes over longer time periods than is realistically possible with empirical data (Sastry, 1997; Zott, 2003).

While several simulation approaches (e.g., system dynamics, genetic algorithms) are available, we use stochastic process modeling. This approach enables researchers to custom-design the simulation because it makes no particular assumptions about the research question and is not constrained by an explicit problem structure (e.g., a cellular automata grid). Rather, it allows the researcher to piece together processes that closely mirror the focal theoretical logic, bring in multiple sources of stochasticity (e.g., arrival rates of opportunities), and characterize them with a variety of stochastic distributions (e.g., Poisson, gamma, normal) (Davis et al., 2006).
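As a minimal sketch of what such a stochastic process might look like in code (the distributions and parameter values below are illustrative choices, not the calibrated model), opportunities can be generated as a Poisson arrival stream, each carrying a heterogeneous value and a set of required actions:

```python
import numpy as np

# Minimal sketch of a stochastic opportunity flow (illustrative
# distributions and parameters): opportunities arrive as a Poisson
# process, each with a payoff and a set of actions needed to capture it.

rng = np.random.default_rng(0)

def generate_opportunities(num_periods, arrival_rate=2.0, num_action_types=10):
    """Return a list (one entry per period) of the opportunities that arrive."""
    flow = []
    for _ in range(num_periods):
        arrivals = rng.poisson(arrival_rate)   # how many opportunities appear this period
        period = []
        for _ in range(arrivals):
            period.append({
                "value": rng.gamma(shape=2.0, scale=1.0),              # heterogeneous payoff
                "required_actions": set(rng.choice(num_action_types,   # what it takes to capture it
                                                   size=rng.integers(1, 5),
                                                   replace=False)),
            })
        flow.append(period)
    return flow
```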

Stochastic modeling is a particularly effective choice for our research because our problem does not fit well with any of the more structured approaches. Stochastic modeling enables us to represent the phenomenon accurately rather than force-fitting it into an ill-suited framework. Further, since the rough outline of our base phenomenon is established in the empirical literature, using stochastic modeling enhances the likelihood of a realistic model (Burton and Obel, 1995; Carroll and Harrison, 1998; Burton, 2003) and thus mitigates a key criticism of simulation.

Finally, stochastic modeling is also a particularly effective choice for our research because it permits the inclusion of multiple sources of stochasticity, offering greater latitude to experiment and develop new theoretical insights than more structured approaches (Davis et al., 2006). For example, we are able to include several sources of stochasticity that are theoretically important in this research (e.g., improvisation of behaviors, flow of opportunities) and to experiment with theoretically relevant environmental attributes (e.g., velocity, ambiguity).
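One way such attributes might be exposed as experimental knobs is sketched below; the parameter names and mappings are hypothetical illustrations rather than our actual operationalization. Here, velocity scales how fast opportunities arrive, while ambiguity adds noise to the value the firm perceives before acting.

```python
# Hypothetical parameterization of two environmental attributes:
# velocity maps to the opportunity arrival rate, and ambiguity maps to
# noise in the firm's perception of an opportunity's value.

def perceived_value(true_value, ambiguity, rng):
    """The firm observes a noisy signal of the opportunity's value."""
    return true_value + rng.normal(0.0, ambiguity)

def experiment_settings(velocity_levels, ambiguity_levels, num_periods=100):
    """Cross environmental levels to define a simple virtual experiment."""
    settings = []
    for velocity in velocity_levels:
        for ambiguity in ambiguity_levels:
            settings.append({
                "arrival_rate": velocity,   # faster environments produce more opportunities
                "noise_sd": ambiguity,      # more ambiguous environments are harder to read
                "num_periods": num_periods,
            })
    return settings
```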

Modeling the Environment and Organization

There are two major components of our simulation model: the environment and the organization. We model the environment as a flow of heterogeneous opportunities. This is consistent with the conceptualization of market dynamism in the entrepreneurship (Shane, 2000) and Austrian economics (Hayek, 1945; Kirzner, 1997) literatures, where such environments are focal. We model the organization as a collection of rules for executing these opportunities.

This conceptualization is consistent with the organization theory and strategy literatures, which indicate that rules are an important type of structure in dynamic environments (Brown and Eisenhardt, 1997) and can be used to capture key market opportunities (Rindova and Kotha, 2001; Bingham and Eisenhardt, 2005). While we could have operationalized structure in several ways (e.g., nodes, linkages, roles), we chose rules because of their importance in dynamic markets (Burgelman, 1994) and their relevance in organization theory (Cyert and March, 1963) and strategy (Nelson and Winter, 1982; Teece et al., 1997), our two primary areas of interest.

A brief outline of the model follows. The organization has a set of rules for capturing opportunities in its surrounding environment. In each time period, the organization takes some rule-based actions to capture a given opportunity (e.g., selection of opportunities, execution of opportunities). But typically the organization does not have rules that cover all aspects of an opportunity, and so it also takes some flexibly improvised actions. Thus, the firm takes both rule-based and improvised actions to capture opportunities.

When these actions match the opportunity, the opportunity is captured and firm performance increases by the value of the opportunity. A key point is that the firm’s actions (both rule-based and improvised) require managerial attention, which is bounded (March and Simon, 1958). Further, since improvised action involves real-time sensemaking (Weick, 1993) and the thoughtful convergence of design and action (Miner et al., 2001), improvised action requires more attention than rule-based action. Therefore, the organization has a limited number of actions that it can take in any one time period.

When the firm’s attention budget is used up, the firm cannot take actions until its attention is replenished in the next time period. As in all research, we make several assumptions. First, in order to focus on the effects of the amount of structure on performance, we assume that all rules are appropriate for at least some opportunities. In addition, since recent empirical research indicates that simple heuristics such as ours are learned quickly and stabilize rapidly (Bingham and Eisenhardt, 2005), we assume that the rules have already been learned, and that adaptation to new environments occurs through improvised action.

Finally, we assume that the effects of competitors are realized through the flow of opportunities, an assumption that mirrors the Austrian economics argument that market dynamism is endogenously created by the actions of competitors (Kirzner, 1997). Research on competitive action suggests that this is likely to be a valid assumption in dynamic environments (D’Aveni, 1994; Roberts, 1999; Hill and Rothaermel, 2003). Nonetheless, while reasonable, these assumptions could be explored in future research.
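Pulling these pieces together, the following Python sketch shows how one period of the process just described might be implemented, using the opportunity format sketched earlier; the attention costs, the improvisation success probability, and the data layout are simplifying assumptions of ours rather than the calibrated model.

```python
# Minimal sketch of one simulation period (names and cost parameters are
# illustrative). Rules are a set of action ids; rule-covered actions are
# cheap, while uncovered actions must be improvised at higher attention cost.

RULE_COST = 1          # attention consumed by a rule-based action (assumed)
IMPROVISE_COST = 2     # improvised actions require more attention (assumed)

def run_period(rules, opportunities, attention_budget, rng):
    """Attempt to capture each opportunity until attention runs out."""
    performance = 0.0
    for opp in opportunities:
        covered = opp["required_actions"] & rules       # rule-based actions
        uncovered = opp["required_actions"] - rules     # must be improvised
        cost = RULE_COST * len(covered) + IMPROVISE_COST * len(uncovered)
        if cost > attention_budget:
            continue                                    # not enough attention left this period
        attention_budget -= cost
        # Improvised actions succeed only probabilistically (0.5 is an
        # illustrative value); rule-based actions match by design.
        if all(rng.random() < 0.5 for _ in uncovered):
            performance += opp["value"]                 # opportunity captured
    return performance
```

In this sketch, an opportunity whose required actions are fully covered by rules is always captured, while improvised actions succeed only probabilistically and drain the attention budget faster, which is one simple way to reflect that improvisation is both more flexible and more costly than rule-based action.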