1 Introduction

Computer Supported Cooperative Work (CSCW) was coined as a broadly conceived, interdisciplinary research stream (Greif 1988). As Schmidt and Bannon (1992, p. 13) wrote in the inaugural issue of the CSCW journal, “[a]t the core of this conception of cooperative work is the notion of interdependence in work”, thus suggesting that this nascent field did not take as its starting point a presumption that computer systems were discrete or stand-alone.

In this paper we argue that, despite its early and explicit recognition of an open-ended agenda for cooperative work, CSCW has de facto—as evident from its principal academic outlets, such as the CSCW journal, the CSCW and ECSCW conferences—privileged more restricted, confined and specialised forms of cooperative work. Thus, two meanings of the field have emerged: CSCW-in-use, consisting of the dominant theoretical concepts, methodological approaches and empirical scope mirrored in CSCW’s principal outlets; and espoused-CSCW, referring to the more open agenda initially accompanying CSCW. Our interest is in revitalising the agenda of espoused-CSCW as a more salient part of CSCW-in-use.

The aim of this paper is thus to move beyond a critique of dominant perspectives on CSCW and to contribute by outlining an alternative perspective addressing what we call Information Infrastructures (IIs) (Hanseth et al. 1996). As a working definition, IIs are characterised by openness to the number and types of users (no fixed notion of ‘user’), interconnections among numerous modules/systems (and thus a multiplicity of purposes, agendas and strategies), dynamically evolving portfolios of (an ecosystem of) systems, and shaping by an installed base of existing systems and practices (thus restricting the scope of design, as traditionally conceived). IIs are also typically stretched across space and time: they are shaped and used across many different locales and endure over long periods (decades rather than years).

The aim of our paper is not merely to note that CSCW has drifted from its early founding conceptions. It is also to flag the deep technological, economic and institutional trends whereby collaborative technologies are increasingly taking on II qualities, which means that IIs are more central to the CSCW agenda than 20 years ago. Related to this, scholars have begun to question whether popular methodological and analytical frameworks have kept pace with these changes. The arguments developed in the paper have their roots in both CSCW and a number of fields allied to it—such as Information Systems Research, Science and Technology Studies, Organisation Studies and Social Informatics—which, in one way or another, have begun to problematize ‘localised’ explanations of technology. Williams and Pollock (2012), for instance, have criticised the preoccupation of information systems researchers with the ‘single site implementation study’. Karasti et al. (2010, p. 407) have described how there is a bias towards studying ‘short-term temporal aspects’ of workplace information technologies. Kallinikos (2004) has cautioned against the study of information artefacts predominantly (or only) at the place where the user encounters them.

What each of these scholars points to, albeit in different ways, is how, when we focus on one specific locale or time period, important influences from other levels and moments of technological design and evolution may be ignored. Focusing exclusively on implementation and use, for instance, means that a range of (equally) important actors and factors in the shaping of a technology are relegated to the background. An II perspective, by contrast, would contribute by supplementing a local view with what might be thought of as an ‘extended design’ perspective to capture how workplace technologies can be shaped across multiple contexts and over extended periods of time. For instance, below we will illustrate how, in the design of an Enterprise Resource Planning (ERP) module, a vendor brought together several geographically dispersed clients from different organisational contexts not simply to identify requirements but also to align and ‘smooth over’ differences. We also show how, within a large multi-national company, despite sustained attempts at integration over a period of 15 years and the introduction of MS SharePoint collaboration technology, information is still fragmented across a number of IIs.

Our paper contributes by identifying, illustrating and discussing important research themes for CSCW, foregrounded by our II perspective, and then goes on to spell out some implications that stem from it, notably by outlining a re-conceptualisation of the role and nature of ‘design’ as related to CSCW. The relevance of our paper is that the notion of design we will put forward is one suitable for discussion of modern IIs and is thus distinctly more distributed, multi-purpose and long-term than the one dominating CSCW to date.

The remainder of the paper is organised as follows. Section 2 outlines historically how cooperative work has been conceptualised within CSCW before discussing previous contributions in CSCW related to II. Sections 3 and 4 address two key themes in II: standardisation and embeddedness. Standardisation deals with how the design, implementation and customisation of a technology at one local site interacts with, and is constrained by, implementation of ‘the same’ technology elsewhere. Embeddedness addresses the way the implementation of an II often becomes entangled with apparently unrelated IIs. Both ideas are illustrated with relevant empirical work. Whilst these topics are of course empirically intertwined, for analytical purposes, we treat them separately. Section 5 revisits the debate on how CSCW might inform design, and section 6 concludes by arguing that CSCW should develop a more prospective outlook in order to adequately study new, emerging technologies.

2 Conceptualising cooperative work

There has been from its inception what some scholars have dubbed a ‘localist’ sensibility in CSCW (Pollock et al. 2007). This sentiment is manifest in the CSCW emphasis on workplace studies and especially single-site implementations (in a specific department or organisation) (see for instance Fitzpatrick 2003), the influence of ethnographically inspired research methods (Rooksby et al. 2009), a focus on the adoption of a given system rather than on how users collaborate using multiple systems (Berg 1999), and studies of relatively small numbers of users (tens or hundreds rather than thousands).

A telling illustration of CSCW’s preoccupation with the local is the level of interest in the notion of, and technologies supporting, ‘awareness’ (see for example Gutwin and Greenberg 2002). The Journal of Computer Supported Cooperative Work (JCSCW), for instance, has published 176 papers addressing awareness (in the title and/or abstract) out of an estimated total of 350 in its first 20 volumes. The notion of awareness concerns the micro-mechanics of collaboration and, as a result, necessarily privileges ethnographically inspired modes of empirical inquiry. There is accordingly a mutually reinforcing relationship in CSCW between its key theoretical constructs and its methodological bias. A localist sensibility has, for good reasons, been important to the historical research agenda of CSCW.

2.1 Design studies and the local sensibility

An important ambition of CSCW is to improve the design of computer-based systems by acquiring a deeper understanding of the collective and collaborative character of work processes and how they may be better supported by more appropriately designed systems. This ambition has benefited greatly from the growth of localist perspectives (inspired, for example, by the penetrating work of authors such as Lucy Suchman (1987)) and an associated enthusiasm for ethnographic studies, which have been very effective in producing rich pictures of particular design settings. However, in doing this kind of research, ethnographers have often opted for relatively restricted notions of the ‘local setting’ and simple research designs, mostly involving the study of single sites or closely related settings (Marcus 1995). The ‘localist turn’ within the social sciences has been criticised, most notably by Harris (1998, p. 295), who has argued that an overly confined notion of the ‘local’ or ‘situation’ is unhelpful:

[w]hether one speaks of ‘local’, ‘situated’, or ‘embedded’ knowledge, the implication is that the narrative is somehow confined to a small ‘space’—if not in the literal sense of a geographical metric, then at least in the sense of restricted social, cultural, and temporal metrics.

Furthermore, if one looks back to some of the key works of ethnomethodology, from which CSCW drew much of its initial inspiration for emphasising the importance of local context (see for instance Jirotka et al. 1992), one finds a similar sentiment:

The term local organization (or local production) enjoys currency in ethnomethodology as well as related areas in the social sciences and philosophy. Unfortunately, to speak of local organization or local production is often understood to imply a kind of nominalism or, worse, a kind of spatial particularism. In ethnomethodology, the adjective local has little to do with subjectivity, perspectival view points, particular interests, or small acts in restricted places (Lynch 1993, p. 125, emphasis in original)

The ethnomethodological point about ‘local organization’ was that social interactions were patterned not by a ‘homogeneous’ context but by a ‘heterogeneous’ one. Interactions were seen to be “bound to a local contexture of relevancies” rather than a “single orderly arrangement” (Lynch 1993, p. 125). ‘Local’ here was meant to point to ‘heterogeneity’ in contextual issues rather than to a single or specific site.

Interestingly, we find something like these same concerns expressed in Schmidt and Bannon’s (1992) early attempt to define an agenda for CSCW, where they explicitly argued that CSCW should expand its empirical basis. They noted specifically how the prevailing notion of a ‘group’ confined the relevance of CSCW:

It has—implicitly or explicitly—been the underlying assumption in most of the CSCW oriented research thus far that the cooperative work arrangement to be supported by a computer artifact is a small, stable, egalitarian, homogeneous, and harmonious ensemble of people—a ‘group’ (ibid., p. 15).

There are clearly a number of exceptions to the critique of the CSCW field offered by Schmidt and Bannon (we discuss these in the next section). Yet it could also be argued that some aspects of their critique still remain valid today. This is particularly the case for what we want to describe below as the twin focus on ‘standardisation’ and ‘embeddedness’.

2.2 Initial work on information infrastructures

Some years ago, from the field of Participatory Design, Greenbaum and Kyng (1991) put forward the idea of ‘tailoring’ to capture the quite practical kinds of rework necessary to get workplace information technologies to function within a particular organisational setting. In particular, they showed how complex organisational technologies often needed to be taken apart, broken down, adapted and reconfigured before they could become useful for an organisation. Greenbaum and Kyng (ibid.) argued that this evidenced how technological development was carried out within the later stages of implementation and use, and proposed the notion of ‘continuing design in use’ to capture this. The relevance of this insight—though not one subsequently developed within CSCW—was that it gave an early indication of how design could become stretched out in space and time.

Some years later, Karasti et al. (2010, p. 407) returned to and attempted to broaden this idea. They argued that the CSCW bias (their word) towards short-term temporal aspects of workplace technologies came at the expense of a longer-term view, which they saw as important when dealing with II-type developments. Building on Greenbaum and Kyng, they attempted to blur the distinction not simply between design, implementation and use but also with later stages such as maintenance and redesign. They coined the term ‘continuing design’, a “development orientation where the relation between short-term and long-term—traditionally seen as a tension—is addressed and accounted for from the point of view of infrastructure time by incorporating it as a foundational design consideration” (2010, p. 407). Such a re-orientation was necessary, they argued, because IIs work according to different timescales than traditional IT projects. Whilst traditional IT projects operate over periods of 3 to 5 years, what they called ‘project time’, IIs endure over multiple decades, or ‘infrastructure time’. Their work raises the question of just how CSCW-type analysis might go about finding ways to focus on extended temporal scales, as well as the other particularities raised by IIs.

This was also the central issue of a recent special issue of the Journal of the Association for Information Systems (JAIS) which flagged how IIs result from large-scale, protracted investments, and therefore tend to have long lifespans, but also how they are typically erected on the foundations of earlier systems development and implementation work (Edwards et al. 2009). This means infrastructures come into contact with other systems based around different purposes, standards or classification systems. Edwards et al. (2007) suggest it may be more helpful to apply a metaphor of ‘growing’ rather than designing or building infrastructures. The term attempts to capture, in their words, the “sense of an organic unfolding within an existing (and changing) environment” (2007, p. 369). They go on to suggest that within infrastructures there is a “recurring issue of adjustment in which infrastructures adapt to, reshape, or even internalize elements of their environment in the process of growth and entrenchment” (2007, p. 369). Edwards et al. (2007) also note how these are the ‘points’ at which systems may fail to become infrastructures:

They cannot successfully link with other systems, adapt to a changed environment, or reshape or internalize elements of the environment in order to grow and consolidate. A principal reason most systems fail is that these critical moments are difficult to anticipate or plan for, and rarely lend themselves to deliberate design (2007, p. 369).

Edwards et al. (2007) have described how difficulties in aligning the entrenched differences between local systems generate pressures of competition or accommodation between systems that may be resolved through the creation of ‘gateways’, which allow multiple divergent systems to interoperate. The tensions and discrepancies between local systems may in due course generate pressures leading to periodic adjustments and redevelopments to accommodate changing internal and external circumstances (Ribes and Finholt 2009).

Infrastructure development and evolution thus involves simultaneous work on many fronts. For example, infrastructure design needs to serve as a link forward to anticipated future users/uses. At the same time, infrastructure implementation involves building workable bridges between the necessarily generic features of the e-infrastructure and the particular locales of use. In this respect Ribes and Finholt (2009) note that those trying to initiate, promote and grow infrastructures need to integrate the ‘demands of the present’ with those imagined as important in ‘the future’—which they describe, after Braudel (1949), as ‘The Long Now’ (longue durée). Their article points to the way attempts to grow infrastructures are often highly unpredictable and prone to failure. The sought-after future-proof systems of today—which strive to cater for all purposes, including those not yet envisaged—often end up becoming tomorrow’s legacy systems.

Ribes and Finholt (2009) also develop a much broader focus than typical CSCW in that they see the work of growing IIs (or “cyberinfrastructures” for research) as altogether more multi-purpose:

The problems participants articulate span much broader scales than technology development: they speak of encountering difficulties in the spheres of science policy, funding, organizing work and maintaining technical systems (ibid., p. 376).

To capture the range of activities that II participants might be engaged in, they talk of three ‘scales of infrastructure’. There are those to do with ‘creating bridges between, for example, experimental and production systems or design intents and user requirements’ (ibid., p. 378) (what they describe as enacting technology); the organizational arrangements that make long-term projects possible (various forms of organising work); and linking projects to wider, longer-term goals beyond those of the project team (issues to do with institutionalizing technology).

A further JCSCW contribution explicitly employing an infrastructure perspective investigates data curation practices longitudinally within an e-Science data collection project (Karasti et al. 2006). Key to this analysis is the role of standards for terminology (metadata) and the relationship with the different local ‘dialects’ used by the involved communities of scientists. Karasti et al. conclude by underscoring the importance of understanding the “unavoidable tensions and conflicts” that occur when “balancing multiple timeframes or local–global options” (ibid., p. 352) but refrain from spelling out strong theoretical, methodological or practical implications from their study.

In a series of studies within the healthcare sector published in JCSCW (Aanestad 2003; Winthereik and Vikkelsø 2005; Hanseth and Ljungberg 2001; Ellingsen and Monteiro 2003, 2006) various authors explicitly draw on an II perspective to analyse how IIs involve a mutual shaping of both technology and work practice. With an II perspective, the adoption of technology standards is not portrayed as the forcing of an ‘iron grid’ on mutable work practices. Rather, this perspective suggests a co-constructed process involving the reciprocal shaping of both the standard and the work practice. For instance, Winthereik and Vikkelsø analyse tensions arising in attempts to standardise communication between hospitals and general practitioners, underscoring inherent differences in interests. They conclude by arguing that because standards are not an iron grid ‘imposed on’ situated practices “it may be possible to further the integration of healthcare organisations through standardisation” (ibid., p. 61).

Much of this work, in one way or another, calls into question the traditional privileging of design in CSCW (particularly the treatment of design as a discrete, prior episode in isolation from implementation). It also questions CSCW’s emphasis on the immediate circumstances surrounding workplace information technologies (the ‘here and now’ focus referred to above), and potentially broadens out CSCW’s core ambitions. As Ribes and Lee (2010, p. 232) suggest: “[r]ather than supporting teams or groups [II] practitioners speak in terms of communities, disciplines and domains”.

With this in mind, we argue for a revitalised agenda for CSCW and put forward the developing II perspective(s) as a constructive response to Schmidt and Bannon (1992). Their principal objective was to broaden the empirical phenomena that CSCW studies to include:

systems that support cooperative work arrangements that are characterized by a large and maybe indeterminate number of participants, incommensurate conceptualizations, incompatible strategies, conflicting goals and motives, etc. (ibid. pp. 17–18).

With such a widening of the empirical phenomena, there is an accompanying need to develop broader perspectives. Schmidt and Bannon (1992, p. 17, emphasis added) exemplified what an empirical broadening of CSCW might imply by discussing what was back then a growing and highly influential technology field—that of Computer Integrated Manufacturing (CIM). As they saw it, CIM had seemingly disappeared from the CSCW radar despite the fact that it adhered to many of the field’s core ideas:

The ambition of the efforts of the Computer Integrated Manufacturing (CIM) field is to link and fuse the diverse information processing activities of the various manufacturing functions such as design and process engineering, production planning and control, process planning and control, purchasing, sales, distribution, accounting, etc. into a unitary information system.... A CIM system embracing these information processing activities on a company-wide scale should be seen as a unified database system facilitating and supporting the horizontal and hierarchical, indirect and direct, distributed and collective cooperation of a heterogeneous ensemble of distributed decision makers throughout all functions of manufacturing. CIM is thus faced with issues that are crucial to CSCW…. However, despite the large amount of work on CIM, and its obvious pertinence to the CSCW field, this domain is almost totally absent in the work of the CSCW community. In our view, this is a loss to the field.

Since then CIM has developed into an industry and become a central topic for scholarly study in fields like Information Systems Research—albeit under its current names: Enterprise Resource Planning (ERP) systems or simply Enterprise Systems (ES) (for a review of the evolution of CIM into the modern-day terminology of ERP see Pollock and Williams 2009). Recalling Schmidt and Bannon’s (1992) words, we believe it is a loss to CSCW that workplace information technologies like these—integrated, interconnected, large-scale systems that we describe as IIs—have not attracted more attention from CSCW scholars. In the 20-year history of the CSCW field, for instance, we find few mentions of the terminology of ERP (based on a free-text search of online archives; but see Taylor and Virgili 2008). The lack of ERP studies is perhaps understandable. The ambition of much CSCW research is geared towards improving the collective and collaborative character of work processes through more appropriately designed systems, and ERP seems to contradict this goal at a number of levels (Pollock and Williams 2009). On the one hand, the technology hardly appears amenable to detailed study, let alone shaping: its design occurs at some distance in time and space from adoption and use, so influencing ERP appears outwith the capacities of CSCW research. On the other hand, the absence of ERP studies illustrates a more general and important issue that might explain this relative neglect: CSCW lacks the analytical tools to develop an II agenda.

In the next section we present and discuss cases of collaborative work (interdependence of work) stemming from integrated, interconnected technologies, exemplified by but not restricted to ERP/ES.

3 Standardisation

Enterprise systems illustrate how IIs are used (at least rhetorically) as vehicles for promoting economy-of-scale effects in (especially larger) private and public business organisations. Economies of scale are to be achieved through technologically mediated, standardised ‘best practices’ (Wagner and Newell 2004). The key challenge in implementing standardised packaged enterprise solutions has been seen in terms of a misalignment between the various standard organisational presumptions embedded in the generic software and the particular ‘non-standard’ structure and practices of the adopting organisation. The emphasis on ERP as a driver towards best practice may be in tension with another claim, namely that these processes are diverse and flexible enough to cater for the full range of possible organisational practices and can thus be adapted to any particular organisation. Localist accounts and practice-based research on standardised work practices, by contrast, tend to document how users improvise around imposed constraints. Scott and Wagner (2003), in their study of a US university, describe how the standard templates in the ERP package were ‘compromised’ through ‘skirmishes’ and user resistance, allowing the emergence of a much more ‘local information system’.

For all its merit, however, this work leaves unaccounted for the extent to which standardised work is actually achieved. As Pollock et al. (2007, p. 256) write, localist accounts emphasise how “technologies are ‘imported’ (‘domesticated’, ‘appropriated’ or ‘worked-around’) into user settings, while there is comparative lack of emphasis on the reverse process through which an artefact is ‘exported’ from the setting(s)”. In support of localist accounts we acknowledge the inherent and crucial abilities of users to tinker with technology but, unlike these accounts, we wish to supplement them with analysis of how (and to what extent) IIs mediate standardised work practices. As Leonardi and Barley (2008, p. 161) note, the malleability of the use of collaborative technology is by now well rehearsed and “more can be gained by asking why different [contexts] experience similar outcomes of the same technology”.

With an II perspective, collaborative technologies serving multiple, similar use communities will necessarily rely on forms of standardisation (standardisation is the only way these similarities in demand can be met). The question within II, then, becomes an empirical one of how standardisation plays out. In other words: what exactly is standardised (and what is not); when in the biography of the technology does standardisation occur; and for whom does standardisation apply? We illustrate some of these issues with examples from our own work. An essential insight with IIs is that standardised solutions are never identical but are made to be similar enough for given purposes or tasks.

3.1 Reflexive standardisation

The introduction of electronic patient record (EPR) systems in hospitals was intended to improve the quality of patient care by replacing the existing fragmented and often unavailable paper-based patient record with an electronic one that would make any information instantly available to anybody, anywhere and anytime. The Medakis project (1996–2003) aimed to establish Siemens’ EPR (named DocuLive) in the five largest hospitals in Norway. We focus here on the implementation at ‘Rikshospitalet’ (see Hanseth et al. 2006), where the DocuLive system was intended to serve several ambitions: it should include all clinical patient information, covering the needs of all users; it should be built as one single integrated system; and it should enable better collaboration and coordination of patient treatment and care through electronic information sharing and exchange. Finally, a more general aim, and an important one with regard to the arguments developed in this paper, was that the system should become the standard EPR solution for all Norwegian hospitals. The deadline set for the delivery of the final system was the end of 1999. The project started with the best intentions of involving users, acknowledging current work practices, and favouring a bottom-up development strategy. However, as we illustrate below, the challenge of standardising both technology and work practices was dramatically underestimated.

Shortly after it began, project members became aware that other EPR projects were also underway within Siemens in Sweden, the UK, Germany and India. It also became increasingly apparent that the Norwegian project was not at the top of Siemens’ priorities, since Norway represented the smallest market amongst the countries involved. In practice, this meant that Siemens prioritised the projects in countries with potentially more profitable markets. Resources were funnelled to more lucrative areas, increasing the risk of overrun in the Norwegian project. As a consequence, the Norwegian project members decided to make moves towards ‘internationalizing’ their project, first to a Scandinavian level and later to a European one. However, this decision weakened the consortium’s position with respect to Siemens, since now the requirements from all projects across countries would need to be merged and a new architecture designed (called IntEPR).

In 1999 Siemens acquired a large US company developing software solutions for health care. As a consequence, Siemens’ medical division headquarters moved from Europe to the US, and the project’s scope became global (meaning the IntEPR architecture was dropped in favour of a new system called GlobEPR). As the number of involved users grew, large-scale participatory development became unmanageable. Subsequently, after a few years, only a small number of user representatives from each hospital continued to participate actively in the development. Moreover, the need to continuously find common agreements between the hospitals turned the intended bottom-up approach into a top-down one.

On top of this, the efforts aimed at solving the fragmentation problem with a complete and smoothly integrated EPR system turned out to be more challenging than initially foreseen. Paradoxically, the volume of paper records increased and the patient record became even more fragmented, for a variety of reasons. First, new laws on medical documentation required detailed records from professional groups not previously obliged to maintain a record (such as nurses, physiotherapists and social workers). Second, for both practical and legal reasons the hospital had to keep updated versions of the complete record, and, as long as much information existed only on paper, the complete record had to be paper-based. Thus, each time a clinical note was written in the EPR, a paper copy was also printed and added to the paper record. Printout efficiency had not been a design principle for the EPR, so non-adjustable print layouts could turn one electronic form into two printed pages. Third, multiple printouts of preliminary documents (e.g. lab test results) were often stored in addition to final versions. The resulting growth in paper documents created a crisis at the paper record archive department. The hospital had moved into new facilities designed with reduced space for the archive (since it was supposed to handle only electronic records). By 2003, the archive was full and more than 300 shelf metres of records were lying on floors (this also increased the time needed to find records, and many requested records were simply not found).

When the implementation of the DocuLive system started, five local systems containing clinical patient information already existed. The plan was to replace these with DocuLive so that the EPR would be one single integrated information system. Despite this aim, the number of local systems kept growing, driven by the well-justified needs of the different medical specialties and departments.

The final system was originally planned to be delivered in 1999. Four years later, towards the end of 2003, the version of DocuLive in use included information types covering between 30% and 40% of a patient record. This meant that the paper record was still crucial, but unfortunately more fragmented and inaccessible than ever. The increased fragmentation was partly due to the large volume of paper caused by DocuLive printouts, but also to the high number of specialized systems containing clinical patient information. In 2004, Rikshospitalet decided to change strategy and approach the complexity in quite a different way. They realized that the idea of one complete EPR system had failed. Instead, they decided to ‘loosely couple’ the various systems containing clinical patient information underneath a ‘clinical portal’ giving each user group access to the relevant information in a coherent way.

How should the project trajectory of DocuLive at Rikshospitalet be analysed? Key issues were the complexities involved and their handling. The primary complexity was that of the work practices related to patient treatment and care. In trying to make one integrated system that could cover the needs of all hospitals in Norway, the project realised that the number of practices potentially to be handled by the system was unmanageable. Accordingly, the project sought to involve a supplier, and Siemens was chosen. However, Siemens’ involvement added a further layer of complexity. The medical division within Siemens was large, with a traditional base in medical imaging technologies. As imaging instruments have become digital, supplementary software systems have been built. As EPR development activities increased within Siemens, it became more and more important to align and integrate the EPR strategy and product(s) with other Siemens products and strategies. Within this world, Norway was marginalised, as the appetite for larger markets escalated in a self-feeding process. A side effect of this expansion of ambitions and scope was increased complexity: the larger the market Siemens sought to capture, the more diverse the user requirements, and accordingly, the more complex the system had to become in order to satisfy its expanding customer base. Because development costs were growing, an even larger market was needed (what we describe below as ‘generification strategies’) to ensure the necessary profits. The project thus went through a spiral of escalating complexity until it eventually collapsed.

3.2 Generification

There is no more vivid illustration of the ambition for standardisation than Enterprise Resource Planning (ERP) systems. The key aspect of ERP systems is that they are explicitly designed for a ‘market’ (a business sector) rather than a designated user organisation. Some of the largest ERP vendors have been phenomenally successful in this endeavour (Pollock and Williams 2009). The extension of ERP packages across organisations, sectors and countries is all the more intriguing when one considers that some CSCW proponents insist that information systems must be built around the unique exigencies of particular organisations (Hartswood et al. 2003). We have explored how, in their design and development decisions, suppliers of standard ERP packages were able to build viable ‘bridges’ to diverse organisational users through various kinds of generification work (Pollock et al. 2007).

3.2.1 Generifying the needs of different users in the design of an ERP module

Generification involves a complex set of interactions and alignment efforts between the developer and its wider user community. Our studies, carried out over more than a decade on one specific ERP package, revealed a number of linked strategies deployed by ERP suppliers to manage this process, including segmenting their market, enrolling selected user organisations as development sites and as members of ‘user groups’, and subsequently sorting, aligning and prioritising user requirements. For example, when a major ERP supplier moved into the Higher Education sector, we saw how it enlisted a number of ‘pilot sites’ from around the world to help it identify homologies of practice that would be implemented in its new, potentially global student management module. The supplier invited these pilot sites to regular week-long requirements gathering sessions at its European headquarters. There they would sit together in one large room and articulate in detail their requirements for the new system. This was often a laborious and frustrating process in which the sites would highlight differences in work practice from each other. In the excerpt that follows, a supplier employee is discussing the storing of student transcripts and whether universities need to store details on both passed and failed courses:

ERP Supplier: Does everyone want the ability to store two records?

America South Uni: We would maintain only one record…

ERP Supplier: Is there a need to go back into history? If transcript received and courses are missing, do you need to store this?

America North Uni:…no record is needed.

America South Uni: We need both to update current record and then keep a history of that…

Belgium Uni: In our case, things are completely different!

Later, through what we describe as ‘process alignment work’ and ‘smoothing strategies’, we saw how the pilot sites were encouraged and incentivised to shift from highlighting their differences in work practice to actively searching for similarities. This was not so much a search for identical ways of doing things as an attempt to establish limited spreads of practice that could be handled in similar ways by the necessarily generic software package. For instance, one supplier employee asks participants to describe their rules for progressing students from one year to another and to explain how a student’s grades contribute to his/her overall programme of study. A complicated conversation develops, with various people interjecting. The supplier struggles to bring the discussion back on topic by attempting to summarise and ‘name’ the particular process being described:

ERP Supplier: We’ve got one aspect now. Just want to get some common things. How [do] we name the baby? Let’s go to the grading issue. Want to specify if module will contribute to programme of study in any way as a credit or grade. Is there any rule how it contributes? Is it linked to students? What is it linked to that it gives credit?

Swiss Uni: Could be a rule or a decision given by someone?

South African Uni: The student can still do the exam and be graded, but it might be true that the grade or credit did or did not influence the student’s progression…

Canadian Uni: We wouldn’t use these rules: we take all courses into progression. We have rules based on courses students take.

ERP Supplier: It is the same at [America North Uni]. It is the US model. It is the difference between the European and the US model.

Faced with such diverging requirements, the establishment of generic features seems impossible. However, the supplier does not admit defeat but accepts the next best thing to a single generic process: ‘two’ generic templates (the US and the European model of doing things). In other words, the supplier was able to establish equivalences between disparate organisations around a manageable level of diversity which its package would cater for in its generic functionality.

3.2.2 Not all requests for generification are answered

We do not wish to suggest that every request for new functionality is included in the generic module. Certain requirements were by contrast rejected as ‘user specific’. A university from Belgium, for instance, reports in a series of emails to the other pilots how:

We have the feeling that it’s becoming a strategy to try to label issues as ‘university specific until proven differently’. Should it not be the other way around? Should [the supplier] not search for generic concepts behind the specific situations at the different pilot universities? (email from Belgium University to pilots)

These were design choices that reflected the operation of a complex ‘political economy’ as the supplier established boundaries around the market for its product. In practice, this meant that product enhancement proposals were assessed not only against the needs of the other pilots but also in relation to the standing of the particular customer, its representativeness and its importance to the supplier. Importance in this respect was related to whether the market the user organisation inhabited was large or growing and/or seen as strategic for future development. There was very much a ‘hierarchy of users’. At one end of the spectrum were those who could command the attention of the vendor and exert influence over product development strategies. At the other end, for instance, were the ‘transactional users’: user organisations which the supplier treated on a more strictly commodified basis—offering to install additional functionality only if they paid for it. Our study revealed the array of techniques that suppliers had developed to manage this process—to align and organise their relationships with their user markets and achieve effective closure around product features.

Clearly the vendor strategies described above are far more sophisticated than the dismissive portrayals we tend to read of suppliers simply neglecting specific user requirements. Nor do we think the strategies adopted by this supplier are unique. There are obvious parallels with the ‘Rikshospitalet’ case above. In addition, in their JCSCW article, Johannessen and Ellingsen (2009) note the challenges of supplying health information systems that were designed initially for one setting but then transferred to other contexts and subsequently to a larger market. The generification strategies adopted by the supplier in their study bear striking resemblances to those articulated over time by ERP suppliers:

Often vendors want to reuse much of their software code in order to optimize payback and to manage different implementations at different sites in a relatively similar way. It is therefore imperative for vendors to carefully balance the boundary between the particular and the general functionality while at the same time trying to expand…the general part of it. A basic strategy is to try to align the user communities in order to obtain common requirements… (ibid. pp. 627-8).

3.3 Standardised nursing reports/codes

We now turn our attention to more subtle forms of standardisation, which can be found in nursing practices. The ideology and practices of nursing have traditionally rested firmly on providing personalised (i.e. unique and local) care to every patient. Reporting is essential to nurses’ work (e.g. during hand-over, where reports are crucial in achieving continuity of care across time and space during a patient trajectory). Historically, nurses’ documentation has been informal (free-text, in part oral). The increased formalisation of nurses’ documentation is, on the one hand, part of nurses’ on-going efforts to ‘professionalise’ nursing (i.e. adopting physicians’ practices) and, on the other hand, an effort to promote the quality and efficiency of care.

Ellingsen et al. (2007) describe how a nursing reporting module incorporating standardised classifications of nursing diagnoses and interventions (the so-called NANDA and NIC schemes, respectively) was embedded into a Norwegian vendor’s system implemented in a hospital in Northern Norway. Beyond numerous instances of ‘working around’ the imposed standards, a theme well rehearsed in the CSCW literature, Ellingsen et al. (ibid.) also demonstrate how the users modify the standards to make them work. One illustration was the users’ modification of the NANDA/NIC classifications to allow more fine-grained categories. Both NANDA and NIC are constructed as general-purpose classification schemes (intended to cover all types of (Western) hospitals). As the ward we studied is highly specialized, only a subset of the total NANDA and NIC codes is relevant (the codes used cluster around only a small proportion of the 167 and 514 codes available, respectively). To compensate for overly broad NANDA categories, the nurses started specialising given categories by filling in ‘- -’ in fields before adding their own free-text amendments. One nurse explains the need to add details to the NANDA diagnosis “Risk for violence against others”:

Look here, this is the diagnosis “Risk for violence against others,” but we have to add “verbal threats,” “threatening behaviour when we activate restrictions for him.” We have to add these things to understand the patient

Another illustration of modifying the given standards was the (re)introduction of redundancy to make the standardised reporting more robust. Certain instances of informational redundancy fill productive and practical roles in on-going work. Although some information was already contained in the plans, the daily report might repeat the content of the plan:

Sometimes things are registered twice, that is, what is in the report you may also find in the nursing plan. This has to do with experience. . . I know that the report is read aloud at the change of shift meeting while the nursing plan is not.

In response to an inability to decide uniquely how to classify an intervention, a common strategy is to duplicate the information by entering it in both possible places, slightly rephrased to ‘cover up’ the duplication.

Moreover, the system may induce surprising consequences, as fitting the system to some users’ situation simultaneously makes it unfitting for others. The standardisation of nursing plans unintentionally subverted the possibilities for interdisciplinary cooperation: benefits for the nurses simultaneously produced disadvantages for the psychologists and physicians. The ward relied on interdisciplinary work between the nurses on one hand and the physicians and psychologists on the other. The free-text narrative contained in the old reports had been the glue in this collaboration:

Several of the nurses sum up in their own words after we have had a treatment meeting [for a patient]…they write good and extensive notes, especially when something extraordinary has happened…Therefore, when I write my own report I often refer to the report written by the nurse (psychologist)

This illustrates an important phenomenon, regularly left out of localist accounts, whereby interactions with the technology at one site produce ‘ripple effects’ that influence interactions elsewhere and for others.

4 Embeddedness

A central source of materially mediated ‘interdependence of work’ stems from the interconnectivity of multiple, partly overlapping collections or portfolios of modules/systems. Again we quote from Schmidt and Bannon (1992, pp. 17–18), who pointed out the relevance for CSCW of ‘packaged’ information systems:

[w]e certainly want CSCW to address the aspects of computer support for cooperative work wherever they occur. In this sense, established research and development fields such as, for example, Computer Integrated Manufacturing (CIM), Office Information Systems (OIS), Computer-Aided Design (CAD), and Computer-Aided Software Engineering (CASE) are all legitimate and indeed necessary fields for CSCW as domains of inquiry… [O]therwise CSCW will tend to ignore or even dismiss the major challenges posed by the design of systems that support cooperative work arrangements that are characterized by a large and maybe indeterminate number of participants, incommensurate conceptualizations, incompatible strategies, conflicting goals and motives, etc.

The II theme captured by the concept of ‘embeddedness’ is the way one II always gets entangled with other IIs that were initially deemed outside its scope. Mirroring the early descriptions of II, Vaast and Walsham (2009, p. 560) point out how an II is always “embedded with other [information] infrastructures”. The important implication is that an effort (typically a project) introducing an II in an organisation gets entangled with other, ‘embedded’ IIs outside the scope of the first.

4.1 The hydro bridge infrastructure

The Hydro Bridge project is an illustration of the embeddedness of IIs (see Hanseth and Braa 2001). The project was ostensibly about defining a standard suite of desktop applications within a global multi-national oil corporation (Norsk Hydro). However, the project became entangled in a number of other IIs that had aims different from those of the Bridge project.

In 1992, poor integration and communication across and between the divisions and the corporate headquarters of Norsk Hydro was widely acknowledged as a major obstacle to the smooth operation of the company. Developing a corporate standard for desktop applications was seen as the solution, and the Bridge project was established to deal with this task. Given a choice between Microsoft and Lotus products, the Bridge project team decided, mainly due to costs, to go for the Lotus SmartSuite set of applications. Having decided on the content of the standard, the project still had further issues to take care of. Among these was the scope of the standard: who should use it, and in which functions or use areas? Initially, those advocating the Bridge standard intended it to cover everybody inside Hydro for the functions which Lotus SmartSuite products covered. However, to obtain the requisite acceptance for the decision, the Bridge project group had to be open to using Microsoft products in certain areas. These included areas where large applications had been developed in MS Excel, for instance for the interpretation of data from lab equipment and for currency transformations in some budgeting support systems. MS Word was also accepted as the preferred word processor in several joint projects with other oil companies where the partners required MS Word (or other Microsoft products) to be used as a shared platform.

4.1.1 Product development: opening Pandora’s box

The step following the formal approval of the standard was its implementation into a ‘product’. As the standard specified only a set of commercial products to be used, this might seem unnecessary. That was far from being the case. Products such as those involved here may be installed and configured in many different ways. To obtain the benefits in terms of lower costs of installation, maintenance and support, the products had to be installed coherently on all computers. Such a coherent installation is also crucial for establishing a transparent infrastructure where information may be smoothly exchanged between all users. To reach these objectives, a considerable development task had to be carried out. This included developing scripts that installed the applications in the same way ‘automatically’ on all computers. Developing these scripts turned out to be challenging, with many unforeseen problems.

To work as a shared II, this infrastructure itself required an extensive underlying and supporting infrastructure. Throughout the implementation phase, this included several underlying and highly heterogeneous or un-standardized layers which the project team then tried to include in the Bridge standard (operating system, PC specifications, networking operating systems, Local Area Networks, etc.). For each of these, defining a strict standard proved impossible. The project had to allow for different varieties of each ‘layer’ and adapt the rest of the standard accordingly. In parallel with the implementation of the Bridge infrastructure, communication generally became more important. This implied that the global IP-based network being built, Hydro InterLAN, was also included in Bridge.

4.1.2 Diffusion, adoption and use: meeting the local

The common view is that a standard is a single thing, the same for all. That is not how Bridge appeared as the adoption process unfolded. It was seen very differently by the different units, due to differences in existing computing environments, available resources in terms of money and competence, cultures concerning management styles as well as the use of technology, perceived need for improved infrastructure, and so on. The speed and style of adoption also depended on the distance from the main office of Hydro Data. For those already using Lotus products, adopting Bridge meant changing almost nothing, whilst others had to change considerably. Bridge soon came to encompass several different systems. This meant that some units implemented the whole package, whilst others implemented just a few components. The latter case was exemplified by the smaller company offices in Africa, which typically had just a few stand-alone PCs.

Differences in strategies among the different units had the consequence that Bridge was not implemented as one coherent universal package, but as many different ones, which needed to be integrated and linked together to make the overall infrastructure work. This became an increasingly salient and challenging issue as new versions were specified and adopted by some units while others wanted to stick to the version they were using for as long as possible.

4.1.3 Applications integration: including the environment

The users of the Bridge applications also used a series of other, ‘external’ applications. Some of these applications shared data with Bridge applications or were used together with Bridge applications in an integrated way. For these reasons, many users wanted their applications to be integrated into the Bridge standard. Some were tightly integrated with the original Bridge applications and included in the standard; typical examples included various Internet applications such as a web browser. Other applications were integrated with Bridge applications through mechanisms for exchanging data. Yet other applications became tightly coupled to Bridge in completely different ways. An important example here was the ERP implementation in the European Fertilizer Division. The basic Bridge infrastructure (PCs, network, OS, etc.) was also the infrastructure on top of which the ERP system was running. The ERP implementation was an extremely large one, and the IT manager in charge concluded that Hydro itself did not have the resources and competence to take responsibility for the required data processing and operations services. These functions were outsourced to a major global company offering such services.

The ERP transaction processing would run on computers physically located at a large processing centre in the UK. When the decision about outsourcing ERP processing was taken, IT management thought it would be an advantage if the same service provider also delivered the required network services connecting the client software on local PCs to the servers, so they decided to outsource that as well. Later on, they also concluded it would be beneficial to have just one provider responsible for the whole chain, from the servers running the ERP databases through the network to the hardware equipment and software applications used locally. Accordingly, a contract was signed covering all three areas. At this time Bridge had been extended to include Hydro’s global network. This contract meant that the design and operation of the Bridge network was handed over to the service provider, as was the responsibility for installation and support of all elements of Bridge locally (PC operating systems, desktop applications, the Notes infrastructure and applications, Internet software and access, etc.).

The outsourcing was a mixed blessing. The network and processing services worked well, but site management (i.e. local support) was highly problematic. The major problems related to the fact that the global service provider had organized its business into independent national subsidiaries and was not able to carry out the required coordination across national borders. In addition, some problems related to the fact that the site management contract specified that users should call the help desk in the UK when they needed support. The barriers to doing this were high for the large groups of users uncomfortable speaking English (despite the fact that the help desk had promised to have people speaking all major European languages). Problem solving was more difficult through the help desk than it had previously been through local support personnel. This meant that the support of Bridge was far more complex than desired.

4.2 Subsurface ecology

Contrary to much of the focus in CSCW, collaboration between actors is often not supported by a single system; interactions emerge from being ‘juggled’ between multiple ones. Collaboration is patched together by the criss-crossing of a collection of systems (or, to say the same thing in the frame developed in this paper, the use of one II gets entangled with other IIs). Hepsø et al. (2009) provide an illustration in their study of the ‘subsurface community’ of an international oil and gas company. The subsurface community consists of different disciplines including geologists, geophysicists, reservoir engineers, production engineers, well engineers and process engineers. In their daily work, they rely on about 20 highly specialised (‘niche’) systems for collecting, analysing and presenting information that feeds on-going discussions and collective decision making.

To encourage more collaboration across disciplines and assets (i.e. geographical sites) within the subsurface community, three major infrastructures for collaboration have been introduced over the last 15 years. The community first relied on shared network disk drives organised to mirror the geographically dispersed assets, then on a strategic initiative to migrate to Lotus Notes, before the on-going migration to MS SharePoint.

Following the company’s listing on the New York Stock Exchange, there has also been renewed and vigorous attention to systematic documentation. In response to major financial scandals (notably Enron), new legislation has emerged both in the US and in Europe to increase the traceability and accountability of business transactions and critical decision processes. In the company, this implied increased pressure on the subsurface community—the principal source of value generation in an oil and gas company—to make business-relevant decision making more transparent and auditable.

Despite intentions of migration (i.e. gradual substitution of the new infrastructure for the old), what has taken place in recent years is a historical stratification resulting from the superimposition of the new on top of the old. For members of the subsurface community, this means that locating information relies on a combination of ‘indices’ and heuristics for navigating across the layers. As noted by an experienced production engineer, this is not a straightforward task for new staff:

If you didn’t follow the well from its inception, there is no way you can know where to find the information or what kind of information that is available. Thus, it is also impossible to just use the search engine

As expressed by a production engineer:

The G-drive is a good alternative. You can always expect it to exist. But, again, the problem is that we have a complex tree-structure [of folders] and you need to have been working here for some years in order to find something.

As stated by an experienced production engineer responsible for a number of sub-sea wells, the problem is “that you don’t get all the data needed in one single system”. For example, when conducting ‘well testing’, the production engineers use a front end to the logging system in order to survey the wells’ temperatures, pressures and rates. If a test is successful, information about a certain well is transferred to the production systems. However, this information is neither sufficient nor specific enough to calculate production rates. To compensate, engineers will typically select a data set representing a certain period of time (for example a month), and then import it into a spreadsheet using a pre-programmed macro function. However, since each subsea installation consists of a template with four to six wells, and rates need to be estimated for each well, this is done more or less ‘by hand’ in the spreadsheet. As one production engineer reports: “We have to manually assign production to the different wells”.
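
To give a flavour of the routine calculation this manual spreadsheet work involves, the following is a minimal, purely illustrative sketch (written in Python rather than as a spreadsheet macro; the well names and figures are hypothetical and not taken from the case) of how a template’s measured monthly production might be split across its wells in proportion to their most recent test rates:

    # Illustrative only: allocate a template's measured monthly production across
    # its wells in proportion to their most recent well-test rates. All figures
    # are hypothetical; in practice engineers do this 'by hand' in a spreadsheet.

    def allocate_production(template_total, test_rates):
        """Split a template's measured monthly production across its wells,
        proportionally to each well's last tested rate."""
        total_test = sum(test_rates.values())
        return {well: template_total * rate / total_test
                for well, rate in test_rates.items()}

    if __name__ == "__main__":
        template_total = 120_000                     # hypothetical monthly volume
        test_rates = {"A-1": 950, "A-2": 720,        # hypothetical test rates for
                      "A-3": 610, "A-4": 480}        # the template's four wells
        for well, volume in allocate_production(template_total, test_rates).items():
            print(f"{well}: {volume:,.0f}")

The arithmetic itself is trivial; the point, as the engineers’ accounts above suggest, is that it has to be repeated by hand for every template and period because no single system holds all the data required.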

The subsurface community is thus not so much using one II as multiple ones. Its members competently devise routines and mechanisms to assess, compare and triangulate the results from one II with those of others.

5 Discussion: ‘implications for design’, revisited

CSCW has spent considerable energy on discussions about how to inform design from micro-oriented, typically ethnographically inspired workplace studies (Hughes et al. 1992; Plowman et al. 1995; Voss et al. 2009; Karasti 2001; Randell and Shapiro 1992). The rich empirical picture of workplace activities that can be achieved by ethnographic research is envisaged as helping to overcome the difficulties encountered with traditional methods of ‘requirements capture’, which only engage with formal descriptions of how work tasks are supposed to be undertaken. By drawing attention to the range of informal procedures through which work goals are carried out, including dealing with frequent ‘abnormal instances’, such studies were seen as providing the information required for designing tools and systems that could better support the ways in which work activities are actually performed (Plowman et al. 1995; Luff et al. 2000). However, this goal has proved somewhat elusive for a number of reasons (Ackerman 2000; Schmidt 2000; Stewart and Williams 2005). In particular, those involved in design stressed the difficulties of packaging sociological understandings into a form that could inform workplace analysis and design (Dourish 2006; Fitzpatrick 2003). The social scientists involved conversely raised two sets of questions. The first revolved around what kind of empirical investigation was needed to acquire an adequate understanding of work settings. The developers’ need for timely information about potential users and uses, and the prohibitively high costs of protracted, labour-intensive ethnographic research, prompted suggestions for the adoption of ‘quick and dirty’ ethnographies, which could yield information better targeted to designers’ needs and in more manageable volumes (Hughes et al. 1992; Anderson 2000; Martin et al. 2006).

As Karasti (2001, p. 235) points out, “[m]any studies have described ethnographers as mediators between the work place and systems development”. In a critical response to such a role for ethnography, Dourish (2006, p. 543) complains that this would unduly reduce workplace studies and ethnography to a mere “toolbox of methods for extracting data from settings,” so “aligned with the requirements-gathering phase of a traditional development model”. Dourish (ibid.) goes on to note an almost ritual adherence in research papers to closing with a section entitled ‘Implications for design’.

Despite differences in position within this debate, there is a shared—and, from an II perspective, highly problematic—assumption that ‘design’ is a relatively local activity (i.e. confined in time and space). This is exactly the opposite of what an II perspective would suggest. Rather—and as was shown with the examples above—infrastructure design is an activity distributed in both time and space that involves a large number of actors of various kinds. Infrastructures are not designed from scratch—they are normally designed by modifying and extending what already exists. So the infrastructure as it currently is—the installed base—wields a strong influence on what it may become in the future. Accordingly, some of us have called infrastructure design ‘installed base cultivation’ (Hanseth et al. 1996; Hanseth and Lyytinen 2010). We supplement our critique of ‘localist’ understandings of design with two constructive contributions. The first (‘extended views of design’) offers analytical tools for capturing how technologies are shaped across multiple spaces and timeframes, which are particularly relevant for an II-influenced methodological approach. The second (‘enabling infrastructure growth’) offers concepts for informing infrastructure design, i.e. the identification of key design qualities of II-influenced technological development processes.

5.1 Extended views of design

Pollock and Williams (2009) have argued for the study of the ‘biography’ of a technology. This idea points to how the career of a workplace technology is often played out over multiple time frames and settings. In setting out the Biography of Artefacts (BoA) perspective, which is closely tied to an II perspective, Williams and Pollock (2012) highlight how current conceptions of the development and evolution of ERP systems have limitations in respect of time, space, actors, and the broader institutional context.

5.1.1 Temporal framing

Longitudinal (‘biography’) perspectives on collaborative technology are scant (Karasti et al. 2010). Temporal framing is important because current research designs tend to be relatively short term compared to the extended timeframes involved in technological development, which is often not confined to one short period (such as the initial development phase) but can extend across years and even decades. One only has to look back at how user organisations played a significant role in the development and evolution of new industrial IT applications.

When generic systems like ERP were implemented, they inevitably had to be adapted and tailored to fit the technical and operational circumstances of the adopting organisation. The recent history of information systems has shown that the processes of ‘adaptation’ and ‘domestication’ that occurred as users attempted to make these generic systems useful for their organisations often threw up useful innovations that were then fed back into future technology supply (Williams et al. 2005). Industrial automation artefacts were thus seen to evolve through successive cycles of technical development and industrial implementation and use—what Fleck et al. (1990) called a ‘spiral of innovation’—as the generic package was developed and applied in ever more settings, and over extended periods of time.

5.1.2 Spatial framing and actors

What Pollock and Williams (2009) have also shown is that the design, development and evolution of large ERP packages do not occur in one space only (such as the vendor organisation); a number of other sites—arenas that do not normally receive CSCW’s attention—are also highlighted. For instance, in the past this spiral of innovation ran between the vendor and a single user (or a small number of users) and relied on the individual ability of vendor programmers to identify a feature or piece of functionality that might be of interest to other user firms. Today, however, there is evidence to suggest that various players have become more systematic about identifying such developments (Pollock and Williams 2009). The ‘user group’ in particular has taken on a more formal role in finding and diffusing local innovations. When a vendor solution is seen to lack functionality, for instance, it is now common for the user group itself to seek a solution. Sometimes this is through pressurising the vendor (much like a ‘political lobby’); increasingly, the group may also decide to develop software directly. Mozaffar (2011) observed this in her study of a UK ERP user group when a major deficiency of the system became apparent: the system lacked functionality for an essential task called ‘encumbrance accounting’. The group’s response was to produce a ‘white paper’ describing the problem and then to ask members from within the group to develop a new piece of software. This solution was then passed to other member organisations within the community and back to the vendor (where it is now included as a standard feature in the generic ERP application).

5.1.3 Technological field

Pollock and Williams (2011) suggest that one implication of the current temporal and spatial framing of ERP studies is that they are often blind to the ways in which the take-up of packaged solutions and their subsequent evolution are shaped by developments within the wider terrain or technological field (Swanson and Ramiller 1997). They point, for instance, to how certain actors seek to establish boundaries around a technological field and draw up signposts about the state of the industry/technology and its future development (Pollock and Williams 2010). In particular, they have shown how visions of emerging technologies help mobilize the material and intellectual resources needed for innovation. To give one important example, it was the industry analyst Gartner that initially coined the terminology of the technological field that became known as ‘enterprise resource planning’ back in the 1990s. Not only did Gartner promote this name, it went on to set out what functionality should be contained within ERP systems. This was important in relation to technology design because there is evidence that vendors often adjust their technologies and product road maps in line with the development trajectories set out by these and other experts (Pollock and Williams 2011).

5.2 Enabling infrastructure growth: bootstrapping and generativity

What, then, are the key qualities of the process of design—initiation, cultivation and growth—of IIs? Infrastructures are complex, so managing their evolution is largely about managing network effects and path dependency (see Shapiro and Varian 1999). Network effects emerge because an infrastructure is a shared resource used by a large community of users. This is particularly notable and visible in relation to infrastructures like e-mail, Facebook and the inter-organisational network systems that have emerged, for example, for electronic business. But network effects also play major roles in the evolution of IIs of the kinds presented above. Network effects are generated primarily because the value of an infrastructure for its users is largely derived from the number of users using it. Accordingly, the value of an infrastructure is high for an individual when a large number of users are already using it, but low for initial adopters when there are few users (later adopters are also able to draw on knowledge, experience and in some cases innovations not available to earlier ones). This means that most users will find value in a particular infrastructure only because other users have adopted it first.

Network effects create a chicken-and-egg problem not addressed by traditional design approaches. Designing new infrastructures requires that we break this deadlock. To promote initial uptake it is important to design the first version of the new service so that it delivers sufficient value, and is sufficiently easy to implement and use, to motivate the very first users to adopt it even before network effects are achieved. As numbers grow, further users will adopt the infrastructure because of the value generated by the numbers of users already on board. An infrastructure, once established, will therefore tend to grow of its own accord: because of network effects, the larger it is, the more attractive it becomes. The critical challenge, however, is to bring the network into existence and make it grow in the first place (i.e. bootstrapping the network) (Hanseth and Aanestad 2003; Hanseth and Lyytinen 2010). Successful bootstrapping of a new infrastructure also requires that adoption is as cheap and simple as possible for the first users, not only the later ones. That can normally best be achieved by using existing infrastructures (i.e. the installed base) as much as possible.
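
The interplay between stand-alone value and network effects can be made concrete with a minimal, purely hypothetical adoption model (a sketch under our own simplifying assumptions, not an account of the cases above): each potential user adopts once the infrastructure’s value, modelled as a stand-alone component plus a component proportional to the number of existing users, exceeds that user’s adoption cost.

    # Minimal, hypothetical model of bootstrapping under network effects.
    # value(n) = standalone_value + network_coefficient * n, where n is the
    # number of current adopters; a potential user adopts when this value
    # exceeds their individual adoption cost. All figures are illustrative.

    def simulate_adoption(costs, standalone_value, network_coefficient, rounds=100):
        """Return the number of adopters once no further users are willing to join."""
        adopters = 0
        for _ in range(rounds):
            value = standalone_value + network_coefficient * adopters
            new_total = sum(1 for c in costs if c <= value)
            if new_total == adopters:   # no further growth
                break
            adopters = new_total
        return adopters

    if __name__ == "__main__":
        costs = list(range(1, 101))     # 100 potential users with adoption costs 1..100
        # With no stand-alone value the network never bootstraps (prints 0) ...
        print(simulate_adoption(costs, standalone_value=0, network_coefficient=1.0))
        # ... while a modest stand-alone value lets the first users in, after which
        # network effects carry adoption to the full population (prints 100).
        print(simulate_adoption(costs, standalone_value=2, network_coefficient=1.0))

The sketch simply restates the argument above: without some initial stand-alone value the deadlock is never broken, whereas once early adopters are on board, growth becomes self-reinforcing.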

When an infrastructure (or network) starts to grow, another challenge emerges. Because of its complexity, it becomes harder and harder to change, and because of network effects it becomes less and less attractive to start using an alternative. To help overcome this path dependency problem, it is important to keep the infrastructure as simple and flexible as possible; at the same time, gateways (Hanseth et al. 1996; Hanseth 2001; Edwards et al. 2007) may be important tools for enabling a smooth transition from one infrastructure to a new, improved one. Another strategy for stimulating the continuous growth and improvement of an infrastructure is to make it as generative as possible (Zittrain 2006). This formulation should be seen as an attempt to generalise from the insights about why the design of the Internet has been such a success.
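
What such a gateway can amount to may be illustrated with a minimal, hypothetical sketch (the record formats and field names below are invented for illustration; real gateways range from protocol converters to full data-exchange services): a thin converter lets the new infrastructure read records still held in the old one, so that the two can coexist during a gradual transition rather than requiring a wholesale switch.

    # Hypothetical sketch of a 'gateway' in the infrastructural sense: a thin
    # converter that lets the new infrastructure read records still held in a
    # legacy one, allowing the two to coexist during a gradual transition.

    from dataclasses import dataclass

    @dataclass
    class LegacyRecord:        # format assumed for the old infrastructure
        doc_id: str
        body: str
        keywords: str          # semicolon-separated free text

    @dataclass
    class NewRecord:           # format assumed for the new infrastructure
        doc_id: str
        body: str
        tags: list             # structured list of tags

    def gateway(record: LegacyRecord) -> NewRecord:
        """Translate a legacy record into the new format on the fly."""
        tags = [k.strip() for k in record.keywords.split(";") if k.strip()]
        return NewRecord(doc_id=record.doc_id, body=record.body, tags=tags)

    if __name__ == "__main__":
        old = LegacyRecord("W-1042", "Well test report ...", "well test; template A; rates")
        print(gateway(old))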

Generativity is “the essential quality animating the trajectory of information technology innovation” (Zittrain 2006, p. 1980). It “denotes a technology’s overall capacity to produce unprompted change driven by large, varied, and uncoordinated audiences” (ibid., p. 1980) (a form of design that contrasts sharply with the previously described ERP vendor strategy of ‘generification’). Zittrain argues that the grid of computers connected by the Internet has developed in such a way that it is consummately generative. More specifically, the Internet has proved to be a generative infrastructure by enabling the continuous and rapid development of new innovations extending its overall functionality and the range of services provided. The generative capacity of the Internet is attributed, by Zittrain and others, to the combination of its end-to-end architecture and the programmability of its terminal nodes (i.e. the computers linked to the network). End-to-end architecture means that the network’s functionality is located at the network’s ends, in the nodes, rather than in the network itself.

The way in which the Internet caters for a diversity of user requirements differs in many ways from the ERP model, where the functionality is located in what is described as a ‘generic kernel’ which cannot easily be modified or programmed after its production. The idea has been to paint the organisational reality of adopters onto this kernel by developing numerous ‘templates’, which users can then select and tailor to meet their local conditions. These templates form the ‘outer layer’ of the package and are built up by vendors over time through interactions with past customers. Whilst, in principle, anybody with a computer connected to the Internet may develop and provide new services, in the case of ERP it has predominantly been the vendor who has controlled the development of the generic kernel and decided which innovations are included in it. However, more recent developments, in which ERP users join forces and develop add-on modules independently of the vendor (see 5.1.2), also illustrate the need to make ERP systems more generative.
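
The contrast can be caricatured in a short, deliberately simplified sketch (the template names, configuration options and the encumbrance-accounting extension are hypothetical stand-ins, not an actual ERP API): in the ‘generic kernel’ model adopters can only select and tweak vendor-supplied templates, whereas a more generative design also exposes a point at which users and third parties can register functionality the kernel lacks, much as the user group described in 5.1.2 did.

    # Deliberately simplified contrast between the two design styles discussed
    # above. All names are hypothetical and do not describe any actual ERP product.

    # 'Generic kernel' model: adopters can only select and tailor templates
    # supplied by the vendor; the kernel itself is closed to modification.
    VENDOR_TEMPLATES = {
        "public_sector": {"fiscal_year_start": "04-01", "encumbrance_accounting": False},
        "manufacturing": {"fiscal_year_start": "01-01", "encumbrance_accounting": False},
    }

    def configure_kernel(template_name, overrides):
        """Select a vendor template and tailor it within the options it allows."""
        config = dict(VENDOR_TEMPLATES[template_name])
        config.update(overrides)
        return config

    # Generative model: an extension point open to users and third parties,
    # not only the vendor, so missing functionality can be added from outside.
    EXTENSIONS = {}

    def register_extension(name, handler):
        EXTENSIONS[name] = handler

    # e.g. the user-group-developed encumbrance accounting mentioned in 5.1.2
    register_extension("encumbrance_accounting",
                       lambda commitments: {"committed_total": sum(commitments.values())})

    if __name__ == "__main__":
        print(configure_kernel("public_sector", {"fiscal_year_start": "07-01"}))
        print(EXTENSIONS["encumbrance_accounting"]({"PO-1": 500, "PO-2": 1250}))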

Zittrain defines generativity in more detail as a function of a technology’s capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility. Leverage describes the extent to which a system enables valuable accomplishments that would otherwise be either impossible or not worth the effort to achieve. Adaptability refers to the breadth of a technology’s use without change and the readiness with which it might be modified to broaden its range of uses. A technology’s ease of mastery reflects how easy it is for broad audiences to adopt and adapt it: how much skill is necessary to make use of its affordances for tasks they care about, regardless of whether the technology was designed with those tasks in mind. Accessibility reflects how readily people can come to use and control a technology, including how easily they can obtain whatever information is required to master it.

The concepts of bootstrapping and generativity are closely associated with platform-centric architectures, whose importance is illustrated by the growing popularity of ‘platform-centric ecosystems’, typical examples of which are found in the domains of smartphones, social media, web browsers, web services, etc. (Gawer 2009; Tiwana et al. 2010). The way the Bridge infrastructure evolved can very well be seen as a need-driven process in which it bootstrapped from an II composed of a core set of basic desktop applications towards a more generic platform-centric ecosystem. The same is the case with the add-ons to ERP systems developed independently by users. More recently, we have also observed vendors of Electronic Patient Record systems redesigning their products from monolithic solutions towards a platform containing the core functionality, to which both user organisations and the vendor can add new modules.

This perspective involves a radical respecification of what is entailed by design, as traditionally conceived. This includes acknowledging how vendor offerings are always ‘unfinished’ in relation to final user requirements and uses (which are themselves evolving in response to the availability of new technical capabilities and affordances), and must be completed by various intermediaries (e.g. system implementers) as well as through the creative engagement of final users (user organisations, work groups and individuals) in domesticating them to their purposes (Williams et al. 2005). In this extended view of design, a wide range of actors, in addition to systems engineers, contribute to the design process through processes of artful ‘configuration’: the selection from a range of already developed elements and options to meet particular local contingencies and purposes (Stewart and Williams 2005).

6 Conclusion

In writing this paper we are not on a crusade against ethnographic or workplace studies—far from it. Rather, we are pushing for a broadening of the CSCW agenda to adequately capture new technological developments. We think that CSCW scholars are potentially well placed to contribute to the understanding of Information Infrastructures (IIs). However, in common with a number of other scholars (Ribes and Lee 2010), we suggest that CSCW might benefit from giving more attention to issues of methodology and analytical framework. Our call is all the more pertinent because it appears that the CSCW community has not responded adequately to the challenge embedded in Schmidt and Bannon’s (1992) paper some 20 years ago. We do not offer a thorough or robust explanation of why this is the case—that is beyond the scope of this paper—other than to note that the CSCW field has acquired some of the qualities of ‘normal science’, locked into particular taken-for-granted approaches. The CSCW-in-use community has developed entrenched practices, methods and perspectives which are routinely reproduced institutionally through its dominant outlets. These run the risk of generating reduced forms of analysis, or what Kallinikos (2004) has provocatively deemed the ‘here and now’ problem of practice-oriented studies.

We argue therefore for a gentle weaning of CSCW-in-use from its initial and founding preoccupations (the rather restricted, confined and specialised forms of cooperative work witnessed over the last two decades) towards a second wave of analyses that reflects the more open-ended agenda initially set out by Schmidt and Bannon (1992), an agenda also now being reflected in the studies that are beginning to appear in JCSCW and other socially oriented computing outlets (Ribes and Lee 2010). If, as Schmidt and Bannon’s (1992) review suggests, espoused-CSCW was constituted around a particular ‘problem situation’, today, we would argue, this problem situation has shifted somewhat because of important empirical developments.

Early CSCW scholars usefully drew attention to the gap between the formalised representations of organisational processes embedded in supplier offerings and the diverse circumstances of the user organisation with its complex, heterogeneous and difficult-to-formalise practices. CSCW has established beyond doubt that workplace information technologies—even instances of the same technology—have different effects depending on the organisational contexts in which they are implemented. These accounts have principally emphasised the importance of local action and contingency. In so doing they have drawn attention to the need for local discretion by user-organisation members to repair the deficiencies of computer-based systems which remain generic in comparison to the intricacy of organisational practice. As a result, many writers have problematised the claimed effectiveness of standardised (e.g. packaged) supplier offerings, stressing instead the need for extensive customisation or tailoring of computer-based systems to get them to match specific organisational practices. Some have even carried this argument further, to insist that information systems must be built around the unique exigencies of particular organisations (Hartswood et al. 2003).

However, there have been increasing signs that new kinds of issues and problems are beginning to emerge. We have argued the need for CSCW to take account of the ways in which information systems at work have changed over the last 20 years—a period in which the systems environment (ecosystem?) has become widely populated by the array of increasingly integrated intra-organisational and inter-organisational systems that we have referred to as Information Infrastructures. The fact that we have been forced to articulate this in 2012 as a programmatic statement suggests that CSCW needs a more prospective viewpoint—it should look forward to current and emerging developments in technology and work. Developments that the CSCW agenda needs to take on board over the coming decade(s) may include (some of which, we are happy to note, are included in this jubilee issue of the JCSCW):

  • New models for the provision of computing services—Software as a Service (SaaS) and ‘cloud computing’—which potentially alter the relationship between the developer and the user organisation, as well as the scope for local customisation and adaptation;

  • Web 2.0-based approaches and Social CRM-like systems which make it easy for ‘users’ with limited computing skills to share information across multiple sites;

  • Ubiquitous and ambient computing: networking of devices and systems that generate huge volumes of information; the application of knowledge management and modelling techniques to make this deluge of information more manageable;

  • Ideas about ‘social computing’ that integrate several of these trends into a longer-term vision of hybrid human-computer information systems—with particular emphasis on the emergence of new forms of work;

  • The establishment of platforms for ecosystems/ecologies, inspired by Apple’s App Store/iPhone or iPod/iTunes examples, that seek to cultivate communities of external developers (simultaneously tying them in) to add value and visibility to the platform.

From our review of the literature in Section Two it is evident that there exist a few CSCW contributions drawing on an II perspective. It would thus be too crude, and unfair, to simply criticise CSCW for not engaging with IIs at all. However, it is also true that the bulk of these contributions employ II in a fairly modest manner, typically as a convenient vocabulary providing useful concepts that are subsequently attached as labels to selected empirical events and episodes. What is lacking is a more ambitious agenda that uses II to draw out stronger and more compelling implications of a theoretical, methodological and practical nature. We believe this to be the proper challenge for a renewed CSCW agenda.