
Designing Tools for System Administrators: An Empirical Test of the Integrated User Satisfaction Model

Nicole F. Velasquez, Suzanne Weisband, and Alexandra Durcikova - University of Arizona

Pp. 1–8 of the Proceedings of the 22nd Large Installation System Administration Conference (LISA '08)
(San Diego, CA: USENIX Association, November 9–14, 2008).

Abstract

System administrators are unique computer users. As power users in complex and high-risk work environments, intuition tells us that they may have requirements of the tools they use that differ from those of regular computer users. This paper presents and empirically validates a model of user satisfaction within the context of system administration that accounts for the needs of system administrators. The data were collected through a survey of 125 system administrators and analyzed using structural equation modeling techniques. The empirical results of this preliminary investigation demonstrate that user satisfaction models are appropriate in the context of system administration and support the idea that system administrators have unique system and information needs for the tools they use.

Introduction

System administrators (sysadmins) are becoming increasingly important as organizations continue to embrace technology. With responsibilities that can include the installation, configuration, monitoring, troubleshooting, and maintenance of increasingly complex and mission-critical systems, their work distinguishes them from everyday computer users, and even from other technology professionals. As technology experts and system power users, sysadmins are clearly not novice users; however, most software is designed with novices in mind [Bodker, 1989]. Sysadmins' broad areas of responsibility often result in a ``juggling act'' of sorts, quickly moving between tasks and often not completing a given task from beginning to end in one sitting [Barrett, et al., 2004].

Also differentiating system administrators from regular end users of computer systems is the environment in which they work. As more business is conducted over the Internet, simple two-tier architectures have grown into complex n-tier architectures, involving numerous hardware and software components [Bailey, et al., 2003]. Because this infrastructure must be managed nearly flawlessly, the industry has seen system management costs exceed system component costs [IBM, 2006; Kephart and Chess, 2003; Patterson, et al., 2002]. In addition, any system downtime can result in significant monetary losses. Although many vendors are exploring automated system management to cope with these complex and risky environments [HP, 2007; IBM, 2006; SunMicrosystems, 2006], these tools offer little comfort to system administrators, as the sysadmins are often held responsible for any system failures [Patterson, et al., 2002].

Citing the unique problems they face because of the complex systems they manage, their risky work environment, and their power-user access, authorities, and skills, Barrett, et al. [Barrett, et al., 2003] call for a focus on system administrators as unique users within HCI research. By examining the work practices of sysadmins, practitioners can design and develop tools suited to their specific needs. With the human cost of system administration now exceeding total system cost [IBM, 2006], the importance of catering to these specialized users is apparent.

To investigate tool features important to system administrators, we utilized a multi-method approach, including semi-structured interviews and a review of previous system administrator research. Our study participants included both junior and senior system administrators whose work responsibilities included the administration of networks, storage, operating systems, web hosting, and computer security. The system administrators we studied worked in enterprise or university settings. Our observations of and conversations with our participants allowed us to gain a better understanding of how the work is accomplished. Semi-structured interviews gave us the opportunity to ask more pointed questions about sysadmins' motivations and the reasons for their particular work routines, and allowed us to collect their opinions on why they choose to use or not use a given tool to accomplish their work. With the insights we gained from these investigations, we turned our efforts to a review of the existing system administrator studies to confirm our findings.

Important Characteristics

The strength of a focused investigation of technology-in-use lies in its ability to identify realistic solutions and guide potential designs [Button and Harper, 1995]. By examining the work of system administrators and reviewing previous studies of system administrators (e.g., [Bailey, et al., 2003; Bailey and Pearson, 1983; Barrett, et al., 2004; Button and Harper, 1995; Fitzpatrick, et al., 1996; Haber and Bailey, 2007; Haber and Kandogan, 2007]), we generated a list of attributes that appear to be important to system administrators, summarized in Table 1 below. (The reader should note that many attribute definitions were refined throughout the project, referencing the attribute definitions provided in [Wixom and Todd, 2005].)

Upon further inspection, these characteristics fall into two categories: attributes of the information supplied by the system and attributes of the system itself. This classification can be seen in Table 1.


Information Attributes    System Attributes
----------------------    -------------------
Logging                   Flexibility
Accuracy                  Scalability
Completeness              Monitoring
Format                    Situation Awareness
Currency                  Scriptability
Verification              Accessibility
                          Integration
                          Speed
                          Reliability
                          Trust

Table 1: Information and system attributes.

Model and Theory

Although the above list of characteristics important to system administrators is interesting, it does little more than summarize observations and offer untested guidance to practitioners. Without evidence that these characteristics will influence a system administrator to use a particular tool, practitioners will be reluctant to invest the time and money needed to implement these features. The goal of this study is to understand the link between these characteristics and their impact on system administrator perceptions and ultimately, use of the system.

[Wixom and Todd, 2005] present a modification of DeLone and McLean's original user satisfaction model [DeLone and McLean, 1992] that links system and information satisfaction with the behavioral predictors found in technology acceptance literature [Davis, 1989], perceived ease of use and usefulness. They argue that the object-based attitudes and beliefs expressed in system quality, information quality, system satisfaction, and information satisfaction affect the behavioral beliefs that are captured in ease of use and usefulness. These behavioral beliefs, in turn, influence a user's behavior (i.e., their use or non-use of a system). Essentially, this new model represents a theoretical integration of user satisfaction and technology acceptance theories. The strength of the model lies in its ability to guide IT design and development and predict system usage behaviors. System and information quality antecedents offer concrete attributes important to the user that can be addressed and tested throughout the system development lifecycle (see Figure 1).


Figure 1: Modified user satisfaction model.

Because system administrators are still computer users in the general sense, we expect the overall theoretical model to hold. Their unique work environment, technical background and job requirements, however, suggest that they may have different needs when using computers or software applications to do their jobs. Previous studies (e.g., [Bailey and Pearson, 1983; Baroudi and Orlikowski, 1987; Davis, 1989]) have focused on a relatively small number of characteristics that, although telling in their underlying structure [Wixom and Todd, 2005], have been criticized for investigating arbitrary system attributes [Galletta and Lederer, 1989]. The analysis of system administrator work practices above identifies system and information quality attributes (i.e., antecedents) that are meaningful and important to system administrators.

To summarize, research suggests that system administrators may be unique users with system and information requirements that are different from the requirements of regular computer users. We have presented a modified user satisfaction model that links system design attributes to end user satisfaction and system use, presenting an opportunity to measure the impact that these identified attributes have on system administrator beliefs and tool usage. We believe that this model provides researchers guidance for adapting existing user information satisfaction models for tools used by system administrators. Next, we present the methodology used to empirically test the model.

Methodology

System administrators use a self-selected suite of tools to do their work. Our interviews showed that many system administrators within the same organization and even on the same team use different tools and different sets of tools to perform the same tasks. Given this variability of tool choice and use, the difficulty in gathering survey responses from hundreds of system administrators on one particular tool was apparent. As such, we opted to administer the survey to sysadmins of all types (e.g., network administrator, operating system administrator, web administrator, etc.); we asked each participant to identify the tool they used most often in their jobs and complete the survey with that one particular tool in mind. Because the surveys were completed for a tool used most often by the participants, their intention to use the tool is implied; as such, our survey instrument tested all aspects of the model leading up to and including the sysadmin's behavioral attitude towards use of the tool. That is, we did not test the intention to use a tool, because we know the tool is already in use.

Instrument Development

A survey methodology was utilized to collect the data for this study. Once the constructs were identified (i.e., the information and system attributes identified above), corresponding measurement items were researched. When possible, previously validated measures were used. Measurement items for the new constructs (i.e., credibility, scalability, scriptability, situation awareness, and monitoring) were developed following Churchill's methodology [Churchill, 1979]. Items were created based on construct definitions and components identified in the literature. Next, a sorting task was used to determine face and discriminant validity. Each measurement item was written on a 3x5 note card and all cards were shuffled. Three professional system administrators were asked to sort the cards into logical groups and name each group. Each sysadmin sorted the items into the five groups and specified similar identifying terms. Based on participant feedback, the wording of some items was slightly modified. These constructs were measured on a seven-point scale anchored at ``Very strongly disagree'' and ``Very strongly agree.''

Before implementing the survey, paper-based surveys were created with input from colleagues in academia and IT. Next, the instrument was pre-tested with three system administrators. While some wording was edited for clarity, no major issues were reported with the survey instrument. An online version of the survey instrument was then pre-tested by 24 system administrators. Based on feedback and responses to the pilot survey, minor modifications were made. The final survey included 64 items representing the 23 constructs, as well as demographic information. Table 2 summarizes the constructs, the number of items, and the references.


Construct                   Items   Ref
Completeness                2       W&T
Accuracy                    3       W&T
Format                      3       W&T
Currency                    2       W&T
Logging                     2       New
Verification                2       New
Reliability                 3       W&T
Flexibility                 3       W&T
Integration                 2       W&T
Accessibility               2       W&T
Speed                       2       W&T
Credibility                 5       New
Scalability                 3       New
Scriptability               3       New
Situation Awareness         4       New
Monitoring                  3       New
Information Quality         2       W&T
System Quality              2       W&T
Information Satisfaction    2       W&T
System Satisfaction         2       W&T
Ease of Use                 2       W&T
Usefulness                  3       W&T
Attitude                    2       W&T

Table 2: Constructs (W&T = Wixom and Todd, 2005).

Sample

To obtain survey participants, an announcement was posted on professional system administrator association message boards (e.g., LOPSA and SAGE) and emailed to prospective participants upon request. In order to reach as many system administrators as possible, participants were also invited to refer fellow system administrators to the study. A web-based survey method was selected because of its ease of distribution and data collection and the targeted respondents' access to the Internet and familiarity with web-based applications and tools.

Survey respondents were professional system administrators solicited through professional association message board postings. After incomplete responses were removed, 125 fully completed surveys remained. The average time to complete the survey was 23 minutes. Of the survey respondents, 91.2% were male and 8.8% were female. The age of respondents ranged from 21 to 62, with an average age of 37.5. Participants reported working at their current organization for an average of 5.40 years (ranging from three weeks to 26 years) and working as a system administrator for an average of 12.39 years (ranging from two years to 29 years). Participant demographics were similar to those found in the 2005-2006 SAGE Salary Survey [SAGE, 2006], considered the most comprehensive survey of system administrator personal, work, and salary demographics. These similarities suggest our survey sample is representative of system administrators. Almost half of our survey participants worked for for-profit organizations and companies (49.6%), including manufacturing, high tech, and finance. The next largest number of respondents (38.4%) worked in academic settings, while others worked for non-profit organizations (5.6%), government agencies (5.6%), or in research (0.8%).

Descriptive statistics for the importance of each attribute, as reported by the participants, can be seen below in Table 3.


Attribute              Minimum   Maximum   Mean   Std. Deviation
Accuracy               3         5         4.74   0.506
Accessibility          2         5         3.98   0.762
Completeness           1         5         3.74   0.870
Credibility            1         5         4.57   0.700
Currency               2         5         4.23   0.709
Flexibility            1         5         3.92   0.947
Format                 1         5         3.58   0.900
Integration            1         5         3.50   0.947
Logging                1         5         3.62   0.982
Monitoring             2         5         3.78   0.906
Reliability            3         5         4.68   0.576
Situation Awareness    1         5         3.72   0.876
Scalability            2         5         3.79   0.927
Scriptability          1         5         4.12   0.993
Speed                  2         5         3.66   0.782
Usefulness             2         5         4.31   0.745
Verification           1         5         3.38   0.904

Table 3: Importance of attributes identified.

Results

The strength of the measurement model was tested through its reliability, convergent validity, and discriminant validity. Reliability is established with Cronbach's alpha [Nunnally, 1978] and Composite Reliability [Chin, et al., 2003] scores above 0.70, though Composite Reliability is preferred [Chin, et al., 2003] because Cronbach's alpha can be biased against short scales (i.e., 2-3 item scales) [Carmines and Zeller, 1979]. Following factor analysis, six items that loaded below the 0.70 level were dropped, resulting in constructs with Composite Reliability scores greater than 0.70, as shown in Table 4. Therefore, our measures are reliable. Convergent validity is established when average variance extracted (AVE) is greater than 0.50, and discriminant validity is established when the square root of AVE is greater than the correlations between the construct and other constructs. Table 5 shows the correlation matrix, with correlations among constructs and the square root of AVE on the diagonal. In all cases, the square root of AVE for each construct is larger than the correlation of that construct with all other constructs in the model. Therefore, we have adequate construct validity.
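To make these criteria concrete, the following Python sketch shows how the three statistics could be computed from item-level survey responses. It is illustrative only: the responses are randomly generated, the loadings are assumed, and the construct is hypothetical; this is not the analysis actually run for the study.

import numpy as np

def cronbachs_alpha(items):
    # Cronbach's alpha for a (respondents x items) response matrix.
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def composite_reliability(loadings):
    # Composite reliability from standardized loadings; error variance = 1 - loading^2.
    lam_sq = loadings.sum() ** 2
    return lam_sq / (lam_sq + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings):
    # AVE: mean of the squared standardized loadings.
    return float((loadings ** 2).mean())

# Hypothetical two-item construct with simulated responses from 125 participants.
rng = np.random.default_rng(0)
trait = rng.normal(size=125)
items = np.column_stack([trait + rng.normal(scale=0.4, size=125),
                         trait + rng.normal(scale=0.4, size=125)])
loadings = np.array([0.93, 0.95])               # assumed standardized loadings
print(cronbachs_alpha(items))                   # reliable if > 0.70
print(composite_reliability(loadings))          # reliable if > 0.70
print(average_variance_extracted(loadings))     # convergent validity if > 0.50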


Construct                  # Items   Cronbach's Alpha   Composite Reliability   AVE
Currency                   2         0.77               0.90                    0.81
Completeness               2         0.55               0.82                    0.69
Accuracy                   2         0.63               0.84                    0.73
Format                     3         0.94               0.96                    0.90
Logging                    2         0.90               0.95                    0.90
Verification               2         0.85               0.93                    0.87
Reliability                3         0.90               0.94                    0.83
Flexibility                3         0.80               0.88                    0.71
Integration                2         0.80               0.91                    0.83
Accessibility              2         0.69               0.87                    0.76
Speed                      2         0.81               0.91                    0.84
Scriptability              3         0.86               0.91                    0.78
Scalability                3         0.78               0.87                    0.70
Credibility                2         0.81               0.91                    0.84
Situation Awareness        3         0.78               0.87                    0.65
Monitoring                 2         0.79               0.88                    0.78
Information Quality        2         0.84               0.93                    0.86
System Quality             2         0.88               0.94                    0.89
Information Satisfaction   2         0.86               0.94                    0.88
System Satisfaction        2         0.91               0.96                    0.92
Usefulness                 3         0.77               0.87                    0.69
Ease of Use                2         0.72               0.87                    0.78
Attitude                   2         0.88               0.94                    0.89

Table 4: Reliability and validity analysis.

Discriminant and convergent validity are further supported when individual items load above 0.50 on their associated construct and when the loadings within the construct are greater than the loadings across constructs. Loadings and cross-loadings are available from the first author. All items loaded more highly on their construct than on other constructs and all loaded well above the recommended 0.50 level.
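As a minimal sketch of this item-level check (the loadings matrix, item names, and values below are hypothetical, not the study's actual loadings), each item's loading on its own construct can be compared against its cross-loadings:

import pandas as pd

# Hypothetical (item x construct) loadings; rows are items, columns are constructs.
loadings = pd.DataFrame(
    {"Logging": [0.95, 0.94, 0.40], "Verification": [0.45, 0.38, 0.93]},
    index=["log1", "log2", "veri1"],
)
assignment = {"log1": "Logging", "log2": "Logging", "veri1": "Verification"}

for item, own in assignment.items():
    own_loading = loadings.loc[item, own]
    cross = loadings.loc[item].drop(own)
    ok = own_loading > 0.50 and (own_loading > cross).all()
    print(f"{item}: own={own_loading:.2f}, max cross={cross.max():.2f}, ok={ok}")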

The proposed model was tested with Smart PLS version 2.0 [Ringle, et al., 2005], which is ideal for use with complex predictive models and small sample sizes [Chin, et al., 2003]. R2 values indicate the amount of variance explained by the independent variables and path coefficients indicate the strength and significance of a relationship. Together, R2 values and path coefficients indicate how well the data support the proposed model. User interface type (purely GUI, purely CLI, or a combination of GUI and CLI) was used as a control variable and was linked to both Information Quality and System Quality. A significant relationship was found to System Quality (path = 0.13, p < 0.05), but not to Information Quality.
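For readers unfamiliar with how path coefficients and R2 values are read off a structural model, the sketch below approximates the idea with equally weighted composite scores and ordinary least squares. This is not the PLS algorithm used for the analysis reported here (which was run in Smart PLS), and the data file and column names are hypothetical.

import numpy as np
import pandas as pd

def composite(df, item_cols):
    # Equally weighted, standardized composite score for one construct.
    z = (df[item_cols] - df[item_cols].mean()) / df[item_cols].std(ddof=0)
    return z.mean(axis=1)

def fit_paths(y, X):
    # Standardized path coefficients and R^2 for one structural equation (OLS).
    ys = ((y - y.mean()) / y.std(ddof=0)).to_numpy()
    Xs = ((X - X.mean()) / X.std(ddof=0)).to_numpy()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    r2 = 1 - ((ys - Xs @ beta) ** 2).sum() / (ys ** 2).sum()
    return pd.Series(beta, index=X.columns), r2

# survey = pd.read_csv("responses.csv")            # hypothetical survey data file
# sysqual = composite(survey, ["sq1", "sq2"])      # hypothetical item columns
# ssat = composite(survey, ["ssat1", "ssat2"])
# paths, r2 = fit_paths(ssat, pd.DataFrame({"SystemQuality": sysqual}))
# print(paths, r2)  # the paper reports a path of 0.81 and R^2 = 0.67 for this link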

Figure 2 shows the results of the test of the model. All paths in the high-level user satisfaction model are supported. Only four attributes were significant: accuracy, verification, reliability, and credibility.

The results of the test of the research model can be interpreted as follows: Usefulness (0.40) and Ease of Use (0.50) both had a significant influence on Attitude, accounting for 63% of the variance in the measure. Information Satisfaction (0.53) and Ease of Use (0.22) had a significant influence on Usefulness and accounted for 48% of the variance in Usefulness. System Satisfaction (0.66) had a significant influence on Ease of Use and accounted for 44% of the variance in Ease of Use. Information Quality (0.61) and System Satisfaction (0.29) both had significant influences on Information Satisfaction, accounting for 74% of the variance in Information Satisfaction. System Quality (0.81) significantly determined System Satisfaction and accounted for 67% of the variance in that measure. Accuracy (0.58) and Verification (0.22) were significantly related to Information Quality and accounted for 55% of the variance in the measure. Reliability (0.36) and Credibility (0.38) were significantly related to System Quality and accounted for 75% of the variance in System Quality.


Figure 2: Research model results.

Abbreviations: ACC = Accuracy, ACCESS = Accessibility, ATT = Attitude,
COMPL = Completeness, CRED = Credibility, CURR = Currency, EOU = Ease of Use,
FLEX = Flexibility, FMT = Format, INT = Integration, IQUAL = Information Quality,
ISAT = Information Satisfaction, LOG = Logging, MON = Monitoring,
REL = Reliability, SA = Situation Awareness, SCALE = Scalability,
SCRIPT = Scriptability, SPEED = Speed, SQUAL = System Quality,
SSAT = System Satisfaction, USEF = Usefulness, VERI = Verification.

         ACC    ACCESS ATT    COMPL  CRED   CURR   EOU    FLEX   FMT    INT    IQUAL  ISAT
ACC      0.85
ACCESS   0.51   0.87
ATT      0.55   0.58   0.94
COMPL    0.59   0.64   0.50   0.83
CRED     0.63   0.51   0.66   0.37   0.92
CURR     0.63   0.34   0.25   0.59   0.29   0.90
EOU      0.47   0.59   0.72   0.48   0.54   0.27   0.88
FLEX     0.37   0.42   0.54   0.34   0.56   0.10   0.34   0.84
FMT      0.54   0.58   0.43   0.63   0.29   0.52   0.47   0.05   0.95
INT      0.21   0.46   0.37   0.33   0.27   0.13   0.35   0.54   0.22   0.91
IQUAL    0.70   0.63   0.72   0.52   0.75   0.45   0.59   0.46   0.48   0.38   0.93
ISAT     0.60   0.73   0.73   0.53   0.68   0.37   0.61   0.46   0.49   0.43   0.85   0.94
LOG      0.22   0.15   0.24   0.33   0.13   0.21   0.15   0.37   0.25   0.33   0.17   0.12
MON      0.27   0.34   0.23   0.27   0.26   0.29   0.19   0.28   0.15   0.32   0.30   0.27
REL      0.62   0.43   0.63   0.34   0.80   0.27   0.51   0.48   0.27   0.20   0.67   0.59
SA       0.30   0.44   0.32   0.41   0.35   0.32   0.25   0.39   0.20   0.43   0.42   0.46
SCALE    0.43   0.22   0.44   0.27   0.59   0.13   0.33   0.49   0.07   0.19   0.44   0.38
SCRIPT   0.21   0.15   0.36   0.09   0.37   -0.02  0.15   0.77   -0.10  0.49   0.23   0.21
SPEED    0.46   0.38   0.54   0.34   0.54   0.20   0.43   0.43   0.12   0.22   0.52   0.43
SQUAL    0.59   0.46   0.71   0.30   0.80   0.21   0.57   0.60   0.27   0.33   0.74   0.66
SSAT     0.65   0.60   0.85   0.47   0.77   0.31   0.66   0.57   0.41   0.35   0.81   0.78
USEF     0.45   0.58   0.67   0.40   0.63   0.19   0.55   0.60   0.30   0.40   0.60   0.67
VERI     0.15   0.18   0.28   0.27   0.16   0.13   0.17   0.33   0.23   0.33   0.22   0.16

         LOG    MON    REL    SA     SCALE  SCRIPT SPEED  SQUAL  SSAT   USEF   VERI
LOG      0.95
MON      0.14   0.88
REL      0.18   0.23   0.91
SA       0.16   0.56   0.28   0.81
SCALE    0.11   0.17   0.57   0.22   0.84
SCRIPT   0.46   0.16   0.32   0.22   0.39   0.88
SPEED    0.18   0.21   0.62   0.15   0.42   0.34   0.92
SQUAL    0.24   0.23   0.78   0.25   0.53   0.46   0.57   0.94
SSAT     0.23   0.22   0.75   0.34   0.50   0.37   0.55   0.83   0.96
USEF     0.16   0.31   0.49   0.47   0.42   0.42   0.42   0.57   0.64   0.83
VERI     0.77   0.22   0.18   0.27   0.11   0.41   0.21   0.20   0.21   0.20   0.93

Table 5: Correlations between constructs. The numbers on the diagonal are the square root of AVE.
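The discriminant validity check behind Table 5 amounts to comparing each diagonal entry with the largest correlation involving that construct. The sketch below applies it to a three-construct excerpt of the table (Reliability, Credibility, and System Quality) purely for illustration; extending it to the full 23-construct matrix is straightforward.

import pandas as pd

constructs = ["REL", "CRED", "SQUAL"]
# Excerpt of Table 5: off-diagonal entries are inter-construct correlations,
# diagonal entries are the square root of AVE for each construct.
corr = pd.DataFrame(
    [[0.91, 0.80, 0.78],
     [0.80, 0.92, 0.80],
     [0.78, 0.80, 0.94]],
    index=constructs, columns=constructs,
)

for c in constructs:
    sqrt_ave = corr.loc[c, c]
    max_corr = corr.loc[c].drop(c).abs().max()
    print(f"{c}: sqrt(AVE)={sqrt_ave:.2f}, "
          f"largest correlation={max_corr:.2f}, ok={sqrt_ave > max_corr}")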


Discussion

These results suggest that at the macro level, system administrators are similar to regular computer users; the user satisfaction model is significant and predictive of their attitude towards computer system use. These results also confirm our intuition that at the micro level, system administrators have specific needs of a computer system that differ from regular users.

When looking at Information Quality, only one attribute found significant in other studies (e.g., [Wixom and Todd, 2005]) was supported: Accuracy. Other attributes previously found significant (Currency, Completeness, and Format) were not. Furthermore, one new attribute, Verification, was found significant. Some of these findings may be explained by the work practices of system administrators.

Findings show that accuracy and verification explain 55% of the variance in information quality. Information accuracy is a very real need for system administrators, and was found to be significant in this study. System planning, updating, and debugging are often done with only the information supplied by the system; rarely is a system administrator lucky enough to have a system failure that is physically apparent, so they must rely on the accuracy of the information supplied to them. Verification information was also found to be a significant influence on information quality, echoing findings from our earlier interviews and observations. While a log of previous actions taken on the system may be relatively simple to access, a list of the outcomes of previous actions may be more difficult to generate.

When looking at System Quality, again only one attribute found significant in other studies (e.g., [Wixom and Todd, 2005]) was supported: Reliability. Other attributes previously found significant (Flexibility, Integration, Accessibility, and Speed) were not. One new attribute, Credibility, was found significant.

Findings show that reliability and credibility explain 75% of the variance for system quality. The reliability of a system is of utmost importance; downtime in a large system can cost $500,000 per hour [Patterson, 2002]. It should come as no surprise, then, that the tools used to manage, configure, and monitor those systems need to be just as reliable. The credibility of a tool was also a significant finding in our study. Another study has found similar results [Takayama and Kandogan, 2006], reporting that trust was an underlying factor in system administrator user interface choice.

Conclusions

The purpose of this study was twofold: One, to empirically test the user satisfaction model in the context of system administration, and two, to identify and empirically test system and information attributes important to system administrators. We found that the theoretical model does hold for system administrators, and that they do, in fact, have unique needs in the systems they use.

This study has implications for both tool evaluation and design. By validating the appropriateness of the user satisfaction model in the context of system administration, it enables researchers to use this model to evaluate systems. The research has also identified four tool features that are significant to system administrators (accuracy, verification, reliability, and credibility); practitioners should strive to design tools with these attributes in mind.

Author Biographies

Nicole Velasquez is a Post-doctoral Research Associate at the University of Arizona and an enterprise systems tester with IBM. She has experience as a sysadmin, programmer, and systems analyst and earned her Ph.D. in Management Information Systems from the University of Arizona in 2008. Her research focuses on knowledge management systems, information systems success, usability, and system administrators. She can be reached at .

Suzie Weisband is an Eller Fellow and Associate Professor of Management Information Systems at the University of Arizona. She received her Ph.D. from Carnegie Mellon University in 1989. Her research focuses on collaboration and coordination in face-to-face and computer-mediated contexts, with a current focus on the dynamics of large-scale collaborations across multiple people, projects, and resources. She can be reached at .

Alexandra Durcikova is an Assistant Professor of Management Information Systems at the University of Arizona. She has experience as an experimental physics researcher and received her Ph.D. from the University of Pittsburgh in 2004. Her research focuses on knowledge management systems (KMS), the role of organizational climate in the use of KMS, and IS issues in developing countries. She can be reached at .

Bibliography

[Bailey, et al., 2003] Bailey, J., M. Etgen, and K. Freeman, ``Situation Awareness and System Administration,'' System Administrators are Users, CHI, 2003.
[Bailey and Pearson, 1983] Bailey, J. E. and S. W. Pearson, ``Development of a Tool for Measuring and Analyzing User Satisfaction,'' Management Science, Vol. 29, Num. 5, pp. 530-545, 1983.
[Baroudi and Orlikowski, 1987] Baroudi, J. and W. Orlikowski, ``A Short Form Measure of User Information Satisfaction: A Psychometric Evaluation and Notes on Use,'' https://dspace.nyu.edu, 1987.
[Barrett, et al., 2003] Barrett, R., Y. Chen, and P. Maglio, ``System Administrators Are Users, Too: Designing Workspaces for Managing Internet-Scale Systems,'' Conference on Human Factors in Computing Systems, 2003.
[Barrett, et al., 2004] Barrett, R., et al., ``Field Studies of Computer System Administrators: Analysis of System Management Tools and Practices,'' Proceedings of the 2004 ACM conference on Computer Supported Cooperative Work, pp. 388-395, 2004.
[Bodker, 1989] Bodker, S., ``A Human Activity Approach to User Interfaces,'' Human-Computer Interaction, 1989.
[Button and Harper, 1995] Button, G. and R. Harper, ``The Relevance of `Work-Practice' for Design,'' Computer Supported Cooperative Work (CSCW), 1995.
[Carmines and Zeller, 1979] Carmines, E. G. and R. A. Zeller, Reliability and Validity Assessment, 1979.
[Chin, et al., 2003] Chin, W. W., B. L. Marcolin, and P. R. Newsted, ``A Partial Least Squares Latent Variable Modeling Approach for Measuring Interaction Effects: Results from a Monte Carlo Simulation Study and an Electronic-Mail Emotion/ Adoption Study,'' Information Systems Research, Vol. 14, Num. 2, 189-217, 2003.
[Churchill, 1979] Churchill, G., ``A Paradigm for Developing Better Measures of Marketing Constructs,'' Journal of Marketing Research, 1979.
[Davis, 1989] Davis, F. D., ``Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology,'' Management Information Systems Quarterly, Vol. 13, Num. 3, pp. 319-340, 1989.
[DeLone and McLean, 1992] DeLone, W. H. and E. R. McLean, ``Information Systems Success: The Quest for the Dependent Variable,'' Information Systems Research, Vol. 3, Num. 1, pp. 60-95, 1992.
[Fitzpatrick, et al., 1996] Fitzpatrick, G., S. Kaplan, and T. Mansfield, ``Physical Spaces, Virtual Places and Social Worlds: A Study of Work in the Virtual,'' Proceedings of the 1996 ACM conference on Computer Supported Cooperative Work, 1996.
[Galletta and Lederer, 1989] Galletta, D. F. and A. L. Lederer, ``Some Cautions on the Measurement of User Information Satisfaction,'' Decision Sciences, Vol. 20, Num. 3, pp. 19-439, 1989.
[Haber and Kandogan, 2007] Haber, E. and E. Kandogan, ``Security Administrators in the Wild: Ethnographic Studies of Security Administrators,'' SIG CHI 2007 Workshop on Security User Studies: Methodologies and Best Practices, 2007.
[Haber and Bailey, 2007] Haber, E. and J. Bailey, ``Design Guidelines for System Administration Tools Developed through Ethnographic Field Studies,'' Proceedings of the 2007 Symposium on Computer Human Interaction for the Management of Information Technology, 2007.
[HP, 2007] HP, Adaptive Infrastructure, 2007, https://h71028.www7.hp.com/enterprise/cache/483409-0-0-0-121.aspx.
[IBM, 2006] IBM, Autonomic Computing: IBM's Perspective on the State of Information Technology, 2006, https://www.research.ibm.com/autonomic/manifesto/autonomic_computing.pdf.
[Kephart and Chess, 2003] Kephart, J. O. and D. M. Chess, ``The Vision of Autonomic Computing,'' IEEE Computer, Vol. 36, Num. 1, pp. 41-51, 2003.
[Nunnally, 1978] Nunnally, J. C., Psychometric Theory, 1978.
[Patterson, 2002] Patterson, D., ``A Simple Way to Estimate the Cost of Downtime,'' Proceedings of LISA '02, 185-188, 2002.
[Patterson, et al., 2002] Patterson, D., et al., ``Recovery-Oriented Computing (ROC): Motivation, Definition, Techniques, and Case Studies,'' Technical Report CSD-02-1175, 2002, https://roc.cs.berkeley.edu/papers/ROC_TR02-1175.pdf.
[Ringle, et al., 2005] Ringle, C. M., S. Wende, and S. Will, ``Smart PLS 2.0 (M3) Beta,'' 2005, https://www.smartpls.de.
[SAGE, 2006] SAGE, SAGE Annual Salary Survey 2005-2006, 2006.
[SunMicrosystems, 2006] Sun Microsystems, N1 Grid System, 2006, https://www.sun.com/software/gridware.
[Takayama and Kandogan, 2006] Takayama, L. and E. Kandogan, ``Trust as an Underlying Factor of System Administrator Interface Choice,'' Conference on Human Factors in Computing Systems, 2006.
[Wixom and Todd, 2005] Wixom, B. H. and P. A. Todd, ``A Theoretical Integration of User Satisfaction and Technology Acceptance,'' Information Systems Research, 2005.