About the Author(s)


Renier Steyn
Graduate School of Business Leadership, University of South Africa, South Africa

Citation


Steyn, R., 2017, ‘The psychometric properties of a shortened corporate entrepreneurship assessment instrument’, Southern African Journal of Entrepreneurship and Small Business Management 9(1), a123. https://doi.org/10.4102/sajesbm.v9i1.123

Original Research

The psychometric properties of a shortened corporate entrepreneurship assessment instrument

Renier Steyn

Received: 06 Feb. 2017; Accepted: 19 June 2017; Published: 18 Aug. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: The entrepreneurial climate in organisations is often seen as an important antecedent to innovation and organisational success. Assessing the nature of the climate in a reliable and valid manner is essential, as this will guide the implementation of appropriate interventions where necessary as well as assessing the effects of such interventions.

Aim: The aim of this research was to evaluate the psychometric properties of a measure of entrepreneurial climate. Entrepreneurial climate was measured using a shortened version of the Hornsby, Kuratko and Zahra (2002) instrument, called the Corporate Entrepreneurship Assessment Instrument (CEAI). Making information on the psychometric properties of the instrument available directly relates to its utility.

Setting: The setting was medium to large South African companies. A random sample of employees was drawn from 53 selected companies across South Africa, with 60 respondents per company (N = 3 180).

Methods: A cross-sectional survey design was used. Several instruments were administered, including the shortened version of the CEAI. Cronbach’s alpha was used to test for reliability and several methods were used to test for validity. Correlation analysis was used to test for concurrent validity, convergent validity and divergent validity. Principal component factor analysis was used to test for factorial validity and analysis of variance to test for known-group validity.

Results: The results showed that the reliability for the total score of the shortened version of the CEAI was acceptable at 0.758. The results also showed some evidence of concurrent validity, as well as homogeneity among the items. With regard to factorial validity, all items loaded in accordance with the subscales of the instrument. The measure was able to distinguish, as expected, between government organisations and private business entities, suggesting known-group validity. Convergent validity and divergent validity were also assessed. It is interesting to note that entrepreneurial climate correlated more strongly with general employee attitudes (e.g. employee engagement, R = 0.420, p < 0.001, and organisational commitment, R = 0.331, p < 0.001) than with self-reported innovation (R = 0.277, p < 0.001 and R = 0.267, p < 0.001).

Contribution: This paper not only provides information on the reliability and validity of the shortened version of the CEAI in the South African context but also provides norms to be used when researchers or consultants work with smaller groups. Recommendations on the appropriate use of the instrument are offered, which contributes to the responsible use of the instrument.

Introduction

Though some confusion exists on the exact meaning of innovation in the workplace (Hind & Steyn 2015), definitions of the concept are abundant. García-Morales, Lloréns-Montes and Verdú-Jove (2008) describe innovation as new ideas, methods or devices, or acts of creating new products, services or processes. Similarly, Golla and Johnson (2013) use the term in relation to products and define it as the introduction to the market of new goods or services with distinct characteristics. Overstreet et al. (2013) describe innovativeness as the propensity of an organisation to deviate from conventional industry practices by creating or adopting new products, processes or systems.

Irrespective of the differences in the exact definition of innovation, it is seen as important and considered to be an essential component for competitiveness and survival, embedded in organisational structures, processes, products and services within the organisation (Gunday et al. 2011). It is therefore not surprising that innovation is perceived by many scholars as one of the most important determinants of firm performance (Adegoke, Walumbwa & Myers 2012; Durán-Vázquez, Lorenzo-Valdés & Moreno-Quezada 2012; Grant 2012).

The climate in organisations is appreciated by many as an important antecedent to innovation and organisational success (Choi, Moon & Ko 2013; Crespell & Hansen 2008; Hornsby, Kuratko & Zahra 2002; Nusair 2013; Nybakk & Jenssen 2012; Panuwatwanich, Stewart & Mohamed 2008). Assessing the nature of the climate accurately is necessary, as the absence of effective measures may be detrimental to making informed decisions. This is particularly true in instances where (costly) interventions are considered or when the effects of such interventions are evaluated. Additionally, accurate and valid measurement should underpin all responsible decisions that are based on psychometric instruments (American Educational Research Association, American Psychological Association & National Council on Measurement in Education 1999; Cohen, Swerdlik & Sturman 2013; Moerdyk 2015).

Objectives

The primary objective of this research was to evaluate the psychometric properties of a measure of entrepreneurial climate. The Hornsby et al. (2002) measure of entrepreneurial climate (Corporate Entrepreneurship Assessment Instrument, CEAI) is very often referred to and used (Bhardwaj 2012; Brazeal, Schenkel & Kumar 2014; De Villiers-Scheepers 2012; Hajipour & Mas’oomi 2011; Holt, Rutherford & Clohessy 2007; Hornsby et al. 2013; Karimi et al. 2011; Kuratko & Audretsch 2013; Marzban, Seyed & Ramezan 2013; Nikolov & Urban 2013). This specific measure forms the focus of this research. In the study, the psychometric properties of a shortened version of this instrument, as proposed by Strydom (2013), are assessed. The shortened version of the CEAI consists of 20 items, compared to the 48 items of the original instrument. Little is known about the psychometric properties of this shortened instrument. Some evidence supports the replicability of the CEAI structure in a Western context (Holt et al. 2007; Hornsby et al. 2002) and other studies investigated the replicability of the model in Africa (Kamffer 2004; Strydom 2013; Van Wyk & Adonisi 2011). The results were mixed, and Van Wyk and Adonisi (2011) failed to replicate the CEAI structure among African participants. These considerations further motivate this research.

Literature review

The literature review comprises two parts, namely reliability and validity. Both reliability and validity are essential for effective measurement (American Educational Research Association et al. 1999; Cohen et al. 2013; Gregory 2011; Moerdyk 2015). The aim of the literature review was to explain the way reliability and validity are conceptualised and assessed.

Reliability

Many types of reliability are reported in the literature, including test–retest reliability, split-half reliability, parallel-forms reliability and internal consistency (Cohen et al. 2013; Moerdyk 2015). Irrespective of the name or method used to calculate the value, the primary aim of a reliability measure is to assess the consistency of the scores generated (Shaughnessy, Zechmeister & Zechmeister 2009; Tredoux & Durrheim 2013). The type of reliability most often used is internal consistency (Cronbach 1951; Cronbach & Shavelson 2004; Kaiser & Michael 1975; Lord & Novick 1968; Novick & Lewis 1967), and it is expressed as coefficient alpha. Coefficient alpha, also known as Cronbach’s alpha, is the mean of all the possible split-half reliability coefficients, corrected by the Spearman–Brown formula (see Gregory 2011). Though Cronbach’s coefficient alpha is widely used to measure reliability (Cronbach & Shavelson 2004; Peterson 1994), it is also often criticised (Cho & Kim 2015; Sijtsma 2009), including for being treated as a comprehensive measure of reliability (Cronbach & Shavelson 2004). Coefficient alpha, an index of internal consistency, is used because a test with high internal consistency tends to have stable scores, similar to those achieved by tests with high test–retest reliability (Gregory 2011). Furthermore, its use is widespread, its calculation is standard in most statistical packages, and Cronbach’s alpha is well debated in the academic literature (Cronbach & Shavelson 2004; Nicholls et al. 2017).
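
To make the calculation concrete, coefficient alpha can be computed directly from an item-score matrix using the standard formula α = k/(k − 1) × (1 − Σσ²ᵢ/σ²ₜₒₜₐₗ). The study itself used SPSS; the following Python sketch, with purely illustrative data and function names, is offered only to show the computation:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative data: six respondents answering four Likert-type items
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 2, 3],
                   [5, 5, 5, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [3, 2, 3, 3]])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```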

What constitutes an acceptable coefficient alpha is a matter of debate. Guilford and Benjamin (1978) suggest that very accurate measures of personal differences require reliability above 0.90, but add that scales with reliabilities as low as 0.70 prove to be very useful. They also state that reliabilities lower than 0.70 can be helpful in research, where accuracy is not as important as when personal decisions are made. Hair, Black, Babin and Anderson (2010) suggest 0.60 to 0.70 as the lower limit. This is in line with what Clark and Watson (1995) and Nunnally and Bernstein (1994) suggest. Spatz and Kardas (2008) set the mark at 0.80. Field (2009) notes that coefficients of 0.70 and 0.80 are often mentioned as acceptable in publications, and that the type of instrument used and the number of items in the scale should play a role in interpreting the calculated values. He reports that for cognitive tests 0.80 could be set as the lower value, while a value even below 0.70 could be acceptable for measures of psychological constructs. However, high coefficients are difficult to obtain when a scale consists of only a few items (Field 2009; Pallant 2010). From the aforementioned points, it is clear that 0.60 may serve as the absolute lower bound and that at practitioner level, where real-life decisions are made, 0.70 should constitute the cut-off score.
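
The dependence of alpha on scale length, noted by Field (2009) and Pallant (2010) above, can be made concrete with the Spearman–Brown prophecy formula already mentioned. A minimal sketch with illustrative values:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test is lengthened by `length_factor`
    with items comparable to the existing ones (Spearman-Brown prophecy)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# A four-item subscale with alpha = 0.60, tripled to 12 comparable items:
print(round(spearman_brown(0.60, 3), 3))  # 0.818
```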

Validity

Definitions of validity vary. The latest standards for educational and psychological testing, jointly developed by the American Educational Research Association, American Psychological Association and the National Council on Measurement in Education, emphasise the use of tests in their definition of validity: ‘Validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests’ (American Educational Research Association et al. 1999:9). Commenting on this definition, Newton (2012) refers to a test as valid if the assessment-based decision-making procedure, following from interpreting the contextual assessment outcomes, is a measure of the attribute involved in the decision. Others focus more on the test itself and state that it is ‘… the judgement or estimate of how well a test measures what it purports to measure in a specific context’ (Cohen et al. 2013:181) or that it ‘… is a unitary concept determined by the extent to which a test measures what it purports to measure’ (Gregory 2011:111). The definition of Moerdyk (2015:47) similarly focuses on the instrument itself: ‘Validity is the ratio of the relevant score to the total or observed score’ (Moerdyk 2015:47). The definitions differ. Some researchers emphasise the appropriate use of the tests while others focus on the appropriateness of the test itself. These authors, however, do not fundamentally differ in their viewpoints, as Gregory (2011) refers to ‘appropriateness’ of use, Cohen et al. (2013) to ‘appropriateness of inferences’ and Moerdyk (2015) to ‘validity generalisation’. As such, the validity of a test could be seen as the capability of the test (or test procedure) to assess a construct in such a way as to allow a responsible professional the means to apply the obtained scores in an appropriate manner.

Three types of validity have traditionally been identified, namely content validity, criterion-related validity and construct validity. The classic trinitarian view of validity is still common among contemporary authors on psychometrics (Cohen et al. 2013; Gregory 2011; Moerdyk 2015) and was also followed in this review. Although separable, content validity and criterion-related validity could be viewed as supportive evidence in the cumulative quest for construct validity (Gregory 2011).

Content validity: reflects a judgement of the degree to which questions, tasks or items on a test are adequately representative of the universe of behaviour the test was designed to sample (Cohen et al. 2013; Gregory 2011). Face validity is a special case of content validity. Where face validity concerns whether the assessment technique appears appropriate to those who are assessed (Moerdyk 2015), content validity is normally judged by subject matter experts (Cohen et al. 2013; Gregory 2011). Though techniques used to assess the magnitude of content validity differ (see Lawshe 1975; Martuza 1977; Polit & Beck 2006; Wilson, Pan & Schumsky 2012), they basically consist of measures of agreement between experts on the appropriateness of items. Important to note is that these ratios or coefficients reflect the validity of the items included in the assessment and tell us nothing about the items which should be included to make the existing pool of items representative of the universe of behaviour that the test was designed to assess (Gregory 2011).

Criterion-related validity: is demonstrated when a measure is effective in estimating the test-taker’s performance on some outcome measure (Gregory 2011), with the outcome measure being the criterion. Stated differently, it is ‘a judgement of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest’ (Cohen et al. 2013:190). Many authors (Cohen et al. 2013; DeVellis 2012; Gregory 2011; Moerdyk 2015) state that concurrent and predictive validity subsume under criterion-related validity. Consensus exists among the aforementioned authors that the basic difference between the types is the time at which data on the criterion are collected. For concurrent validity, criterion measures are obtained at approximately the same time as the test scores, while for predictive validity the criterion measures are obtained at a later stage. Concurrent validity therefore ‘indicate(s) the extent to which test scores may be used to estimate an individual’s present standing on a criterion’ (Cohen et al. 2013:191) and predictive validity determines how accurately the measure can predict future events. Though a simple correlation between the test score and the criterion is often referred to as a validity coefficient (Cohen et al. 2013; DeVellis 2012; Gregory 2011; Moerdyk 2015), the standard error of estimate (see Gregory 2011), sensitivity tests (see DeVellis 2012) and calculating the coefficient of determination (Moerdyk 2015) are also suggested. Correlation coefficients are most often mentioned and used. Moerdyk (2015:52) states that ‘in practice, validity coefficients above 0.5 are acceptable, and in case of selection criteria, validity coefficients as low as 0.3 and even 0.2 are acceptable’. Cohen et al. (2013:195) refer to the seminal work of Cronbach and Gleser (1965), caution against the use of rules and state that ‘validity coefficients should be large enough to enable the test user to make accurate decisions within the unique context in which the test is being used’.
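
To illustrate the relationship between a validity coefficient and Moerdyk’s (2015) coefficient of determination, consider the following sketch; the data are synthetic and the numbers carry no substantive meaning:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
test_scores = rng.normal(size=200)                    # hypothetical test scores
criterion = 0.4 * test_scores + rng.normal(size=200)  # hypothetical criterion measure

r, p = stats.pearsonr(test_scores, criterion)
print(f"validity coefficient r = {r:.2f} (p = {p:.4f})")
print(f"coefficient of determination r^2 = {r**2:.2f}")  # proportion of shared variance
```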

Construct validity: is the extent to which a measure ‘behaves’ in the way that the construct it purports to measure should behave in relation to other constructs (DeVellis 2012). Moerdyk (2015:47) uses theoretical validity as a synonym for construct validity and states that the basic question of construct validity is whether the assessment procedure results are in line with what is already known (or theorised). A similar notion is presented by Cohen et al. (2013), who explain that a test is valid when individuals with high scores and low scores on a test behave as predicted by the theory about the construct. As indicated earlier, content validity and criterion-related validity could be viewed as supportive evidence in the cumulative quest for construct validity (Gregory 2011). In fact, ‘to evaluate the construct validity of a test, we must amass a variety of evidence from numerous sources’ (Gregory 2011:119). The following are (further) measures of construct validity:

  • The homogeneity of the test (Cohen et al. 2013) or subtest (Gregory 2011). Such an analysis will reveal if a single construct is measured (Cohen et al. 2013; Gregory 2011). The correlation of the individual items with the total score (Cohen et al. 2013; Gregory 2011) and the coefficient alpha (Cohen et al. 2013) could be used in estimating how uniform a test is in measuring the construct of interest.
  • Factorial validity assessment is based on the results of factor analysis. The primary purpose of a factor analysis is to define the underlying structure among the variables included in the analysis (Hair et al. 2010). The variables included in the analysis could be items from a single test, items from multiple tests or (total) scores from a battery of tests. When the instrument internally displays the expected structure, this could be indicative of construct validity (Moerdyk 2015). Furthermore, when items of various tests load on different factors, or when scores of a battery of tests load on factors in a theoretically consistent manner (Gregory 2011), it could be indicative of construct validity.
  • Construct validity can also be derived from temporal changes. If temporal changes in test scores are consistent with theory, for example, when test scores differ as a function of developmental changes or increase or decrease resulting from an intervention to which the person was exposed (Cohen et al. 2013; Gregory 2011), construct validity could be argued. In the first case, mentioned above, we may expect that young people score higher on an intelligence test than older people, and in the second case we may expect that following a therapeutic intervention individuals score lower on depression than before.
  • Closely aligned to the aforementioned discussion is what Moerdyk (2015) called known-group validation. Known-group validity is demonstrated when a scale differentiates between existing groups in accordance with theory (Cohen et al. 2013; Gregory 2011; Moerdyk 2015). It is to be expected, for example, that individuals in positions of authority show higher scores on an effective leadership scale than those who have just started their careers. Related to this is the ability of a test to accurately classify individuals, which leads to the matter of test sensitivity and test specificity (Gregory 2011; see the sketch following this list). Test sensitivity, within the context of selection, refers to the percentage of correctly selected individuals, whereas test specificity is reflected in the percentage of correctly rejected individuals.
  • A fourth test of construct validity is when ‘test scores correlate with scores on other tests in accordance with what would be predicted from a theory that covers the manifestation of the construct in question’ (Cohen et al. 2013:199). Here the terms convergent and divergent validity are used. The former refers to a high correlation with a construct with which the measure overlaps (Gregory 2011), including an older version or an alternative version of the test (Cohen et al. 2013). Gregory (2011) does not set any rules on what an acceptable correlation could be, but refers to 0.5 as an example of a hefty correlation (suggesting that this could be highly acceptable). The latter, discriminant validity, refers to a situation where the achieved score does not correlate, in line with theory, with measures of unrelated constructs. Cohen et al. (2013) write about a non-significant correlation as evidence of discriminant validity. Cohen et al. (2013) also state that factor analysis could be used to judge convergent and discriminant validity. Similar constructs (or items) should load on the same factor and items from dissimilar constructs should load on other factors. The ambitious and seldom emulated multitrait–multimethod matrix (proposed by Campbell and Fiske [1959]) is an alternative to consider in making judgements on convergent and discriminant validity (Gregory 2011).
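
A minimal sketch of the sensitivity and specificity calculations referred to in the known-group discussion above; the counts are hypothetical:

```python
def sensitivity_specificity(tp: int, fp: int, tn: int, fn: int) -> tuple:
    """Classification accuracy of a test used for selection decisions."""
    sensitivity = tp / (tp + fn)  # proportion of suitable candidates correctly selected
    specificity = tn / (tn + fp)  # proportion of unsuitable candidates correctly rejected
    return sensitivity, specificity

# Hypothetical selection outcome: 40 true and 10 false positives,
# 35 true negatives and 15 false negatives
print(sensitivity_specificity(tp=40, fp=10, tn=35, fn=15))  # approx. (0.727, 0.778)
```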

From the aforementioned discussion it is clear that construct validity is a complex matter and judgement on the construct validity of a test ought to be the result of integrating several sources of information.

Methods

In this section, the respondents, the procedure, the measuring instruments, the data analysis and the ethical considerations are discussed.

Respondents

To be included in the study, respondents needed to be employed at a large South African organisation, with a workforce of at least 60 employees. Several organisations were approached and 53 companies were eventually willing to participate. In total 60 respondents were randomly selected from each of these organisations. This represented a convenience sample of South African organisations, but a random sample of employees. More detail about the respondents is reported in the findings section.

Procedure

The sampling of the respondents is discussed above. The data were generated from paper-and-pencil tests, completed in organisations where permission was granted by the appropriate authorities, and all respondents gave consent. These data were not primarily collected for this research; they are archival data collected by the author as part of a larger research project. The ethical clearance obtained allows the author to use the data in further analyses and to publish academic articles based on the data. This use of the data was clearly stated in the permission letters as well as the consent forms. After cleaning the data, appropriate statistics were calculated. Cleaning was limited to removing out-of-range values and replacing them with missing values. The statistical techniques used are described in the section ‘Data analysis’.

Measuring instruments

Eight instruments were administered.

A shortened version of the CEAI (Hornsby et al. 2002) was used. This instrument measures five constructs, namely the level of management support, work discretion and autonomy, rewards and reinforcement, time availability and organisational boundaries (Hornsby et al. 2002). Kuratko, Hornsby and Covin (2014:119) explain what is measured with each factor:

  • Top management support: The extent to which employees perceive that top managers support, facilitate and promote entrepreneurial behaviour. This includes top management’s championing innovative ideas and providing the resources required for entrepreneurial actions.
  • Work discretion: The extent to which employees perceive that the organisation tolerates experimentation (and failure). Furthermore, work discretion relates to decision-making autonomy, freedom from unwarranted oversight, and the delegation of authority and responsibility to lower-level managers and workers.
  • Rewards and reinforcement: The extent to which employees perceive that the organisation uses systems which reward entrepreneurial activity and success.
  • Time availability: The extent to which employees experience their job’s structure in such a way that unstructured or free time is available to allow individual employees or groups to pursue innovations.
  • Organisational boundaries: The extent to which employees perceive that organisational boundaries are flexible and allow the flow of information within the organisation and between the organisation and the external environment. Flexible but clear boundaries are tested for. Boundaries induce, direct and encourage coordinated innovative behaviour.

The shortened version proposed and used by Strydom (2013) was applied in this research. Where the original questionnaire consists of 48 items, the shortened version consists of 20 items, 4 items per construct. The items in the shortened version were selected from the original 48 on the basis of their loadings on the factor representing each subscale: the four items with the highest loadings per factor were retained. Substantial work on the factorial validity of the original instrument has been done. Hornsby et al. (2002) report the results of an analysis of the five-factor CEAI solution, which showed Cronbach’s alphas of 0.92, 0.86, 0.75, 0.77 and 0.69 for the dimensions as listed above. Kamffer (2004) found similar alphas of 0.88, 0.80, 0.62, 0.71 and 0.77. Strydom (2013), using his shortened version of the CEAI, found alphas of 0.73, 0.82, 0.74, 0.68 and 0.57. The items of the CEAI were presented as statements, such as the following: ‘Individual risk takers are often recognised for their willingness to champion new projects, whether eventually successful or not’. Respondents were asked to respond to the statements by selecting one of five options, namely: strongly agree (5), agree (4), undecided (3), disagree (2) or strongly disagree (1). A high score on any particular factor of the CEAI would be indicative of a climate that is conducive to entrepreneurial activity, and a low score would suggest circumstances that hamper entrepreneurial activity. An overall high score would suggest the presence of a positive entrepreneurial climate.
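
The scoring logic described above is straightforward to implement. The sketch below assumes the 20 item responses are columns of a DataFrame; the column names are hypothetical placeholders, as the article does not prescribe a data layout:

```python
import pandas as pd

# Hypothetical column names: four items per subscale, each scored 1 to 5
SUBSCALES = {
    'management_support':        ['ms1', 'ms2', 'ms3', 'ms4'],
    'work_discretion':           ['wd1', 'wd2', 'wd3', 'wd4'],
    'rewards_reinforcement':     ['rr1', 'rr2', 'rr3', 'rr4'],
    'time_availability':         ['ta1', 'ta2', 'ta3', 'ta4'],
    'organisational_boundaries': ['ob1', 'ob2', 'ob3', 'ob4'],
}

def score_ceai(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum the four items per subscale (range 4-20) and all 20 items
    for the total score (range 20-100)."""
    scores = pd.DataFrame(index=responses.index)
    for subscale, items in SUBSCALES.items():
        scores[subscale] = responses[items].sum(axis=1)
    scores['ceai_total'] = scores[list(SUBSCALES)].sum(axis=1)
    return scores
```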

The Utrecht Work Engagement Scale-9 (UWES-9; Schaufeli & Bakker 2004) is a summative assessment of vigour, dedication and absorption. The UWES is mentioned as the most often used self-report measure of engagement and has been validated in many countries around the world (Bakker et al. 2008). The questionnaire consists of nine items. Schaufeli and Bakker (2004:33) report that the ‘Cronbach’s α of all nine items varies from 0.85 to 0.94 (median = 0.91) across the nine national samples. The α-value for the total data base is 0.90’. With regard to validity, Schaufeli, Bakker and Salanova (2006) claim that the suggested three-factor structure of engagement is confirmed (across samples from different countries) and that the construct is related to other constructs in the expected manner. This suggests construct validity. The following is a typical item from the scale: ‘At my work, I feel bursting with energy’. Respondents are requested to indicate their views on this statement on a scale ranging from 0 (never) to 6 (every day). The minimum total score is 0 and the maximum 54. A high score on the survey would indicate high levels of engagement and a low score would indicate that the respondents are not engaged.

The Organizational Commitment Scale (Allen & Meyer 1990) was used to assess organisational commitment. The authors identify affective, continuance and normative commitment as components of commitment. The full scale consists of 24 questions. Allen and Meyer (1990) report an internal consistency of 0.86, 0.82 and 0.73 for the three subscales. Furthermore, Allen and Meyer (1990:13) report evidence of construct validity and also comment that the ‘relationship between commitment measures … and the antecedent variables … was, for the most part, consistent with prediction’. This points to convergent and discriminant validity. The first item of the scale reads as follows: ‘I would be very happy to spend the rest of my career with this organisation’. Respondents are requested to indicate their views on this statement on a scale ranging from 1 (strongly disagree) to 7 (strongly agree). A high score on the scale indicates high levels of commitment and low scores signify low commitment. For the purpose of this study, only the eight items of the Affective Commitment Scale were used.

The Innovative Work Behaviour (IWB) questionnaire was developed by de Jong and den Hartog (2010) to assess the four dimensions they hypothesised to relate to workplace innovation, namely exploration, generation, championing and implementation of ideas. De Jong and den Hartog (2010) state that their analyses demonstrated sufficient reliability and criterion validity. However, they did not find proof of dimensionality in their questionnaire and suggest that it should be used as a one-dimensional construct. The questionnaire was used as presented in the article, with the exception that the stem of the questions was changed from ‘How often does this employee …’ to ‘As an employee how often do you …’ The questionnaire contains 10 items. The first reads as follows: ‘As an employee how often do you pay attention to issues that are not part of your daily work?’ Responses were on a seven-point scale, from 0 (never) to 6 (always). A high score on the scale indicates high levels of innovation in the workplace while low scores indicate low levels of innovation.

Kleysen and Street (2001) hypothesised that individual innovative behaviours (IIBs) consist of five dimensions, namely opportunity exploration, generativity, formative investigation, championing and application. They developed a 14-item questionnaire which assesses these dimensions, called the IIB. As with de Jong and den Hartog (2010), they were unable to confirm their hypothesised dimensionality, but suggest that including a variety of items contributes to a better understanding of the construct. The coefficient alphas for the subscales were 0.791, 0.791, 0.802, 0.893 and 0.796. They report an inter-correlation of 0.945 between the items. As such, they suggest that the items ‘can be combined into a single measure of innovation behaviour … with good construct validity’ (Kleysen & Street 2001:293). The first item of the scale reads as follows: ‘In your current job, how often do you … look for opportunities to improve an existing process, technology, product, service or work relationship?’ Respondents were asked to respond on a six-point scale, ranging from 1 (never) to 6 (always). A high score would then be indicative of high levels of innovative behaviour, whereas a low score would suggest the absence of innovative behaviour.

The Quality of Performance Appraisal Systems Questionnaire (QPASQ) was used to assess the perceived effectiveness of the (traditionally defined) performance appraisal systems in organisations. The QPASQ was developed by Steyn (2010) and is based on human resources management literature, which explains the characteristics of an effective performance appraisal system. Most items were borrowed from Grobler, Wärnick, Carrell, Elbert and Hatfield (2006), who provide a comprehensive list of requirements for an effective performance appraisal system. The items cover the following elements: relevance, reliability, freedom from contamination, discriminability or sensitivity, practicality, acceptedness, labour legislation requirements, specificity, (desired) outcomes, appropriateness and contracting. Steyn (2010) reports internal consistency (Cronbach’s alpha) of 0.84 and significant correlations (in the expected direction) with other workplace attitudes. The questionnaire used in this research consisted of 18 items, with the first question reading as follows: ‘The performance appraisal system at my organisation is the primary mechanism used to assess the performance of the employees’. Respondents were requested to indicate their views on this statement on a scale ranging from 1 (Absolutely false – this is true in ±10% of all cases) to 5 (Absolutely true – this is true in ±90% of all cases). A high score would indicate that a traditionally defined performance appraisal system is in place and functioning effectively, while a low score would indicate that the respondents were not of the opinion that a traditionally defined effective performance appraisal system was functioning in their organisation.

The Human Resource Practices Scale (Nyawose 2009) was used to measure the perceived effectiveness of human resource practices, with three questions per practice. Seven HR practices were assessed in this study, namely training and development, compensation and rewards, performance management, supervisor support, staffing, diversity management, as well as communication and information sharing. Nyawose (2009) reports internal consistencies varying from 0.74 to 0.93 for these scales and significant correlations (in the expected direction) with outcomes such as occupational commitment and turnover intentions. Steyn (2012) reports alphas varying between 0.74 and 0.88 and significant correlations (in the expected direction) with outcomes such as job satisfaction, employee engagement, occupational commitment and turnover intentions. The following is the first question from the training and development part of the scale: ‘My company is committed to the training and development needs of its employees’. Respondents were requested to indicate their views on this statement on a scale ranging from 1 (disagree strongly) to 5 (agree strongly). For each individual HR practice, the minimum score would be 3 and the maximum 15. A high score on the survey would be indicative of a belief that HR practices were effective, whereas a low score would indicate that the respondents were not satisfied with the HR practices provided.

The Multifactor Leadership Questionnaire (Avolio, Bass & Jung 1995) was used in the study. The questionnaire measures aspects of transformational leadership (12 items) and transactional leadership (6 items), as well as a laissez-faire leadership style (3 items). Extensive research on the instrument indicates acceptable reliability as well as validity (Antonakis, Avolio & Sivasubramaniam 2003; Avolio, Bass & Jung 1999; Bono & Judge 2004; Muenjohn & Armstrong 2008). Respondents were asked to indicate their levels of agreement with statements such as: ‘My manager makes others feel good to be around him/her’. Respondents were asked to indicate how often this behaviour was present in their managers, where (0) indicates ‘Not at all’, (1) ‘Once in a while’, (2) ‘Sometimes’, (3) ‘Fairly often’ or (4) ‘Frequently, if not always’. A high score on a specific scale would be indicative of a workplace where that type of leadership style is often displayed, while a low score would be indicative of the absence of such leadership.

Data analysis

Demographic information about the sample, as well as descriptive statistics on the instrument of interest, the shortened version of the CEAI, were calculated. The mean, standard error of the mean, standard deviation, skewness and kurtosis for the CEAI are presented. With regard to kurtosis, for a sample of 200, heavier tails (platykurtic shape) are indicated by values below -0.47 and a sharper peak (leptokurtic shape) is indicated by values higher than 0.62 (Doane & Seward 2009). For a sample of 200, the lower limit for skewness (skewed to the left) is -0.281 and the upper limit (skewed to the right) is 0.281. These cut-off scores were used in making comments with regard to the normality of the distribution.
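
These descriptive statistics can be reproduced with standard libraries. A sketch follows, with synthetic scores in place of the study’s data; note that the Doane and Seward (2009) cut-offs refer to excess kurtosis, which is zero for a normal distribution:

```python
import numpy as np
from scipy import stats

def describe(scores: np.ndarray) -> dict:
    """Descriptive statistics of the kind reported for the CEAI."""
    return {
        'mean': scores.mean(),
        'se_mean': stats.sem(scores),
        'sd': scores.std(ddof=1),
        'skewness': stats.skew(scores, bias=False),
        # Fisher (excess) kurtosis: zero for a normal distribution,
        # matching the -0.47 / 0.62 cut-offs cited above
        'kurtosis': stats.kurtosis(scores, fisher=True, bias=False),
    }

totals = np.random.default_rng(0).normal(66, 9, size=3180)  # synthetic total scores
print({k: round(float(v), 3) for k, v in describe(totals).items()})
```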

Next, calculations were done concerning the reliability of the instrument. Cronbach’s alpha coefficient and the strict parallel method for calculating reliability, as generated through the SPSS-23 programme, were calculated. As many authors (Clark & Watson 1995; Field 2009; Guilford & Benjamin 1978; Hair et al. 2010; Nunnally & Bernstein 1994) note that coefficients between 0.6 and 0.8 are acceptable, the margin in this research was set at 0.70. This norm was applied to all the calculations pertaining to reliability.

Several calculations were done with regard to gathering information on validity. The first related to criterion validity. To test for concurrent criterion-related validity, the correlation between the CEAI, as independent variable, and the IWB and IIB, as dependent variables, was calculated. A statistically significant correlation (p < 0.01) of a medium size (larger than 0.3, as defined by Cohen 1988) was set as a minimum indicator of concurrent validity.
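
A sketch of this decision rule follows; the study’s calculations were done in SPSS, and this Python version only illustrates the logic:

```python
from scipy import stats

def concurrent_validity(test_scores, criterion_scores) -> bool:
    """Decision rule used here: statistically significant (p < 0.01)
    and at least a medium effect (r > 0.3; Cohen 1988)."""
    r, p = stats.pearsonr(test_scores, criterion_scores)
    print(f"r = {r:.3f}, p = {p:.4f}")
    return p < 0.01 and r > 0.3
```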

The rest of the analyses concerned construct validity. Firstly, the homogeneity of the items was tested. The common inter-item correlation, for the entire scale as well as for the five subscales of the CEAI, was calculated. Guidelines on the size of an acceptable inter-item correlation vary, but for this research the range was set in line with Clark and Watson’s (1995) guidelines, which are 0.15 < R < 0.50. Should this correlation be too high, the items are too similar; should it be too low, the items are not sufficiently related.
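
The common inter-item correlation is simply the mean of the off-diagonal entries of the item correlation matrix; a brief sketch, assuming the items are columns of a DataFrame:

```python
import numpy as np
import pandas as pd

def mean_inter_item_r(items: pd.DataFrame) -> float:
    """Mean of the off-diagonal entries of the item correlation matrix."""
    corr = items.corr().to_numpy()
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return float(off_diagonal.mean())

# Decision rule applied here: 0.15 < mean r < 0.50 (Clark & Watson 1995)
```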

Factorial validity was assessed by testing whether the different subscales of the CEAI loaded on different factors. This is a simple analysis to test whether the items of a subtest correlate more with the subtest’s own items than with items from another subtest (Nunnally & Bernstein 1994). Before performing this procedure, the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was computed, as well as Bartlett’s test of sphericity. The standard of acceptability for the KMO is a value above 0.7 (Field 2009; Hair et al. 2010). In the case of Bartlett’s test, the statistic generated should be statistically significant (p < 0.05) (Pallant 2010). Only if these results were acceptable would a principal component analysis be performed. The standard criterion of eigenvalues greater than 1 was used for factor extraction. Ideally, five components, representing the five subscales of the CEAI, would be identified. The Varimax method with Kaiser normalisation was then performed and values smaller than 0.4 were suppressed, to make interpretation easier. Ideally, 80% of the items would load on the appropriate factors (subscales).
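
The sequence of checks described above can be sketched as follows, using the third-party factor_analyzer package rather than SPSS. For brevity, the number of factors is fixed at five here, whereas the study extracted factors on the eigenvalue-greater-than-1 criterion:

```python
# Requires the third-party factor_analyzer package (pip install factor-analyzer)
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

def ceai_factor_solution(items: pd.DataFrame) -> pd.DataFrame:
    """KMO and Bartlett checks, then a five-factor principal component
    solution with Varimax rotation; loadings below 0.4 are suppressed."""
    chi_square, p_value = calculate_bartlett_sphericity(items)
    _, kmo_total = calculate_kmo(items)
    assert kmo_total > 0.7 and p_value < 0.05, 'sampling adequacy not met'

    fa = FactorAnalyzer(n_factors=5, method='principal', rotation='varimax')
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    return loadings.where(loadings.abs() >= 0.4)
```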

Tests of known-group differences were conducted next. Firstly, it was tested whether government organisations (including parastatal entities) show less corporate entrepreneurship than private business entities, and secondly whether managers show more corporate entrepreneurship than non-managers. One-way analysis of variance was performed in both cases. In the case of multiple groups, the Scheffé post hoc test was performed to detect which groups differed from each other. Statistical significance of differences (p < 0.05) between groups was seen as sufficient evidence of known-group validity.
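
A sketch of the one-way ANOVA followed by a pairwise Scheffé comparison; again, the study used SPSS, and this Python version merely illustrates the logic (the groups are assumed to be NumPy arrays of total scores):

```python
import numpy as np
from scipy import stats

def scheffe_pairwise(groups: list, i: int, j: int, alpha: float = 0.05) -> bool:
    """Scheffe post hoc comparison of groups i and j after a one-way ANOVA;
    returns True when the pair differs significantly."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_within = ss_within / (n_total - k)  # ANOVA error term (mean square within)
    diff = groups[i].mean() - groups[j].mean()
    f_stat = diff ** 2 / (ms_within * (1 / len(groups[i]) + 1 / len(groups[j])))
    f_crit = (k - 1) * stats.f.ppf(1 - alpha, k - 1, n_total - k)
    return f_stat > f_crit

# Overall test first, e.g.:
# f, p = stats.f_oneway(private, parastatal, government)
# then: scheffe_pairwise([private, parastatal, government], 0, 2)
```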

Information on convergent and, to a lesser extent, divergent validity was generated by calculating the correlation between the CEAI and several other measures. Firstly, it was hypothesised that corporate entrepreneurship would correlate more strongly with innovative behaviour than with generic organisational attitudes (employee engagement and organisational commitment). It was hypothesised that the CEAI would correlate more with active forms of leadership (transformational and transactional) than with more passive forms of leadership (laissez-faire). Also, it was hypothesised that the CEAI is related to performance and, as such, would correlate more with a measure of effective performance management than with a general measure of human resources management. Lastly, it was hypothesised that the CEAI would correlate more with attitudes towards the job (employee engagement) than with attitudes towards the organisation (organisational commitment). A correlation of 0.5 would be seen as a clear sign of convergence, following Gregory’s (2011) example, and a non-significant correlation as evidence of divergent validity, relying on Cohen et al. (2013). However, in this case divergent validity was not the main concern, and differences in correlations, as hypothesised, were used as indicative evidence of validity.

Factor analysis was also performed to test for divergent validity. This was done by forcing the CEAI and each of the other measures used in this study into a two-factor solution, and then performing a Varimax rotation with Kaiser normalisation. After suppressing values smaller than 0.4, to make interpretation easier, the percentage of items that loaded (correctly) on the appropriate factor was calculated. Should 70% of the items with loadings of 0.4 load ‘correctly’, it would be interpreted as signalling divergent validity. Should this percentage not be achieved, it would point to a lack of divergent validity. The tolerance for items loading ‘incorrectly’, indicative of poor divergent validity, was set at 10%. Thus, should more than 10% of items load ‘incorrectly’, the instrument would be seen as invalid from a factorial point of view.
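
A sketch of this forced two-factor procedure, again using the factor_analyzer package; the 70% and 10% counting rules described above would then be applied to the returned loadings:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def two_factor_loadings(ceai_items: pd.DataFrame,
                        other_items: pd.DataFrame) -> pd.DataFrame:
    """Force a two-factor Varimax solution on the pooled item set, suppressing
    loadings below 0.4. Divergent validity is suggested when the CEAI items and
    the other scale's items load on separate factors."""
    pooled = pd.concat([ceai_items, other_items], axis=1)
    fa = FactorAnalyzer(n_factors=2, method='principal', rotation='varimax')
    fa.fit(pooled)
    loadings = pd.DataFrame(fa.loadings_, index=pooled.columns,
                            columns=['factor_1', 'factor_2'])
    return loadings.where(loadings.abs() >= 0.4)
```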

Ethical considerations

Ethical clearance for the collection of the data was obtained from the ethics committee of the University of South Africa’s Graduate School of Business Leadership (2014_SBL_018_CA dated 27 February 2014). All standard requirements for collecting data from human subjects were followed and no breaches of procedures were reported during the collection process.

Findings

Demographics of the sample

The total sample consisted of 3 180 employees from 53 companies. In total 57.1% reported that they were male employees, compared to 42.5% reporting that they were female employees (missing data = 0.4%). As far as race is concerned, 8.3% marked Asian, 58.4% black, 8.4% mixed race and 24.6% white (missing data = 0.3%). Their ages ranged between 20 and 72, with an average of 37.80 years (standard deviation = 9.11). As far as tenure at their present company is concerned, it varied between 1 month and 42 years, with an average of 8.39 years (standard deviation = 7.47).

As far as functions are concerned, the findings showed that 46.6% indicated that they were involved in the core business of the company, with 52.8% reporting that they fulfil supportive roles (missing data = 0.5%). With regard to position, 36.5% indicated that they hold some kind of managerial position, while 62.9% reported that they did not form part of management (missing data = 0.7%). In Table 1 the post levels of the respondents are presented (missing data = 1.8%).

TABLE 1: Description of post levels of respondents.

Concerning formal schooling, 5.0% reported that they received less than 12 years of formal schooling and 25.5% said that they had completed 12 years of formal schooling. A further 40.2% reported that they had completed a degree or diploma, while 28.9% indicated that they had a higher degree or higher diploma (missing data = 0.4%).

Descriptive statistics of the shortened Corporate Entrepreneurship Assessment Instrument

The descriptive statistics for the total instrument, as well as for the five subscales, are presented in Table 2. The sample size was 3180. The maximum total score was 98 and the minimum total score was 30 (20 items). For the subscales, the maximum score was 20 and the minimum 4 (4 items).

TABLE 2: Descriptive statistics for subscales and shortened Corporate Entrepreneurship Assessment Instrument total score.

Almost all subscales were skewed to the left, with the exception of time availability, which was within the boundaries of normality (-0.281 to 0.281). The total score was also skewed to the left, with a value of -0.291. With regard to kurtosis, the subscale time availability had a heavier tail and the subscale organisational boundaries had a sharper peak. However, the total CEAI score of 0.257 fell well within the boundaries of -0.47 to 0.62.

Reliability

Reliability was reported as per the Cronbach’s alpha coefficient and the strict parallel method, as generated through the SPSS-23 programme. The coefficient for the total instrument was 0.758 (20 items). The unbiased reliability was 0.723 and the common inter-item correlation was 0.115. The coefficients for the subscales are presented in Table 3.

TABLE 3: Reliability coefficients and common inter-item correlation for subscales and Corporate Entrepreneurship Assessment Instrument total score.

It is interesting to note from the above that the common inter-item correlation related positively to reliability for the subscales (each with four items), though this parallel was not found for the total instrument (with 20 items).

Criterion-related validity

The results pertaining to the correlation between the CEAI and innovative behaviour are presented in Table 4.

TABLE 4: The correlation between Corporate Entrepreneurship Assessment Instrument scores and innovative behaviour in the workplace.

The correlation coefficients presented in Table 4 reveal that the size of the correlations is small, with the highest coefficients just below the threshold of 0.3, which Cohen (1988) set for a medium effect.

Construct validity

Evidence on construct validity is presented under four subheadings.

Homogeneity of the items

As stated above, the homogeneity of the items, expressed as inter-item correlations, could be indicative of construct validity. The inter-item correlation of the subscales varies between 0.201 and 0.384, and for the total score it is 0.115 (see Table 3). All the subscales fall within the set parameters (0.15 < R < 0.50; Clark & Watson 1995), but the total falls outside these boundaries.

As stated before, Cronbach’s alpha could also be indicative of homogeneity. For the total CEAI score as well as for work discretion, a margin of 0.7 was reached. Unlike in the previous paragraph, where inter-item correlations were used, the CEAI met the requirement of homogeneity using the internal consistency measure.

Factorial validity

The Kaiser–Meyer–Olkin measure of sampling adequacy was performed and the value was 0.805. Bartlett’s test of sphericity was also conducted and the approximate chi-square value of 11753.89 (degrees of freedom = 190) was significant at a level smaller than 0.001. Given that these values were acceptable, the principal component method was used for factor extraction, based on eigenvalues greater than 1. Five factors met the eigenvalue criterion, and these five factors explained 50.5% of the variance in the data. The factors were rotated using the Varimax method with Kaiser normalisation, and the result is presented in Table 5. In Table 5 values higher than 0.4 are bolded.

TABLE 5: Rotated component matrix of Corporate Entrepreneurship Assessment Instrument items.

The above rotation converged in five iterations. The results presented in Table 5 reflect 100% compliance with an ideal solution.

Known-group variation and differences

Validity was also assessed by considering whether the measure could distinguish between groups where differences were expected. In this case it was foreseen that government organisations (including parastatal entities) would show less corporate entrepreneurship than private business entities and that non-managers would similarly show less corporate entrepreneurship than managers. The results revealed that the total scores of private business (N = 1983, mean = 66.61, standard deviation = 8.98), parastatal entities (N = 480, mean = 64.53, standard deviation = 8.77) and government organisations (N = 719, mean = 64.85, standard deviation = 10.04) differed significantly (F = 15.94, p < 0.001). The Scheffé post hoc test showed that parastatal entities and government organisations formed a homogeneous subset, which differed from private business. The results showed that managers (N = 1160, mean = 65.58, standard deviation = 9.38) did not score significantly differently from non-managers (N = 2001, mean = 66.09, standard deviation = 9.16) on corporate entrepreneurship (F = 2.22, p = 0.136).

Convergent and discriminant validity

It was hypothesised that corporate entrepreneurship would correlate significantly with certain constructs, and not with others. From Tables 4 and 6, it can be read that corporate entrepreneurship does not correlate more with innovative behaviour (R = 0.267 and R = 0.277; Table 4) than with more generic organisational attitudes (R = 0.420 for employee engagement and R = 0.311 for organisational commitment; Table 6).

TABLE 6: Correlation between Corporate Entrepreneurship Assessment Instrument scores and related measures.

As hypothesised, the CEAI correlates more with active forms of leadership (transformational and transactional) than with passive forms of leadership (laissez-faire). The CEAI did not correlate more with a measure of effective performance management than with a general measure of human resources practices, as had been hypothesised. However, the CEAI did correlate more, as expected, with employee engagement than with organisational commitment. Only the correlation with human resources practices surpassed the margin of 0.5, indicating a ‘hefty’ correlation (Gregory 2011), but at total score level almost all met the 0.3 threshold Cohen (1988) set for a medium effect. Noticeably absent are the measures of innovative behaviour, reported in Table 4.

As explained above, factor analysis was also performed as a measure of convergent and divergent validity. Presenting the full results would be extensive; a summary is presented in Table 7.

TABLE 7: Distinctiveness of Corporate Entrepreneurship Assessment Instrument items and other items exposed to factor analysis.

In all the factor analyses summarised above, the requirements for defining the CEAI as a distinct measure were met. This is evidence of divergent validity.

Discussion

Data were sampled from 53 companies. In total, 3 180 employees were respondents in this study. From a demographic point of view, most respondents were men, members of the black group, and employed in non-core roles. Furthermore, most respondents reported that they did not form part of management. The majority categorised themselves as technically skilled and as being part of junior management or working at a supervisory level. Though their demographic characteristics varied widely, the respondents’ profiles overall mirrored those of the current South African workforce. This meets the call for using non-Western data to verify the psychometric properties of the CEAI.

The descriptive statistics for the CEAI are presented in Table 2. Given the sample size and the broad collection of companies surveyed, these could serve as guidelines when practitioners and researchers administer the instrument. They should, however, take note that median scores are higher than the mean scores and should consider this when they interpret the results of their tests. Given that the distribution for the total CEAI is close to normal, it may be advisable to focus on that score. This meets the call for localised norms for the use of the CEAI.

Reliability for the total CEAI, reported as per Cronbach’s alpha coefficient and the strict parallel unbiased reliability, was acceptable at 0.758 and 0.723 respectively. As coefficients for some of the subscales were below the set margin, it would be desirable to rather use the total score.

Tests of criterion-related validity revealed that the CEAI subscale of management support correlated most strongly with innovation at work, while the contribution of the subscale time availability was small. When combining the subscales, the correlation between the CEAI and innovation at work remained small (R = 0.277 and R = 0.267). Evidence of criterion-related validity was thus lacking.

Evidence on construct validity is discussed below, drawing on several measures, including some results already presented above. The homogeneity of the items (as reflected in the inter-item correlations) was acceptable for the subtests, but not for the total score. The results of the factor analysis, reported in Table 5, confirm the relatedness of the subscale items. However, contrary to what is suggested by the inter-item correlations of the total instrument, the results of the multiple factor analyses, reported in Table 7, suggest that the items of the total instrument converge. The Cronbach’s alpha also suggests homogeneity. As such, it is judged that the homogeneity of the items supports construct validity.

Factorial validity also forms part of construct validity. All the requirements were met to perform a factor analysis on the items of the CEAI. Five factors, explaining 50.5% of the variance in the data, were extracted. This is an acceptable amount of variance explained. When rotating the axes, all the items of the CEAI loaded on factors in accordance with the design of the instrument and per subscale. The results presented in Table 5 reflect 100% compliance with the theorised solution and therefore form evidence of factorial validity.

The findings pertaining to known-group variation, as evidence of construct validity, were mixed. The CEAI scores of government organisations (including parastatal entities) were lower than those of private business entities, as predicted, but managers did not score higher than non-managers, as was hypothesised.

Convergent and divergent validity results were mixed. It was hypothesised that corporate entrepreneurship would correlate more with innovative behaviour than with more generic organisational attitudes. Even with hindsight, it is difficult to explain why the CEAI would correlate more with employee engagement (R = 0.420) and organisational commitment (R = 0.311) than with the two measures of innovative behaviour (R = 0.267 and R = 0.277). However, the CEAI correlated more, as hypothesised, with employee engagement than with organisational commitment. Also, as hypothesised, the CEAI correlated more with active forms of leadership (transformational and transactional) than with more passive forms (laissez-faire). The only correlation which surpassed the margin of 0.50 was with the generic measure of human resources practices. This correlation was even stronger than the correlation with the measure of effective performance management. Using the correlation matrix as point of departure, it can only be concluded that the results pertaining to convergent and divergent validity are not particularly convincing. However, the results of the factor analysis performed as evidence of divergent validity (Table 7) were confirmatory. It was demonstrated that CEAI items are distinct from the items of six other measures. This provides clear evidence of divergent validity.

Conclusion

In this article the psychometric properties of a shortened version of the CEAI are presented. The discussion of the psychometric properties was informed by data from a relatively large sample drawn from numerous companies. This sample size compares well with those of other studies investigating psychometric properties. The reliability scores of the total instrument were acceptable. Also, though the validity evidence was mixed, much of the evidence found supported validity. It may be expected that, when evidence is collected from a large number of sources, not all of it will yield consistent confirmation. Given all the evidence provided, particularly the evidence obtained from both applications of factor analysis, it is judged that the CEAI has acceptable validity.

Recommendations

Researchers and practitioners are urged to use the shortened version of the CEAI, with its 20 items rather than the original 48. The shortened version of the CEAI showed acceptable reliability and validity, and the use of the central statistics provided in Table 2 of this article is recommended, particularly in the South African or similar contexts. Researchers and practitioners can also now exploit the rich theory and empirical knowledge pertaining to the CEAI within the South African context.

Limitations

Though the sample size was relatively large, and sampling within organisations was done randomly, organisations were selected on the basis of convenience. The generalisability of the results is thus limited. Furthermore, a judgement had to be made regarding the overall validity of the instrument, as not all the indicators of validity were positive. Although this subjectivity was uncomfortable, most authors refer to validity assessment as a judgement call. Further research is needed to demonstrate evidence of validity. Also, additional statistical techniques, such as structural equation modelling, could be used in future studies. This may provide additional insights on the topic. The research made use of only a single method of data collection, namely self-reporting. This limitation may be mitigated by adding additional methods of reporting, and this is recommended for future studies.

Acknowledgements

Competing interests

The author declares that he has no financial or personal relationships which may have inappropriately influenced him in writing this article.

References

Adegoke, O., Walumbwa, F.O. & Myers, A., 2012, ‘Innovation strategy, human resource policy, and firms’ revenue growth: The roles of environmental uncertainty and innovation performance’, Decision Sciences 43(2), 273–301. https://doi.org/10.1111/j.1540-5915.2011.00350.x

Allen, N.J. & Meyer, J.P., 1990, ‘The measurement and antecedents of affective, continuance and normative commitment to the organisation’, Journal of Occupational Psychology 63, 1–18. https://doi.org/10.1111/j.2044-8325.1990.tb00506.x

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, 1999, Standards for educational and psychological testing, American Educational Research Association, Washington, DC.

Antonakis, J., Avolio, B.J. & Sivasubramaniam, N., 2003, ‘Context and leadership: An examination of the nine-factor full-range leadership theory using the Multifactor Leadership Questionnaire’, The Leadership Quarterly 14, 261–295. https://doi.org/10.1016/S1048-9843(03)00030-4

Avolio, B.J., Bass, B.M. & Jung, D., 1995, MLQ: Technical report, Mind Garden, Redwood City, CA.

Avolio, B.J., Bass, B.M. & Jung, D., 1999, ‘Re-examining the components of transformational and transactional leadership using the Multifactor Leadership Questionnaire’, Journal of Occupational and Organizational Psychology 7, 441–462. https://doi.org/10.1348/096317999166789

Bakker, A.B., Schaufeli, W.B., Leiter, M.P. & Taris, T.W., 2008, ‘Work engagement: An emerging concept in occupational health psychology’, Work and Stress 22(3), 187–200. https://doi.org/10.1080/02678370802393649

Bhardwaj, B.R., 2012, ‘Internal environment for corporate entrepreneurship: Assessing CEAI model for emerging economies’, Journal of Chinese Entrepreneurship 4(1), 70–87. https://doi.org/10.1108/17561391211200948

Bono, J.E. & Judge, T.A., 2004, ‘Personality and transformational and transactional leadership: A meta-analysis’, Journal of Applied Psychology 89(5), 901–910. https://doi.org/10.1037/0021-9010.89.5.901

Brazeal, D.V., Schenkel, M.T. & Kumar, S., 2014, ‘Beyond the organizational bounds in CE research: Exploring personal and relational factors in a flat organizational structure’, Journal of Applied Management and Entrepreneurship 19(2), 78–106. https://doi.org/10.9774/GLEAF.3709.2014.ap.00006

Campbell, D. & Fiske, D., 1959, ‘Convergent and discriminant validation by the multitrait-multimethod matrix’, Psychological Bulletin 56(2), 81–105. https://doi.org/10.1037/h0046016

Cho, E. & Kim, S., 2015, ‘Cronbach’s coefficient alpha: Well known but poorly understood’, Organizational Research Methods 18(2), 207–230. https://doi.org/10.1177/1094428114555994

Choi, B.K., Moon, H.K. & Ko, W., 2013, ‘An organization’s ethical climate, innovation, and performance: Effects of support for innovation and performance evaluation’, Management Decision 51, 1250–1275. https://doi.org/10.1108/MD-Sep-2011-0334

Clark, L.A. & Watson, D., 1995, ‘Constructing validity: Basic issues in objective scale development’, Psychological Assessment 7, 309–319. https://doi.org/10.1037/1040-3590.7.3.309

Cohen, J., 1988, Statistical power analysis for the behavioural sciences, 2nd edn., Lawrence Erlbaum Associates, Hillsdale, NJ.

Cohen, R.J., Swerdlik, M.E. & Sturman, E.D., 2013, Psychological testing and assessment: An introduction to test and measurement, 8th edn., McGraw-Hill, New York, NY.

Crespell, P. & Hansen, E., 2008, ‘Work climate, innovativeness, and firm performance in the US forest sector: In search of conceptual framework’, Canadian Journal for Forest Research 38, 1703–1715. https://doi.org/10.1139/X08-027

Cronbach, L.J., 1951, ‘Coefficient alpha and the internal structure of tests’, Psychometrika 16(3), 297–334.

Cronbach, L.J. & Gleser, G.C., 1965, Psychological tests and personnel decisions, University of Illinois Press, Urbana, IL.

Cronbach, L.J. & Shavelson, R.J., 2004, ‘My current thoughts on coefficient alpha and successor procedures’, Educational and Psychological Measurement 64(3), 391–418. https://doi.org/10.1177/0013164404266386

De Jong, J. & den Hartog, D., 2010, ‘Measuring innovative work behaviour’, Creativity and Innovation Management 19(1), 23–36. https://doi.org/10.1111/j.1467-8691.2010.00547.x

DeVellis, R.F., 2012, Scale development: Theory and applications, 3rd edn., Sage, Thousand Oaks, CA.

De Villiers-Scheepers, M.J., 2012, ‘Antecedents of strategic corporate entrepreneurship’, European Business Review 24(5), 400–424. https://doi.org/10.1108/09555341211254508

Doane, D.P. & Seward, L.E., 2009, Applied statistics in business and economics, McGraw-Hill, Boston, MA.

Durán-Vázquez, R., Lorenzo-Valdés, A. & Moreno-Quezada, G.E., 2012, ‘Innovation and CSR impact on financial performance of selected companies in Mexico’, Journal of Entrepreneurship, Management and Innovation 8(3), 5–20.

Field, A., 2009, Discovering statistics using SPSS, 3rd edn., Sage, Los Angeles, CA.

García-Morales, V.J., Lloréns-Montes, F.J. & Verdú-Jover, A.J., 2008, ‘The effects of transformational leadership on organizational performance through knowledge and innovation’, British Journal of Management 19(4), 299–319. https://doi.org/10.1111/j.1467-8551.2007.00547.x

Golla, E. & Johnson, R., 2013, ‘The relationship between transformational and transactional leadership styles and innovation commitment and output at commercial software companies’, The Business Review Cambridge 21(1), 337–343.

Grant, R., 2012, Contemporary strategy analysis: Text and cases, Blackwell, Malden, MA.

Gregory, R.J., 2011, Psychological testing: History, principles, and applications, 6th edn., Pearson, Boston, MA.

Grobler, P., Wärnick, S., Carrell, M.R., Elbert, N.F. & Hatfield, R.D., 2006, Human resource management in South Africa, 3rd edn., Thomson Learning, London.

Guilford, J.P. & Fruchter, B., 1978, Fundamental statistics in psychology and education, 6th edn., McGraw-Hill, New York, NY.

Gunday, G., Ulusoy, G., Kilic, K. & Alpkan, L., 2011, ‘Effects of innovation types on firm performance’, International Journal of Production Economics 133(2), 662–676. https://doi.org/10.1016/j.ijpe.2011.05.014

Hair, F.J., Black, W.C., Babin, B.J. & Anderson, R.E., 2010, Multivariate data analysis, 7th edn., Pearson/Prentice Hall, Upper Saddle River, NJ.

Hajipour, B. & Mas’oomi, S., 2011, ‘A survey on the relationship between financial performance and corporate venturing’, Interdisciplinary Journal of Contemporary Research in Business 2(12), 890–901, viewed 16 November 2016, from http://search.proquest.com/docview/876011441?accountid=14648

Hind, C. & Steyn, R., 2015, ‘Corporate entrepreneurial activity: Distilling the concept’, The Southern African Journal of Entrepreneurship and Small Business Management 7, 69–87. https://doi.org/10.4102/sajesbm.v7i1.7

Holt, D.T., Rutherford, M.W. & Clohessy, G.R., 2007, ‘Corporate entrepreneurship: An empirical look at individual characteristics, context, and process’, Journal of Leadership & Organizational Studies 13(4), 40–54, viewed 16 November 2016, from http://search.proquest.com/docview/203135466?accountid=14648; https://doi.org/10.1177/10717919070130040701

Hornsby, J.S., Kuratko, D.F., Holt, D.T. & Wales, W.J., 2013, ‘Assessing a measurement of organizational preparedness for corporate entrepreneurship’, The Journal of Product Innovation Management 30(5), 937. https://doi.org/10.1111/jpim.12038

Hornsby, J.S., Kuratko, D.F. & Zahra, S.A., 2002, ‘Middle managers’ perception of the internal environment for corporate entrepreneurship: Assessing a measurement scale’, Journal of Business Venturing 17(3), 253–273. https://doi.org/10.1016/S0883-9026(00)00059-8

Kaiser, H.F. & Michael, W.B., 1975, ‘Domain validity and generalizability’, Educational and Psychological Measurement 35, 31–35. https://doi.org/10.1177/001316447503500103

Kamffer, L., 2004, ‘Factors impacting on corporate entrepreneurial behaviour within a retail organization: A case study’, Master’s degree dissertation, University of South Africa, Pretoria.

Karimi, A., Malekmohamadi, I., Daryani, M.A. & Rezvanfar, A., 2011, ‘A conceptual model of intrapreneurship in the Iranian agricultural extension organization’, Journal of European Industrial Training 35(7), 632–657. https://doi.org/10.1108/03090591111160779

Kleysen, R.F. & Street, C.T., 2001, ‘Toward a multi-dimensional measure of individual innovative behaviour’, Journal of Intellectual Capital 2(3), 284–296. https://doi.org/10.1108/EUM0000000005660

Kuratko, D.F. & Audretsch, D.B., 2013, ‘Clarifying the domains of corporate entrepreneurship’, International Entrepreneurship and Management Journal 9(3), 323–335. https://doi.org/10.1007/s11365-013-0257-4

Kuratko, D.F., Hornsby, J.S. & Covin, J.G., 2014, ‘Diagnosing a firm’s internal environment for corporate entrepreneurship’, Business Horizons 57(1), 37–47. https://doi.org/10.1016/j.bushor.2013.08.009

Lawshe, C.H., 1975, ‘A quantitative approach to content validity’, Personnel Psychology 28, 563–575. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x

Lord, F. & Novick, M., 1968, Statistical theories of mental test scores, Addison-Wesley, Reading, MA.

Martuza, V.R., 1977, Applying norm-referenced and criterion-referenced measurement in education, Allyn & Bacon, Boston, MA.

Marzban, S., Seyed, M.M. & Ramezan, M., 2013, ‘The effective factors in organizational entrepreneurship climate’, Journal of Chinese Entrepreneurship 5(1), 76–93. https://doi.org/10.1108/17561391311297897

Moerdyk, A., 2015, The principles and practice of psychological assessment, 2nd edn., Van Schaik, Pretoria.

Muenjohn, N. & Armstrong, A., 2008, ‘Evaluating the structural validity of the MLQ, capturing the leadership factors of transformational-transactional leadership’, Contemporary Management Research 4(1), 3–14. https://doi.org/10.7903/cmr.704

Newton, P.E., 2012, ‘Clarifying the consensus definition of validity’, Measurement 10, 1–29. https://doi.org/10.1080/15366367.2012.669666

Nicholls, A.R., Madigan, D.J., Backhouse, S.H. & Levy, A.R., 2017, ‘Personality traits and performance enhancing drugs: The Dark Triad and doping attitudes among competitive athletes’, Personality and Individual Differences 112, 113–116. https://doi.org/10.1016/j.paid.2017.02.062

Nikolov, K. & Urban, B., 2013, ‘Employee perceptions of risks and rewards in terms of corporate entrepreneurship participation’, SA Journal of Industrial Psychology 39(1), 1–13. https://doi.org/10.4102/sajip.v39i1.1047

Novick, M.R. & Lewis, C., 1967, ‘Coefficient alpha and the reliability of composite measurements’, Psychometrika 32(1), 1–13. https://doi.org/10.1007/BF02289400

Nunnally, J.C. & Bernstein, I.H., 1994, Psychometric theory, 3rd edn., McGraw-Hill, New York, NY.

Nusair, T.T., 2013, ‘The role of climate for innovation in job performance: Empirical evidence from commercial banks in Jordan’, International Journal of Business and Social Science 4, 208–217.

Nyawose, M., 2009, ‘The relationship between human resources management practices, organisational commitment and turnover intentions amongst engineering professionals’, Master’s degree dissertation, University of South Africa, Pretoria, South Africa.

Nybakk, E. & Jenssen, J.I., 2012, ‘Innovation strategy, working climate, and financial performance in traditional manufacturing firms: An empirical analysis’, International Journal of Innovation Management 16, 1–26. https://doi.org/10.1142/S1363919611003374

Overstreet, R.E., Hanna, J.B., Byrd, T.A., Cegielski, C.G. & Hazen, B.T., 2013, ‘Leadership style and organizational innovativeness drive motor carriers toward sustained performance’, The International Journal of Logistics Management 24(2), 247–270. https://doi.org/10.1108/IJLM-12-2012-0141

Pallant, J., 2010, SPSS survival manual, 4th edn., McGraw-Hill, Berkshire.

Panuwatwanich, K., Stewart, R.A. & Mohamed, S., 2008, ‘Enhancing innovation and firm performance: The role of climate for innovation in design firms’, in Proceedings of the 5th International Conference on Innovation in Architecture, Engineering and Construction, Antalya, Turkey.

Peterson, R.A., 1994, ‘A meta-analysis of Cronbach’s coefficient alpha’, Journal of Consumer Research 21(2), 381. https://doi.org/10.1086/209405

Polit, D.F. & Beck, C.T., 2006, ‘The content validity index: Are you sure you know what’s being reported? Critique and recommendations’, Research in Nursing and Health 29, 489–497. https://doi.org/10.1002/nur.20147

Schaufeli, W.B. & Bakker, A.B., 2004, ‘Job demands, job resources, and their relationship with burnout and engagement: A multi-sample study’, Journal of Organizational Behavior 25(3), 293–315. https://doi.org/10.1002/job.248

Schaufeli, W.B., Bakker, A.B. & Salanova, M., 2006, ‘The measurement of work engagement with a short questionnaire: A cross-national study’, Educational and Psychological Measurement 66(4), 701–716. https://doi.org/10.1177/0013164405282471

Shaughnessy, J.J., Zechmeister, E.B. & Zechmeister, J.S., 2009, Research methods in psychology, 8th edn., McGraw-Hill, New York, NY.

Sijtsma, K., 2009, ‘On the use, the misuse, and the very limited usefulness of Cronbach’s alpha’, Psychometrika 74(1), 107–120. https://doi.org/10.1007/s11336-008-9101-0

Spatz, C. & Kardas, E.P., 2008, Research methods: Ideas, techniques, & reports, McGraw-Hill, New York, NY.

Steyn, R., 2010, The development and validation of the quality of performance appraisal systems questionnaire, a paper presented at the 27th International Congress of Applied Psychology, Melbourne, Vic, Australia, 11–16 July.

Steyn, R., 2012, ‘Human resource practices and employee attitudes: A study of individuals in ten South African companies’, Alternation 5, 167–184.

Strydom, A.S., 2013, ‘The influence of organizational behaviour variables on corporate entrepreneurship’, Doctoral degree thesis, University of South Africa, Pretoria.

Tredoux, C. & Durrheim, K., 2013, Numbers, hypotheses & conclusions: A course in statistics for the social sciences, 2nd edn., Juta, Cape Town.

Van Wyk, R. & Adonisi, M., 2011, ‘An eight-factor solution for the Corporate Entrepreneurship Assessment Instrument’, African Journal of Business Management 5(8), 3047–3055.

Wilson, F.R., Pan, W. & Schumsky, D.A., 2012, ‘Recalculation of the critical values for Lawshe’s content validity ratio’, Measurement and Evaluation in Counselling and Development 45(3), 197–210. https://doi.org/10.1177/0748175612440286

Footnotes

1. Discriminant validity is often used as a synonym for divergent validity. In this paragraph the term ‘discriminant validity’ is preferred so as to align the content of the text with the sources consulted. In the rest of the text the term ‘divergent validity’ is used.

2. In most of the discussions that follow, reference will be made to the Strydom (2013) adaptation of the CEAI, that is, the shortened version of the CEAI. All the results in the results section refer to the shortened version of the CEAI. To facilitate the flow of the argument, reference will not always be made explicitly to the shortened version of the CEAI.


