Introduction

As indicated by the name, assess() is used to assess a model estimated using the csem() function.

In cSEM, model assessment is considered to be any task that in some way or another seeks to assess the quality of the estimated model without conducting a statistical test (tests are covered by the test_* family of functions). Quality in this case is taken to be a catch-all term for all common aspects of model assessment. This mainly comprises fit indices, model selection criteria, reliability estimates, common validity assessment criteria, effect sizes, and other related quality measures/indices that do not rely on a formal test procedure. Hereinafter, we will refer to a generic (fit) index, quality, or assessment measure as a quality criterion.

Currently the following quality criteria are implemented:

  • Convergent and discriminant validity assessment:
    • The average variance extracted (AVE)
    • The Fornell-Larcker criterion
    • The heterotrait-monotrait ratio of correlations (HTMT)
  • Congeneric reliability ($\rho_C$), also known as, e.g., composite reliability, construct reliability, (unidimensional) omega, Jöreskog's $\rho$, $\rho_A$, or $\rho_B$.
  • Tau-equivalent reliability ($\rho_T$), also known as, e.g., Cronbach's alpha, alpha, $\alpha$, coefficient alpha, Guttman's $\lambda_3$, or KR-20.
  • Distance measures
    • The standardized root mean square residual (SRMR)
    • The geodesic distance (DG)
    • The squared Euclidean distance (DL)
    • The maximum-likelihood distance (DML)
  • Fit indices
    • The $\chi^2$-statistic
    • The $\chi^2/\text{df}$-ratio
    • The goodness-of-fit index (GFI)
    • The standardized root mean square residual (SRMR)
    • The root mean square error of approximation (RMSEA)
    • The normed fit index (NFI)
    • The non-normed fit index (NNFI)
    • The comparative fit index (CFI)
    • The incremental fit index (IFI)
    • The root mean square outer residual covariance ($\text{RMS}_\theta$)
  • The Goodness-of-Fit (GoF) proposed by Tenenhaus, Amato, and Esposito Vinzi (2004).
  • The variance inflation factors (VIF) for the structural equations as well as for Mode B regression equations (if .approach_weights = "PLS-PM").
  • The coefficient of determination and the adjusted coefficient of determination ($R^2$ and $R^2_{adj}$)
  • A measure of the effect size (Cohen's $f^2$).
  • Direct, indirect and total effect assessment.
  • Several model selection criteria as described in Sharma et al. (2019).

For implementation details see the Methods & Formulae section.

Syntax & Options

assess(
  .object              = NULL, 
  .only_common_factors = TRUE, 
  .quality_criterion   = c("all", "aic", "aicc", "aicu", "bic", "fpe", "gm", "hq",
                           "hqc", "mallows_cp", "ave",
                           "rho_C", "rho_C_mm", "rho_C_weighted", 
                           "rho_C_weighted_mm", "dg", "dl", "dml", "df",
                           "effects", "f2", "fl_criterion", "chi_square", "chi_square_df",
                           "cfi", "gfi", "ifi", "nfi", "nnfi", 
                           "reliability",
                           "rmsea", "rms_theta", "srmr",
                           "gof", "htmt", "r2", "r2_adj",
                           "rho_T", "rho_T_weighted", "vif", 
                           "vifmodeB"),
  ...
)

.object

An object of class cSEMResults resulting from a call to csem().

.quality_criterion

A character string or a vector of character strings naming the quality criterion to compute. By default all quality criteria are computed ("all"). See assess() for a list of possible candidates.

.only_common_factors

Logical. Should only concepts modeled as common factors be included when calculating one of the following quality criteria: AVE, the Fornell-Larcker criterion, HTMT, and all reliability estimates. Defaults to TRUE.

...

Further arguments passed to functions called by assess(). See args_assess_dotdotdot for a complete list of available arguments.

Like all postestimation functions, assess() can be called on any object of class cSEMResults. The output is a named list of the quality criteria given to .quality_criterion. By default all possible quality criteria are calculated (.quality_criterion = "all").
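To fix ideas, a minimal call could look as follows. The model syntax and the threecommonfactors dataset are taken from the package's standard examples; treat this as an illustrative sketch rather than a definitive workflow.

library(cSEM)

model <- "
# Structural model
eta2 ~ eta1
eta3 ~ eta1 + eta2

# Measurement model (common factors)
eta1 =~ y11 + y12 + y13
eta2 =~ y21 + y22 + y23
eta3 =~ y31 + y32 + y33
"

res <- csem(.data = threecommonfactors, .model = model)

# All quality criteria (the default)
assess(res)

# Only a subset
assess(res, .quality_criterion = c("ave", "rho_C", "htmt"))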

Details

In line with all of cSEM's postestimation functions, assess() is a generic function with methods for objects of class cSEMResults_default, cSEMResults_multi, and cSEMResults_2ndorder. In cSEM every cSEMResults_* object must also have class cSEMResults for internal reasons. When using one of the major postestimation functions, method dispatch is therefore technically done on one of the cSEMResults_* class attributes, ignoring the cSEMResults class attribute. As long as assess() is used directly, method dispatch is of no practical concern to end users.

Composite models vs. common factor models

Some assessment measures are inherently tied to the common factor model. It is therefore unclear how to interpret their results in the context of a composite model. Consequently, their computation is suppressed by default for constructs modeled as composites. Currently, this applies to the following quality criteria:

  • AVE and validity assessment based thereon (i.e., the Fornell-Larcker criterion)
  • HTMT and validity assessment based thereon
  • All reliability measures

It is possible to force computation of all quality criteria for constructs modeled as composites by setting .only_common_factors = FALSE; however, we explicitly warn users to interpret the results with caution, as they may not even have a conceptual meaning.

All quality criteria assume that the estimated loadings, construct correlations, and path coefficients involved in the computation of a specific quality measure are consistent estimates of their theoretical population counterparts. If the user deliberately chooses an approach that yields inconsistent estimates (by setting .disattenuate = FALSE in csem() when the estimated model contains constructs modeled as common factors), assess() will still compute all quantities; however, quantities such as the AVE or the congeneric reliability $\rho_C$ inherit this inconsistency.

Methods & Formulae

This section provides technical details and the relevant formulae. For the notation and terminology used in this section, see the Notation and the Terminology help files.

Average Variance Extracted (AVE)

Definition

The average variance extracted (AVE) was first proposed by Fornell and Larcker (1981). Several definitions exist. For ease of comparison to extant literature the most common definitions are given below:

  • The AVE for a generic construct/latent variable $\eta$ is an estimate of how much of the variation of its indicators is due to the assumed latent variable. Consequently, the share of unexplained, i.e., error variation is 1 - AVE.
  • The AVE for a generic construct/latent variable $\eta$ is the share of the total indicator variance (i.e., the sum of the indicator variances of all indicators connected to the construct) that is captured by the (indicator) true scores.
  • The AVE for a generic construct/latent variable $\eta$ is the ratio of the sum of the (indicator) true score variances (explained variation) relative to the sum of the total indicator variances (total variation, i.e., the sum of the indicator variances of all indicators connected to the construct).
  • Since for the regression of $x_k$ on $\eta_k$, the R squared ($R^2_k$) is equal to the share of variation of $x_k$ explained by $\eta_k$ relative to the total variation of $x_k$, the AVE for a generic construct/latent variable $\eta$ is equal to the average over all $R^2_k$.
  • The AVE for a generic construct/latent variable $\eta$ is the sum of the squared correlations between the indicators $x_k$ and the (indicator) true score $\eta_k$ relative to the sum of the indicator variances of all indicators connected to the construct in question.

It is important to stress that, although different in wording, all definitions are synonymous!

The AVE is inherently tied to the common factor model. It is therefore unclear how to interpret the AVE for constructs modeled as composites. Consequently, the computation is suppressed by default for constructs modeled as composites. It is possible to force computation of the AVE for constructs modeled as composites using .only_common_factors = FALSE; however, we explicitly warn users to interpret the results with caution, as they may not even have a conceptual meaning.

Formulae

Using the results and notation derived and defined in the Notation help file, the AVE for a generic construct is:

$$ AVE = \frac{\text{Sum indicator true score variances}}{\text{Sum indicator variances}} = \frac{\sum Var(\eta_k)}{\sum Var(x_k)} = \frac{\sum\lambda^2_k}{\sum(\lambda^2_k + Var(\varepsilon_k))} $$

If $x_k$ is standardized (i.e., $Var(x_k) = 1$), the denominator reduces to $K$ and the AVE for a generic construct is:

$$ AVE = \frac{1}{K}\sum \lambda^2_k = \frac{1}{K}\sum \rho_{x_k, \eta}^2 $$

As an important consequence, the AVE is closely tied to the communality. Communality ($COM_k$) is defined as the proportion of variation in an indicator that is explained by its common factor. Empirically, it is the square of the standardized loading of the $k$'th indicator ($\lambda^2_k$). Since indicators, scores/proxies, and subsequently loadings are always standardized in cSEM, the squared loading is simply the squared correlation between the indicator and its related construct/common factor. The AVE is also directly related to the indicator reliability, defined as the squared correlation between an indicator $k$ and its related proxy true score (see the Reliability section below), which is again simply $\lambda^2_k$. Therefore, in cSEM we always have:

$$ AVE = \frac{1}{K}\sum COM_k = \frac{1}{K}\sum \text{Indicator reliability}_k = \frac{1}{K}\sum\lambda^2_k = \frac{1}{K}\sum R^2_k $$
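As a purely illustrative numerical sketch (the loading vector below is made up, not taken from any estimation result), the AVE of a construct with standardized indicators is simply the mean squared standardized loading:

lambda <- c(0.81, 0.76, 0.69)   # standardized loadings of one indicator block (illustrative)
ave    <- mean(lambda^2)        # AVE = mean of the squared standardized loadings
ave                             # approx. 0.57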

Implementation

The function is implemented as: calculateAVE().

See also

The AVE is the basis for the Fornell-Larcker criterion.

Degrees of freedom

Definition

Degrees of freedom are calculated as the difference between the number of non-redundant off-diagonal elements of the empirical indicator correlation matrix $\mathbf{S}$ and the number of model parameters.

Although composite-based estimators retrieve the parameters of the postulated model by forming composites, which involves the estimation of weights, the computation of the degrees of freedom eventually depends on the postulated model and the parameters it implies. Most notably, a common factor model estimated by a composite-based approach such as PLS has the same degrees of freedom as, e.g., classical maximum likelihood estimation of the same model.

Formulae

$$ \begin{aligned} \text{df} &= \text{\# non-redundant off-diagonal elements of the empirical indicator correlation matrix } \mathbf{S} \\ &\quad - \text{\# model parameters} \end{aligned} $$

If the model contains only linear terms the model parameters are:

  • # free correlations between exogenous constructs
  • # specified correlations between endogenous constructs
  • # structural parameters

In addition, for each construct $\eta_j$:

  • # of loadings if $\eta_j$ is modeled as a common factor
  • # of specified measurement error correlations between items of construct $\eta_j$ if $\eta_j$ is modeled as a common factor
  • # of weights of $\eta_j$ minus 1 if $\eta_j$ is modeled as a composite. One weight per block is fixed and hence not counted as a model parameter since the variance of the composite is scaled to be unity.
  • # of non-redundant off-diagonal elements of $\boldsymbol{\Sigma}_j$ if $\eta_j$ is modeled as a composite.

If the model contains second-order terms the model parameters are similar:

  • # free correlations between exogenous constructs
  • # specified correlations between endogenous constructs
  • # structural parameters. Note: relations between the constructs measuring/forming a second-order construct are not structural paths!

In addition, for each construct $\eta_j$ (including the second-order constructs):

  • # of loadings if $\eta_j$ is modeled as a common factor
  • # of specified measurement error correlations between items of construct $\eta_j$ if $\eta_j$ is modeled as a common factor
  • # of weights of $\eta_j$ minus 1 if $\eta_j$ is modeled as a composite. One weight per block is fixed and hence not counted as a model parameter since the variance of the composite is scaled to be unity.
  • # of non-redundant off-diagonal elements of $\boldsymbol{\Sigma}_j$ if $\eta_j$ is modeled as a composite.

Notes
  1. If all constructs are allowed to freely covary, i.e., there is no structural model and hence no structural parameters, all constructs are considered exogenous.
  2. If the structural model contains nonlinear terms (e.g., $\eta^2_1$ or $\eta_1\eta_2$), the computation of the degrees of freedom is currently unclear (at least to us). A warning is printed to inform the user that the calculation may not be correct.
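To make the counting rule concrete, consider a purely linear model with three freely correlated common factors measured by three indicators each (an illustrative assumption, not output of calculateDf()):

K   <- 9                            # number of indicators
n_S <- K * (K - 1) / 2              # 36 non-redundant off-diagonal elements of S

n_cor_exog <- 3                     # free correlations among the three (exogenous) constructs
n_loadings <- 9                     # one loading per indicator (common factor measurement)

df <- n_S - (n_cor_exog + n_loadings)
df                                  # 24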

Implementation

The function is implemented as calculateDf().

See also:

Degrees of freedom are required for several fit measures.

Fit Indices

Definition

Fit indices for confirmatory factor analysis (CFA) were first introduced by Bentler and Bonett (1980). Since then a large number of indices has been defined. Contrary to exact tests of model fit, the purpose of fit indices is to measure the fit of a structural equation model on a continuous scale. For normed fit indices this scale is between 0 and 1. Fit indices can be divided into two classes:

  • ‘badness of fit’ (resp. ‘lack of fit’) indices; a smaller value indicates a better fit.
  • ‘goodness of fit’ indices; a higher value represents a better fit.

Several studies have analyzed the empirical and theoretical properties of fit indices in the context of CFA, where concepts are expressed by latent variables. In contrast, only little is known about the properties and the performance of fit indices in composite models and for models estimated using a composite-based approach. cSEM offers a number of fit indices that are known from factor-based SEM. However, applied users should be aware that only little is known about their applicability, intuition, and interpretability in the context of models containing constructs modeled as composites or for models estimated using a composite-based approach.

Independent of the approach and model used, a particularly controversial issue is the use of cutoff values for fit indices (e.g., Marsh, Hau, and Wen 2004). In factor-based SEM cutoff values are rather popular. The basis for these are numerous simulation studies, most notably Hu and Bentler (1999). In contrast, for composite models - for better or worse - no cutoff values have been suggested.¹ When using assess() to calculate fit indices, the user should always keep in mind that the value of a fit index is just some indication of good or bad fit. Other aspects related to model fit must be considered as well. It is unreasonable to make a binary decision about rejection or non-rejection of a model by solely comparing the value of a fit index with a (more or less) arbitrary cutoff value.

The definitions of fit indices calculated by assess() are given in the following:

  • The $\chi^2$-statistic is the value of the fitting function times the sample size minus 1.
  • The $\chi^2/\text{df}$-ratio is the $\chi^2$-statistic divided by its degrees of freedom.
  • The goodness-of-fit index (GFI) measures the relative increase in fit of the specified model compared to no model at all.
  • The standardized root mean square residual (SRMR) is the square root of the mean of squared residual correlations.
  • The root mean square error of approximation (RMSEA) is the square root of the discrepancy due to approximation per degree of freedom.
  • The normed fit index (NFI) measures the increase in fit when specifying the model under consideration relative to the fit of a certain baseline model called the “null model”.
  • The non-normed fit index (NNFI) accounts for the degrees of freedom of the involved models. It is the ratio of the distance between the fit of the baseline model and the fit of the specified model (each per degree of freedom) and the distance between the fit of the baseline model and the expected fit of the specified model (each per degree of freedom).
  • The comparative fit index (CFI) estimates the relative decrease in non-centrality when specifying the model under consideration instead of the baseline model.
  • The incremental fit index (IFI) is the ratio of the distance between the fit of the baseline model and the fit of the specified model and the distance between the fit of the baseline model and the expected fit of the specified model. Its definition differs only marginally from the definition of the NNFI.
  • The root mean square outer residual covariance ($\text{RMS}_\theta$) is defined as the square root of the mean squared covariances of the residuals of the outer model. The calculation of the indicators' residual covariance matrix involves the calculation of the constructs' covariance matrix. See Lohmöller (1989).

It should be stressed again that (with the possible exception of the $\text{RMS}_\theta$) none of the above-mentioned fit indices were originally designed for composite models. The RMSEA and the CFI are non-centrality based and require specific assumptions on model and data typically made in CFA. The same applies to the IFI and the NNFI, since their calculation relies on the properties (primarily the expectation) of the test statistic when the data follow a normal distribution. In general, those assumptions are not made for composite models and composite-based estimators, respectively. For this reason, the intuition behind these indices does not carry over to composite-based SEM. Nevertheless, calculation of these indices is still possible in this case. Whether their values remain meaningful, in the sense that they can be used to assess model fit, is an open question. Furthermore, values of fit indices for composite-based estimators and factor-based estimators should not be compared. Users should always keep this aspect and the general limitations of fit indices in mind.

Formulae

The exact formulae of the fit indices as implemented in cSEM are given in the following. The term $F = F(\mathbf{S}, \boldsymbol{\Sigma}(\hat{\boldsymbol{\theta}})) = F(\mathbf{S}, \hat{\boldsymbol{\Sigma}})$ stands for the value of the maximum likelihood fitting function evaluated at $\mathbf{S}$ (the empirical covariance matrix of the indicators) and $\hat{\boldsymbol{\Sigma}}$ (the estimated model-implied covariance matrix of the indicators). The value of the maximum likelihood fitting function is computed by calculateDML().
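One common textbook form of this fitting function is $F = \log|\hat{\boldsymbol{\Sigma}}| + \text{trace}(\mathbf{S}\hat{\boldsymbol{\Sigma}}^{-1}) - \log|\mathbf{S}| - K$. The following sketch implements this form; it is not necessarily identical to the internal code of calculateDML():

# Textbook ML fitting function (a sketch, assuming the form stated above)
F_ML <- function(S, Sigma_hat) {
  K <- nrow(S)
  log(det(Sigma_hat)) + sum(diag(S %*% solve(Sigma_hat))) - log(det(S)) - K
}

# The chi^2-statistic then follows as (N - 1) * F_ML(S, Sigma_hat)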

The $\chi^2$-statistic

The $\chi^2$-statistic is defined as: $$ \chi^2 = (N-1)\cdot F $$ where $N$ is the sample size.

Main reference: K. G. Jöreskog (1969)

The $\chi^2/\text{df}$-ratio

The $\chi^2/\text{df}$-ratio is defined as: $$ \chi^2/\text{df} = (N-1)\cdot F/\text{df}_M $$ where $N$ is the sample size and $\text{df}_M$ the degrees of freedom of the estimated model.

Main reference: K. G. Jöreskog (1969)

The goodness-of-fit index (GFI)

The GFI is generally defined in analogy to the coefficient of determination ($R^2$) known from regression analysis as 1 minus the share of the weighted unexplained variance (SSE; based on the difference between $\mathbf{S}$ and $\hat{\boldsymbol{\Sigma}}$) relative to the weighted total variance (SST; based on $\mathbf{S}$):

$$ \text{GFI} = 1 - \frac{\text{trace}\left\{\left(\mathbf{W}^{-\frac{1}{2}}\lbrack\mathbf{S} - \hat{\boldsymbol{\Sigma}}\rbrack\mathbf{W}^{-\frac{1}{2}}\right)^2\right\}}{\text{trace}\left\{\left(\mathbf{W}^{-\frac{1}{2}}\mathbf{S}\,\mathbf{W}^{-\frac{1}{2}}\right)^2\right\}} $$

The matrix $\mathbf{W}$ is a weight matrix. Depending on the estimation technique used to obtain $\hat{\boldsymbol{\theta}}$, different types of GFI may be computed by choosing a particular weight matrix:

  1. If $\mathbf{W} = \hat{\boldsymbol{\Sigma}}$, the GFI is based on the SSE and the SST from a maximum likelihood estimation.
  2. If $\mathbf{W} = \mathbf{S}$, the GFI is based on the SSE and the SST from a generalized least squares (GLS) estimation.
  3. If $\mathbf{W} = \mathbf{I}$, the GFI is based on the SSE and the SST from an unweighted least squares (ULS) estimation.

Note that for any symmetric matrix $\mathbf{X}$ we have $\text{trace}(\mathbf{X}^2) = \sum_{i,j} x^2_{ij}$.
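As a sketch of case 1 ($\mathbf{W} = \hat{\boldsymbol{\Sigma}}$), the cyclic property of the trace can be used to avoid matrix square roots; S and Sigma_hat denote the empirical and the model-implied indicator correlation matrix, and this is not the package's internal implementation:

# GFI with W = Sigma_hat (ML weighting); uses trace{(W^-1/2 A W^-1/2)^2} = trace{(W^-1 A)^2}
gfi_ml <- function(S, Sigma_hat) {
  A <- solve(Sigma_hat) %*% (S - Sigma_hat)
  B <- solve(Sigma_hat) %*% S
  1 - sum(diag(A %*% A)) / sum(diag(B %*% B))
}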

Main references: Karl G. Jöreskog and Sörbom (1982), Mulaik et al. (1989) and Tanaka and Huba (1985)

The standardized root mean square residual (SRMR)

The SRMR is defined as

$$ \text{SRMR} = \sqrt{2 \sum_{j=1}^{K} \sum_{i=1}^{j} \frac{\lbrack (s_{ij} - \hat{\sigma}_{ij})/(s_{ii} s_{jj})^{1/2} \rbrack^{2}}{K (K+1)}} $$

where $K$ stands for the number of indicators, $s_{ij}$ for the empirical covariance between indicators $i$ and $j$, and $\hat{\sigma}_{ij}$ for its estimated model-implied counterpart. The SRMR describes the average distance by which the observed correlations are reproduced by the model. Therefore, smaller values are associated with a better fit. If the data are standardized, $s_{ii} = s_{jj} = 1$ holds and the formula reduces to:

$$ \text{SRMR} = \sqrt{2 \sum_{j=1}^{K} \sum_{i=1}^{j} \frac{(s_{ij} - \hat{\sigma}_{ij})^2}{K(K+1)}} $$
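For standardized indicators, the double sum is just the mean over the lower triangle of the residual matrix (the diagonal residuals are zero), so a minimal sketch reads:

# SRMR for standardized indicators (a sketch, not the internal implementation)
srmr <- function(S, Sigma_hat) {
  resid <- (S - Sigma_hat)[lower.tri(S, diag = TRUE)]
  sqrt(mean(resid^2))
}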

Main reference: Bentler (2006)

The root mean square error of approximation (RMSEA)

The RMSEA is defined as

$$ \hat{\epsilon} = \sqrt{\frac{\hat{F}_0}{\text{df}_{M}}} \quad \text{where} \quad \hat{F}_{0} = \max \Bigl( 0, F - \frac{\text{df}_{M}}{N-1} \Bigr) $$

In this formula, $\text{df}_{M}$ stands for the degrees of freedom of the specified model (see the Degrees of Freedom section for details on how the degrees of freedom are calculated). The term $\hat{F}_{0}$ is an estimator of the discrepancy due to approximation. Thus, the RMSEA measures the discrepancy due to approximation per degree of freedom.
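Translated directly into code (F_value, df_M, and N are assumed to be available from the preceding steps):

# RMSEA; direct translation of the definition above (sketch)
F0_hat <- max(0, F_value - df_M / (N - 1))
rmsea  <- sqrt(F0_hat / df_M)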

Main reference: Browne and Cudeck (1992)

The normed and non-normed fit index (NFI and NNFI)

The fit indices NFI and NNFI were among the first fit indices to be introduced (Bentler and Bonett 1980). They are defined as:

$$ \text{NFI} = \frac{F_{B} - F_{M}}{F_{B}} \quad \text{and} \quad \text{NNFI} = \frac{F_{B}/\text{df}_{B} - F_{M}/\text{df}_{M}}{F_{B}/\text{df}_{B} - 1/(N-1)} $$

The term $F_{B}$ refers to the value of the fitting function of the baseline ("null") model and $F_{M}$ to the value of the fitting function of the model under consideration. Thus, the NFI measures the increase in fit relative to the fit of the null model when specifying the model. The intuition behind the NNFI is that (in factor-based methods) the expectation of $F_{M}/\text{df}_{M}$ is equal to $1/(N-1)$. This does not automatically hold for composite-based estimators.

The NNFI measures the relative departure of the term in the numerator from its expectation (in the denominator). For this reason, the NNFI is not normed and can take values larger than 1.

Main reference: Bentler and Bonett (1980)

The comparative fit index (CFI)

The CFI is defined as:

$$ \text{CFI} = 1 - \frac{\max(0, (N-1) F_{M}-\text{df}_{M})}{\max(0, (N-1) F_{M}-\text{df}_{M}, (N-1)F_{B}-\text{df}_{B})} $$

Like the RMSEA, the CFI is a non-centrality based index. It measures the increase in fit (that is to say, the reduction in non-centrality) when specifying the model under consideration relative to the fit of the null model. The CFI is a normed index with a value of 1 indicating the best fit. Since it makes use of the assumptions of factor-based methods, its intuition does not apply to composite-based estimators.

Main reference: Bentler (1990).

The incremental fit index (IFI)

The IFI is defined as:

$$ \text{IFI} = \frac{F_{B} - F_{M}}{F_{B} - \text{df}_{M}/(N-1)} $$

The rationale underlying the IFI is that the term $F_{B} - F_{M}$ (in the numerator) is compared with its expectation $F_{B} - \text{df}_{M}/(N-1)$ (in the denominator).

Main reference: Bollen (1989)

The root mean square outer residual covariance ($\text{RMS}_\theta$)

As defined above, the $\text{RMS}_\theta$ is the square root of the mean squared covariances of the outer (measurement) model residuals; its calculation involves the construct covariance matrix. See Lohmöller (1989) for details.

See also

Several fit indices require a fitting function, i.e., a distance measure like the geodesic distance, the squared Euclidean distance or the maximum likelihood distance. These are implemented as: calculateDG(), calculateDL(), and calculateDML().

Reliability

Definition

Reliability is the consistency of measurement, i.e., the degree to which a hypothetical repetition of the same measure would yield the same results. As such, reliability is the closeness of a measure to an error free measure. It is not to be confused with validity as a perfectly reliable measure may be invalid.

Practically, reliability must be empirically assessed based on a theoretical framework. The dominant theoretical framework against which to compare empirical reliability results is the well-known true score framework, which provides the foundation for the measurement model described in the Notation help file. Based on the true score framework and using the terminology and notation of the Notation and Terminology help files, the reliability of a generic measurement is defined as:

  1. The amount of proxy true score variance, $Var(\bar\eta)$, relative to the proxy or test score variance, $Var(\hat\eta)$.
  2. This is identical to the squared correlation between the common factor and its proxy/composite or test score: $\rho_{\eta, \hat\eta}^2 = Cor(\eta, \hat\eta)^2$.

This “kind” of reliability is commonly referred to as internal consistency reliability.

Based on true score theory, three major types of measurement models are distinguished. Each type implies different assumptions which give rise to the formulae written below. The well-established names for the different types of measurement model provide natural naming candidates for the corresponding (internal consistency) reliability measures:

  1. Parallel – Assumption: $\eta_{kj} = \eta_j \longrightarrow \lambda_{kj} = \lambda_j$ and $Var(\varepsilon_{kj}) = Var(\varepsilon_j)$.
  2. Tau-equivalent – Assumption: $\eta_{kj} = \eta_j \longrightarrow \lambda_{kj} = \lambda_j$ and $Var(\varepsilon_{kj}) \neq Var(\varepsilon_{lj})$.
  3. Congeneric – Assumption: $\eta_{kj} = \lambda_{kj}\eta_j$ and $Var(\varepsilon_{kj}) \neq Var(\varepsilon_{lj})$.

In principle, the test score $\hat\eta$ is a weighted linear combination of the indicators, i.e., a proxy or stand-in for the true score/common factor. Historically, however, the test score is generally assumed to be a simple sum score, i.e., a weighted sum of indicators with all weights assumed to be equal to one. Hence, well-known reliability measures such as Jöreskog's $\rho$ or Cronbach's $\alpha$ are defined with respect to a test score that indeed represents a simple sum score. Yet, all reliability measures originally developed assuming a sum score may equally well be computed with respect to a composite, i.e., a weighted score with weights not necessarily equal to one.

Apart from the distinction between congeneric (i.e., Jöreskog's $\rho$) and tau-equivalent reliability (i.e., Cronbach's $\alpha$), we therefore distinguish between reliability estimates based on a test score (composite) that uses the weights of the weighting approach used to obtain .object and a test score (proxy) based on unit weights. The former is indicated by adding "weighted" to the original name.

Formulae

The most general formula for reliability is the (weighted) congeneric reliability:

$$ \rho_{C; \text{weighted}} = \frac{Var(\bar\eta)}{Var(\hat\eta_k)} = \frac{(\mathbf{w}'\boldsymbol{\lambda})^2}{\mathbf{w}'\boldsymbol{\Sigma}\mathbf{w}} $$

Assuming $\mathbf{w} = \boldsymbol{\iota}$, i.e., unit weights, the "classical" formula for the congeneric reliability (i.e., Jöreskog's $\rho$) follows:

$$ \rho_C = \frac{Var(\bar\eta)}{Var(\hat\eta_k)} = \frac{\left(\sum\lambda_k\right)^2}{\left(\sum\lambda_k\right)^2 + Var(\bar\varepsilon)} $$

Using the assumptions imposed by the tau-equivalent measurement model, we obtain the (weighted) tau-equivalent reliability, i.e., (weighted) Cronbach's alpha:

$$ \rho_{T; \text{weighted}} = \frac{\lambda^2(\sum w_k)^2}{\lambda^2(\sum w_k)^2 + \sum w_k^2\, Var(\varepsilon_k)} = \frac{\bar\sigma_x(\sum w_k)^2}{\bar\sigma_x[(\sum w_k)^2 - \sum w_k^2] + \sum w_k^2\, Var(x_k)} $$

where we used the fact that if $\lambda_k = \lambda$ (tau-equivalence), $\lambda^2$ equals the average covariance between indicators: $\bar\sigma_x = \frac{1}{K(K-1)}\sum^K_{k=1}\sum^K_{l=1} \sigma_{kl}$. Again, assuming $w_k = 1$, i.e., unit weights, the "classical" formula for the tau-equivalent reliability (Cronbach's $\alpha$) follows:

$$ \rho_T = \frac{\lambda^2 K^2}{\lambda^2 K^2 + \sum Var(\varepsilon_k)} = \frac{\bar\sigma_x K^2}{\bar\sigma_x[K^2 - K] + K\, Var(x_k)} $$

Using the assumptions imposed by the parallel measurement model, we obtain the parallel reliability:

$$ \rho_P = \frac{\lambda^2(\sum w_k)^2}{\lambda^2(\sum w_k)^2 + Var(\varepsilon)\sum w_k^2} = \frac{\bar\sigma_x(\sum w_k)^2}{\bar\sigma_x[(\sum w_k)^2 - \sum w_k^2] + Var(x)\sum w_k^2} $$

In cSEM indicators are always standardized and weights are chosen such that $Var(\hat\eta_k) = 1$. This is done by scaling the weight vector $\mathbf{w}$ by $(\mathbf{w}'\boldsymbol{\Sigma}\mathbf{w})^{-\frac{1}{2}}$. This simplifies the formulae:

$$ \begin{aligned} \rho_{C; \text{weighted}} &= \bigl(\textstyle\sum w_k\lambda_k\bigr)^2 = (\mathbf{w}'\boldsymbol{\lambda})^2 \\ \rho_{T; \text{weighted}} = \rho_{P; \text{weighted}} &= \bar\rho_x\bigl(\textstyle\sum w_k\bigr)^2 \end{aligned} $$

where $\bar\rho_x = \bar\sigma_x$ is the average correlation between indicators. Consequently, parallel and tau-equivalent reliability are always identical in cSEM.
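As a numerical sketch of these simplified formulae for a single indicator block (the loadings, weights, and correlation matrix below are made-up illustrative values, not cSEM output):

lambda  <- c(0.82, 0.77, 0.70)                      # standardized loadings (illustrative)
w       <- c(0.45, 0.40, 0.35)                      # unscaled weights (illustrative)
S_block <- matrix(c(1.00, 0.63, 0.57,
                    0.63, 1.00, 0.54,
                    0.57, 0.54, 1.00), nrow = 3)    # indicator correlations (illustrative)

# Scale the weights such that Var(composite) = w' S w = 1
w <- w / sqrt(drop(t(w) %*% S_block %*% w))

rho_C_weighted <- drop(t(w) %*% lambda)^2           # (w'lambda)^2
rho_bar        <- mean(S_block[lower.tri(S_block)]) # average indicator correlation
rho_T_weighted <- rho_bar * sum(w)^2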

So far, the formulae have been motivated theoretically. Since $\boldsymbol{\Sigma}$ is unknown, it can be replaced by $\mathbf{S}$ (the empirical indicator correlation matrix) or $\hat{\boldsymbol{\Sigma}}$ (the model-implied indicator correlation matrix); however, $\mathbf{S}$ and $\hat{\boldsymbol{\Sigma}}$ are generally not equal. The practical implication is that if $\rho_{C}$ is computed as $(\mathbf{w}'\boldsymbol{\lambda})^2$ using unit weights, the weights can in fact be scaled by either $(\mathbf{w}'\mathbf{S}\mathbf{w})^{-\frac{1}{2}}$ or $(\mathbf{w}'\hat{\boldsymbol{\Sigma}}\mathbf{w})^{-\frac{1}{2}}$! Similarly, $\rho_{C; \text{weighted}}$ can be computed using weights scaled by either $\mathbf{S}$ or $\hat{\boldsymbol{\Sigma}}$. Consequently, there are in fact four types of congeneric reliability depending on the type of weights and the type of scaling for the weights. Hence, the calculation of "the" congeneric reliability is always $(\mathbf{w}'\boldsymbol{\lambda})^2$, where $\mathbf{w}$ can be:

  1. a vector of unit weights scaled by $(\mathbf{w}'\hat{\boldsymbol{\Sigma}}\mathbf{w})^{-\frac{1}{2}}$. This is typically what people refer to as the congeneric reliability (Jöreskog's $\rho$). We label this type of reliability estimate $\rho_C$.
  2. a vector of unit weights scaled by $(\mathbf{w}'\mathbf{S}\mathbf{w})^{-\frac{1}{2}}$. This has no known name. Its usefulness is an open question. We label this type of reliability estimate $\rho_{C;mm}$.
  3. a vector of weights obtained using a composite-based estimator (e.g., PLS-PM) scaled by $(\mathbf{w}'\mathbf{S}\mathbf{w})^{-\frac{1}{2}}$. This is Dijkstra-Henseler's $\rho_A$. We label this type of reliability estimate $\rho_{C;\text{weighted}}$.
  4. a vector of weights obtained using a composite-based estimator (e.g., PLS-PM) scaled by $(\mathbf{w}'\hat{\boldsymbol{\Sigma}}\mathbf{w})^{-\frac{1}{2}}$. This has no known name. Its usefulness is an open question. We label this type of reliability estimate $\rho_{C;\text{weighted};mm}$.

A note on the terminology

A vast bulk of literature dating back to seminal work by Spearman (e.g., Spearman (1904)) has been written on the subject of reliability. Inevitably, definitions, formulae, notation and terminology conventions are unsystematic and confusing. This is particularly true for newcomers to structural equation modeling or applied users whose primary concern is to apply the appropriate method to the appropriate case without poring over books and research papers to understand each intricate detail.

In cSEM we seek to make working with reliabilities as consistent as possible by relying on a paper by Cho (2016), who proposed uniform formula-generating methods and systematic naming conventions for all common reliability measures. Naturally, some of the conventional terminology is deeply entrenched within the nomenclature of a particular field (e.g., coefficient alpha alias Cronbach's alpha in psychometrics), such that a new, albeit consistent, naming scheme may seem superfluous at best. However, we believe that the merit of a "standardized" naming pattern will eventually be helpful to all users, as it helps clarify potential misconceptions and thus prevents potential misuse, such as the (ab)use of Cronbach's alpha as a reliability measure for congeneric measurement models.

Apart from these considerations, this package takes a pragmatic stance in the sense that we use the consistent naming because it naturally provides a consistent naming scheme for the functions, and the systematic formula-generating methods because they make code maintenance easier. Eventually, what matters is the formula and, more so, its correct application. To facilitate the translation between different naming systems and conventions, we provide a "translation table" below:

Systematic names and common synonymous names for the reliability estimates found in the literature:

| Systematic name | Mathematical notation | Synonymous terms |
|---|---|---|
| Parallel reliability | $\rho_P$ | Spearman-Brown formula, Spearman-Brown prophecy, Standardized alpha, Split-half reliability |
| Tau-equivalent reliability | $\rho_T$ | Cronbach's alpha, $\alpha$, Coefficient alpha, Guttman's $\lambda_3$, KR-20 |
| Tau-equivalent reliability (weighted) | $\rho_{T;\text{weighted}}$ | |
| Congeneric reliability | $\rho_C$ | Composite reliability, Jöreskog's $\rho$, Construct reliability, $\omega$, Reliability coefficient, Dillon-Goldstein's $\rho$ |
| Congeneric reliability (weighted) | $\rho_{C;\text{weighted}}$ | Dijkstra-Henseler's $\rho_A$ |

Closed-form confidence interval

Trinchera, Marie, and Marcoulides (2018) proposed a closed-form confidence interval (CI) for the tau-equivalent reliability (Cronbach’s alpha). To compute the CI, set .closed_form_ci = TRUE when calling assess() or invoke calculateRhoT(..., .closed_form_ci = TRUE) directly. The level of the CI can be changed by supplying a single value or a vector of values to .alpha.
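For example (res denotes a cSEMResults object as in the earlier sketch; the argument names follow the description above):

# Closed-form CI for the tau-equivalent reliability at significance levels 0.05 and 0.01
assess(res, .quality_criterion = "rho_T", .closed_form_ci = TRUE, .alpha = c(0.05, 0.01))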

Implementation

The functions are implemented as calculateRhoC() and calculateRhoT().

The Goodness of Fit (GoF)

Definition

Calculate the Goodness of Fit (GoF) proposed by Tenenhaus, Amato, and Esposito Vinzi (2004). Note that, contrary to what the name suggests, the GoF is not a measure of (overall) model fit in a $\chi^2$-fit test sense. See, e.g., Henseler and Sarstedt (2012) for a discussion.

Formulae

The GoF is defined as:

$$ \text{GoF} = \sqrt{\overline{\text{COM}} \times \overline{R^2_{\text{structural}}}} = \sqrt{\left(\frac{1}{K}\sum^K_{k=1} \lambda^2_k\right) \times \left(\frac{1}{M} \sum^M_{m = 1} R^2_{m;\text{structural}}\right)} $$

where $\overline{\text{COM}} = \frac{1}{K}\sum COM_k$ is the average communality, $COM_k$ the communality of indicator $k$ (i.e., the variance in the indicator that is explained by its connected latent variable), and $R^2_{m;\text{structural}}$ the R squared of the $m$'th equation of the structural model.
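A direct numerical translation (the loadings and structural R² values below are made up for illustration):

lambda <- c(0.81, 0.76, 0.69, 0.84, 0.79, 0.72)  # standardized loadings of all common factor indicators (illustrative)
r2     <- c(0.45, 0.38)                          # R^2 of the structural equations (illustrative)

gof <- sqrt(mean(lambda^2) * mean(r2))           # sqrt(average communality x average structural R^2)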

Implementation

The function is implemented as: calculateGoF().

The Heterotrait-Monotrait-Ratio of Correlations (HTMT)

Definition

The heterotrait-monotrait ratio of correlations (HTMT) was first proposed by Henseler, Ringle, and Sarstedt (2015) to assess convergent and discriminant validity.

Formulae

See: Henseler, Ringle, and Sarstedt (2015) on page 121 (equation (6))

Implementation

The function is implemented as: calculateHTMT().
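To convey the idea behind equation (6), namely the average heterotrait-heteromethod correlation relative to the geometric mean of the average monotrait-heteromethod correlations, here is a rough two-construct sketch. S1 and S2 are assumed to be the within-block indicator correlation matrices, S12 the between-block correlations; this is a simplification for illustration only, not the calculateHTMT() implementation:

# HTMT for two constructs (rough sketch)
htmt_2 <- function(S1, S2, S12) {
  heterotrait <- mean(S12)                 # average correlation between indicators of different constructs
  monotrait_1 <- mean(S1[lower.tri(S1)])   # average correlation among indicators of construct 1
  monotrait_2 <- mean(S2[lower.tri(S2)])   # average correlation among indicators of construct 2
  heterotrait / sqrt(monotrait_1 * monotrait_2)
}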

Literature

Bentler, Peter M. 1990. “Comparative Fit Indexes in Structural Models.” Psychological Bulletin 107 (2): 238–46.
———. 2006. EQS 6 Structural Equations Program Manual (version 6). Encino, CA: Multivariate Software, Inc.
Bentler, Peter M., and Douglas G. Bonett. 1980. “Significance Tests and Goodness of Fit in the Analysis of Covariance Structures.” Psychological Bulletin 88 (3): 588–606.
Bollen, Kenneth A. 1989. Structural Equations with Latent Variables. Wiley-Interscience.
Browne, Michael W., and Robert Cudeck. 1992. “Alternative Ways of Assessing Model Fit.” Sociological Methods & Research 21 (2): 230–58.
Cho, Eunseong. 2016. “Making Reliability Reliable.” Organizational Research Methods 19 (4): 651–82. https://doi.org/10.1177/1094428116656239.
Fornell, C., and D. F. Larcker. 1981. “Evaluating Structural Equation Models with Unobservable Variables and Measurement Error.” Journal of Marketing Research XVIII: 39–50.
Henseler, Jörg, Christian M. Ringle, and Marko Sarstedt. 2015. “A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modeling.” Journal of the Academy of Marketing Science 43 (1): 115–35. https://doi.org/10.1007/s11747-014-0403-8.
Henseler, Jörg, and Marko Sarstedt. 2012. “Goodness-of-Fit Indices for Partial Least Squares Path Modeling.” Computational Statistics 28 (2): 565–80. https://doi.org/10.1007/s00180-012-0317-1.
Hu, Li-tze, and Peter M. Bentler. 1999. “Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives.” Structural Equation Modeling 6 (1): 1–55.
Jöreskog, K. G. 1969. “A General Approach to Confirmatory Maximum Likelihood Factor Analysis.” Psychometrika 34 (2): 183–202. https://doi.org/10.1007/bf02289343.
Jöreskog, Karl G., and Dag Sörbom. 1982. “Recent Developments in Structural Equation Modeling.” Journal of Marketing Research 19 (4): 404–16.
Lohmöller, Jan-Bernd. 1989. Latent Variable Path Modeling with Partial Least Squares. Physica, Heidelberg.
Marsh, Herbert W., Kit-Tai Hau, and Zhonglin Wen. 2004. “In Search of Golden Rules: Comment on Hypothesis-Testing Approaches to Setting Cutoff Values for Fit Indexes and Dangers in Overgeneralizing Hu and Bentler’s (1999) Findings.” Structural Equation Modeling: A Multidisciplinary Journal 11 (3): 320–41. https://doi.org/10.1207/s15328007sem1103_2.
Mulaik, Stanley A., Larry R. James, Judith Van Alstine, Nathan Bennett, Sherri Lind, and C. Dean Stilwell. 1989. “Evaluation of Goodness-of-Fit Indices for Structural Equation Models.” Psychological Bulletin 105 (3): 430–45. https://doi.org/10.1037/0033-2909.105.3.430.
Sharma, Pratyush, Marko Sarstedt, Galit Shmueli, Kevin H. Kim, and Kai O. Thiele. 2019. “PLS-Based Model Selection: The Role of Alternative Explanations in Information Systems Research.” Journal of the Association for Information Systems 20 (4).
Tanaka, J. S., and G. J. Huba. 1985. “A Fit Index for Covariance Structure Models Under Arbitrary GLS Estimation.” British Journal of Mathematical and Statistical Psychology 38 (2): 197–201. https://doi.org/10.1111/j.2044-8317.1985.tb00834.x.
Tenenhaus, Michel, Silvano Amato, and Vincenzo Esposito Vinzi. 2004. “A Global Goodness-of-Fit Index for PLS Structural Equation Modelling.” In Proceedings of the XLII SIS Scientific Meeting, 739–42.
Trinchera, Laura, Nicolas Marie, and George A. Marcoulides. 2018. “A Distribution Free Interval Estimate for Coefficient Alpha.” Structural Equation Modeling: A Multidisciplinary Journal 25 (6): 876–87. https://doi.org/10.1080/10705511.2018.1431544.

  1. There are some suggested cutoffs, e.g., that the SRMR should be less than 0.08 or 0.1; however, these values are essentially arbitrary as they have never been formally motivated. Reference is usually made to Hu and Bentler (1999), who based the cutoffs on a simulation using factor-based SEM.