Objectivity

In this context, objectivity means that the results of a test, whether the numeric scores or their interpretation, cannot be influenced by the examiner. Our standardised evaluation and interpretation mechanisms ensure maximum objectivity. All examiners are extensively trained and receive mandatory certification to ensure objectivity as defined in the relevant standards.

Reliability

Reliability refers to the degree of accuracy with which a test measures a specific psychological trait. It primarily describes the extent to which test results are free from measurement error. Decisive factors are the degree to which the individual parts of the test (items) measure the same attribute (internal consistency) and the degree of agreement between two measurements of the same attribute using the same procedure (retest reliability). Because all our test procedures offer outstanding reliability, we are able to guarantee an extremely high degree of stability and measurement accuracy. Please feel free to contact us if you would like exact figures.
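
To make the two aspects more concrete, the sketch below shows how internal consistency (Cronbach's alpha) and retest reliability (the correlation between two administrations of the same test) are commonly estimated. It is a generic illustration in Python with hypothetical inputs, not an excerpt from our evaluation software.

    import numpy as np

    def cronbach_alpha(item_scores: np.ndarray) -> float:
        """Internal consistency for an (n_persons, n_items) matrix of item scores."""
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1)
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    def retest_reliability(scores_time1: np.ndarray, scores_time2: np.ndarray) -> float:
        """Retest reliability: correlation between two administrations of the same test."""
        return float(np.corrcoef(scores_time1, scores_time2)[0, 1])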

Validity

Validity refers to the degree to which a test actually measures what it intends and claims to measure.

There are different concepts of validity:

1. Content validity
Content validity refers to the degree to which the content of a test procedure is a representative sample of the behaviour domain to be measured. Content-related evidence typically involves subject matter experts evaluating test items against the test specification. The content validity of a test can be considered good when the individual items cover a representative sample of the behaviour domain to be evaluated. For example, it is valid to test an individual's knowledge of maths by means of a calculation exercise, or to determine the suitability of a driver by means of a test drive. By contrast, it would not be valid to measure a candidate's intelligence by subjecting them to a knowledge test. We ensure that our procedures offer the highest degree of content validity by having them continually reviewed by renowned experts.
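
Expert review of this kind can also be quantified. One widely used index is Lawshe's content validity ratio (CVR), sketched below for a single item; this is a general illustration of the idea, not a description of our own review process.

    def content_validity_ratio(n_essential: int, n_experts: int) -> float:
        """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 (reject) to +1 (keep)."""
        return (n_essential - n_experts / 2) / (n_experts / 2)

    # Hypothetical example: 9 of 10 subject matter experts rate an item as essential.
    print(content_validity_ratio(n_essential=9, n_experts=10))  # 0.8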

2. Construct validity
A test has construct validity if its scores can be shown to reflect the theoretical trait it is intended to measure. This allows conclusions to be drawn from a test person's results about psychological traits such as skills, attributes or characteristics. Statistical methods, such as factor analysis or the multitrait-multimethod approach of Campbell and Fiske, allow sound assertions about construct validity. It is also important to verify that a test actually measures the intended trait and not other traits. This can be demonstrated by showing that the results correlate with those of other procedures measuring similar traits (convergent validity) and diverge from those of procedures measuring dissimilar traits (discriminant validity). Please contact us if you require further details of the statistical methods and tests.
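
The sketch below shows the convergent/discriminant logic in its simplest form: scores on a new test are correlated with an established measure of the same trait and with a measure of an unrelated trait. The names and the threshold mentioned in the comment are illustrative assumptions; a full Campbell and Fiske multitrait-multimethod analysis goes considerably further.

    import numpy as np

    def construct_validity_evidence(new_test, established_same_trait, unrelated_trait):
        """Correlate a new test with a measure of the same trait (convergent)
        and with a measure of a dissimilar trait (discriminant)."""
        convergent = float(np.corrcoef(new_test, established_same_trait)[0, 1])
        discriminant = float(np.corrcoef(new_test, unrelated_trait)[0, 1])
        return convergent, discriminant

    # Construct validity is supported when the convergent correlation is high
    # (e.g. above 0.6) and the discriminant correlation is close to zero.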

3. Criterion validity
A test has criterion validity if a test person's results correspond to relevant external criteria, such as professional or academic success. For example, if prospective students take an aptitude test and those who pass with high marks go on to be successful students, while those who perform poorly also under-perform during their studies, the test has criterion validity. Our test procedures are developed in such a way that they can predict professional performance and other aptitudes and skills very accurately. We conduct research on these topics in co-operation with a range of development partners.
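
As a minimal numeric illustration of a predictive validity coefficient, the sketch below correlates aptitude-test scores with a later measure of study success. The variables and figures are invented for illustration and do not come from our validation studies.

    import numpy as np

    # Hypothetical data: aptitude-test scores at selection and a later measure
    # of study success for the same six people.
    test_scores   = np.array([105, 98, 120, 88, 110, 95])
    study_success = np.array([2.8, 2.5, 3.6, 2.1, 3.2, 2.6])

    validity_coefficient = np.corrcoef(test_scores, study_success)[0, 1]
    print(f"predictive validity: r = {validity_coefficient:.2f}")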

Calibration

The purpose of calibrating a procedure is to obtain useful and valid reference values from comparison samples of people who are as similar as possible to the test person with regard to relevant characteristics (e.g. age, sex, education, professional sector). A test person can then be assessed in relation to the appropriate comparison group, the so-called reference population.

For example, a sales person can be compared to other sales persons in the same sector. As well as the common method of group comparisons with the appropriate reference population, our procedures also support criterion-oriented diagnostics, meaning that other comparison standards, such as best practice or benchmarking, can also be applied. In this case, our sales person can be compared not only with the average of other sales persons, but also with demonstrably successful sales persons.
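
The sketch below shows the basic arithmetic of norm-referenced scoring: a raw score is converted into a standard score and a percentile rank relative to a matched reference population. The norm-group figures are invented for illustration; a benchmarking comparison would simply use a reference group of demonstrably successful sales persons instead.

    import math

    def z_score(raw_score: float, ref_mean: float, ref_sd: float) -> float:
        """Standard score of a raw score relative to the reference population."""
        return (raw_score - ref_mean) / ref_sd

    def percentile_rank(z: float) -> float:
        """Percentile rank, assuming approximately normally distributed norm scores."""
        return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Hypothetical norm group of sales persons in the same sector: mean 50, SD 10.
    z = z_score(raw_score=62, ref_mean=50, ref_sd=10)
    print(f"percentile rank: {percentile_rank(z):.1f}")  # about 88.5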

Economy

A test meets the criterion of economy if, relative to the evaluation and interpretation options it provides, it is not costly or time consuming. Our fast evaluation methods are an extremely cost-effective solution, particularly when measured against the potential cost of a mis-hire, and may in fact save you considerable investment costs. Poor hires can prove extremely costly for an organisation in terms of time, money and reputation, and even costlier in specialised areas, such as key account management, or at management level. By comparison, the level of investment in a sound HR diagnostic plan is relatively low, and the ROI extremely high.