Pedagogical Research
Research Article
2018, 3(1), Article No: 02

Effect of Group versus Individual Assessments on Coursework among Undergraduates in Tanzania: Implications for Continuous Assessments in Universities

Published online: 21 Feb 2018

Abstract

The study analyzes students’ performance scores in formative assessments administered in individual and group settings. A case study design with a quantitative approach was adopted to extract data on 198 undergraduate students. Data were analyzed quantitatively using descriptive statistics (means and frequencies), Spearman correlations, multiple regression and independent samples t-tests. The findings show that students perform better in groups than in individual settings, as evidenced by a weak, negative monotonic correlation between test scores and randomized group assessment scores (rs = -.318, p < .001). Further, students’ scores in randomized groups increased with the number of members in a group. Moreover, both tests and group assignments had a statistically significant effect on coursework scores; however, the scores from randomized groups had the highest effect on coursework (R2 = .186). The results confirm that randomized group assessments are better than group assignments that students choose themselves, though both outperform individual tests. The study recommends further studies across all assessment categories, reflecting both group and individual settings, to broaden understanding of the efficacy of learning assessments in universities.

INTRODUCTION

Universities are of utmost significance in developing the qualified human capital that catalyzes national development across countries worldwide (Mohamedbhai, 2014; Mtahabwa, 2016). However, burgeoning demand for access to higher education, and to universities in particular, raises serious concerns about a number of quality proxies that are significant factors in quality assurance in universities. Such concern is manifested in the quantity-quality conundrum, currently a highly debated glocal discourse (Maher, 2007; Mtahabwa, 2016). The discourse emanates from arguments about quality service provision in Higher Education Institutions (HEIs), and universities are no exception. Studies on HEIs in Tanzania reveal discrepancies in infrastructural capacity, availability of qualified and experienced academic staff, class size, quality of admitted students, curricula content and approaches, and finances, to cite a few (Materu, 2007; Ishengoma, 2011; Mtahabwa, 2016; Mbalamula, 2017).

By and large, demographic changes in the student population have raised solemn concerns about the quality of knowledge being transacted in lecture halls, about whether our students learn what we teach, and in particular about how students can be assessed formatively (Alnuaimi et al., 2010; Mosha, 2012; William, 2013; Rich et al., 2014). Assessment of students’ learning is among the areas most significantly affected, which calls for advocacy of “rethinking” and “improvement” of assessment processes in HEIs (Mosha, 2004; Bali, 2012; Binde, 2012). Empirical studies reveal that formative assessments have been neither used adequately nor practiced systematically in universities (Takiguchi et al., 2012). The term assessment broadly connotes testing and examinations in various forms and types in which students’ learning is checked against pre-determined educational objectives or goals (Roediger et al., 2011; Binde, 2012; William, 2013).

Against the backdrop of the reforms in HEIs in Tanzania, many universities and other tertiary institutions currently operate in a semesterized mode whereby teaching and learning proceed in approximately seventeen weeks (equivalent to four months) per semester of the academic-year teaching cycle. A typical semester comprises formative assessments in the form of individual tests, term papers, experiments/studios, projects and quizzes, and group assessments such as field work. The assessments are administered either simultaneously or in series and cumulatively constitute the coursework, an aggregate of all formative assessments in a semester. Normally, the coursework carries 40%, and the final University Examination (UE) administered at the end of the semester carries 60%. Variations in the type and form of assessments and the percentages allocated to them exist depending on the nature of the course, the degree program and other contextual factors. In most cases, details of assessment modes and credit allocation are stipulated in the respective course outline, a curriculum document setting out all modules and topics, the mode of delivery, and the assessment and evaluation of the course.
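To make the weighting concrete, the coursework/UE aggregation described above can be sketched as follows. This is a minimal illustration of the 40%/60% split stated in the text; the function name and the example marks are hypothetical, not taken from any university regulation.

```python
def final_grade(coursework: float, university_exam: float) -> float:
    """Aggregate a semester grade from coursework (worth 40%) and the
    final University Examination (worth 60%), both marked out of 100."""
    return 0.40 * coursework + 0.60 * university_exam

# Hypothetical student: 70/100 on coursework, 55/100 on the UE.
print(final_grade(70, 55))  # 0.40*70 + 0.60*55 = 61.0
```

The same weights apply uniformly, so a strong coursework score can only partially offset a weak UE performance, and vice versa.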

As noted earlier, large classes in many HEIs in Tanzania pose a sizeable challenge that impedes faculty from effectively executing individual and group formative assessments (Mosha, 2012; Osaki, 2012). Yet formative assessment remains central to informing effective instruction among faculty and the university at large (William, 2013). Studies in developed countries highlight assessment in universities as a categorical dilemma not only in its philosophy but also in its theoretical and pragmatic dimensions (Elton and Johnston, 2002). Such ambivalence extends to calls for investigation of learning assessments as reliable proxies for the accountability of university systems (Gudo et al., 2011; Ssebuwufu, Ludwick and Béland, 2012). The quest for such improvements in formative assessment responds not only to recent critics urging universities to improve their academic programmes, but also to the exacerbated skills mismatch currently reported by employment sectors (Ndyali, 2016; Mufuruki et al., 2017). Therefore, studies on assessment processes for students’ learning are pertinent to improving the broad spectrum of quality issues in HEIs in Tanzania and the world at large.

PURPOSE OF THE STUDY

Tanzanian HEIs encounter numerous contextual challenges which affect learning assessments owing to demographic, technical and professional shortcomings (Rich et al., 2014; Alnuaimi et al., 2010). This study was guided by two major objectives: (i) to investigate students’ performance scores in Individual Assessments and Group Assessments, and (ii) to investigate the proportionate effect of students’ performance in Individual Assessments and Group Assessments on Coursework. The two research questions hereunder were adopted.

  1. How do students’ performance scores in Individual Tests compare with those attained in Group Assignments?

  2. What is the proportionate effect of Individual Tests and Group Assignments on Coursework?

METHODS AND MATERIALS

A Case Study Design was used to investigate the differences in performance scores between two Individual Tests (ITs) and three Group Assignments (GAs) on the Coursework (CW). A sample of 198 students was extracted from the 217 students enrolled in one undergraduate course (Israel, 2012). Table 1 shows that of the 198 respondents, the majority (61.6%) were males and females constituted only 38.4%. Also, of the 198, the majority (86.9%) were pre-service and 13.1% were in-service. The data were collected from the Lecturer’s Assessment Records (ITs and GAs), comprising the 198 students’ scores on all administered assessments as shown in Table 2. The data were then coded and analyzed using descriptive statistics, Spearman rank-order correlation, multiple regression and independent samples t-tests. The sample was organized into three categories of groups, as highlighted in Table 1. Firstly, the first category of ten (10) groups of ten (10) students each was devised for Seminar Presentations (GA1), where students were allowed to choose a group not exceeding 10 members at their own discretion. Then, the second category (GA2) of fifteen (15) groups of seven (7) students each was constituted, and lastly, the third category (GA3) of seven (7) groups of fifteen (15) students.

Table 1. Profile of the Respondents

Variable          Frequency (f)   Percentage (%)
Gender
  Male                  122            61.6
  Female                 76            38.4
Work Status
  Pre-service           172            86.9
  In-service             26            13.1

Therefore, unlike in the first category, students’ placement in the second and third categories was executed randomly. While the randomization strategy enabled the researcher to develop three different prototypes of Students’ Group Activity Scores (SGASs), the two tests administered to students in series provided two prototypes of Students’ Individual Test Scores (SITSs). Both SGASs and SITSs were useful proxies for investigating differences in students’ performance and their effects on the Students’ Coursework Aggregate (SCA). The details of the whole sampling procedure and administration are provided in Table 2.
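The random placement used for the second and third categories can be illustrated with a short sketch. The student IDs, seed, and function are hypothetical, not the researcher’s actual procedure; and whereas the article’s categories count 15 groups of 7 and 7 groups of 15, this sketch simply shows the general mechanism of partitioning a shuffled cohort into groups of a target size.

```python
import random

def randomize_groups(student_ids, group_size, seed=1):
    """Shuffle the cohort and cut it into consecutive groups of group_size
    (the final group may be smaller if the cohort does not divide evenly)."""
    rng = random.Random(seed)          # fixed seed so the allocation is reproducible
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    return [shuffled[i:i + group_size]
            for i in range(0, len(shuffled), group_size)]

students = list(range(1, 199))                 # 198 sampled students (hypothetical IDs)
groups_of_7 = randomize_groups(students, 7)    # GA2-style group size
groups_of_15 = randomize_groups(students, 15)  # GA3-style group size
print(len(groups_of_7), len(groups_of_15))     # 29 14
```

Because every student appears in exactly one group per allocation, group-level scores can later be joined back to individual records without ambiguity.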

Table 2. Administration of the Assessments

                                   Tests               Group Assessments
Type of Assignment                 T1       T2         GA1        GA2       GA3
Time from first day of Teaching    Week 9   Week 13    Week 6-15  Week 10   Week 11
Number of Question Items           14       11         1          5         8
Number of candidate(s)             1        1          10         7         15
Duration for the Assessment        1hr      1hr        1hr        120hrs    120hrs

DATA ANALYSIS AND FINDINGS

Descriptive statistics, Spearman rank-order correlation, and independent samples t-tests were used to analyze the data and establish the association that the SITSs and SGASs had with the dependent SCAs. While the Spearman correlation tests were used to establish monotonic relationships (Mukaka, 2012), the t-test was used to check for significant differences between the variables. Therefore, the data collected were coded and ranked to suit the statistical measures used.

Table 3 presents the analysis of data derived from the continuous assessments, which amounted to 40% of the whole course assessment in a semester. The results from individual testing show students scored relatively higher in the first sitting (maximum score of 4; M=2.74, SD=.77) than in the second (maximum score of 3; M=1.86, SD=.45), while the average of the two tests was higher still (maximum score of 5; M=2.79, SD=.88). Also, the average test results show positive skewness (.47), compared to the negative skewness of both individual tests (-.34 and -.59 respectively).

Table 3. Descriptive Assessment Statistics (N=198)

                 Individual Assessments          Group Assessments               Coursework
Statistic Unit   T1      T2      Av.T1+T2        Group (self)   Group (random)   CA
Mean             2.74    1.86    2.79            3.09           3.13             2.71
Mode             3.00    2.00    2.00            3.00           3.00             3.00
Std. deviation   .77     .45     .88             .29            .55              .45
Skewness         -.34    -.59    .47             2.87           .08              -.94
Minimum          1.00    1.00    1.00            3.00           2.00             2.00
Maximum          4.00    3.00    5.00            4.00           4.00             3.00

In addition, the results show students scored higher in the group assignments to which they were randomly allocated (M=3.13, SD=.55) than in those they chose themselves (M=3.09, SD=.29). In the same vein, the scores of self-chosen groups were more positively skewed (2.87) than those of randomly allocated groups (.08). Moreover, students accumulated lower scores in the CA (M=2.71, SD=.45) than either the average mean score of individual testing (M=2.79, SD=.88) or the two group mean scores (M=3.09, SD=.29; M=3.13, SD=.55).
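The descriptives behind Table 3 (mean, standard deviation, skewness) follow standard formulas. A stdlib-only sketch on synthetic marks (not the study’s data) shows the computation, using the population form of the Fisher-Pearson skewness coefficient:

```python
from statistics import mean, pstdev

def skewness(xs):
    """Fisher-Pearson skewness (population form): E[(x - mu)^3] / sigma^3.
    Positive values indicate a long right tail, negative a long left tail."""
    mu, sigma, n = mean(xs), pstdev(xs), len(xs)
    return sum((x - mu) ** 3 for x in xs) / (n * sigma ** 3)

print(skewness([1, 1, 1, 1, 5]))   # ≈ 1.5 (strong right skew)
print(skewness([1, 2, 3]))         # 0.0 (symmetric)
```

A markedly positive skewness, such as the 2.87 reported for the self-chosen groups, indicates that most scores bunch at the low end of that distribution with a few high outliers.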

The general inference drawn from Table 3 is that the declining scores across the individual tests may suggest, on the one hand, flaws in the design of one or both tests, and, on the other, problems attributable to various student factors related to learning. Also, the results show students perform better in groups to which they are allocated at random, which may indicate the influence of factors specific to group settings. In the same vein, the higher maximum of the averaged test scores (5) compared to the group scores (maximum of 4) may lend support to that contention. Moreover, the general results indicate that students perform better in groups than as individuals, as depicted by more students attaining half or more (≥50%) of the total scores allocated to the respective assessment mode. Figure 1 provides the general pattern of the total number of students who attained different proportions of the total score percentage allocated to, and expected of them in, the different continuous assessments.

Figure 1. General Pattern of Students in Various Assessments

SITSs-SGASs Differentials

The results provided in Table 4 are derived from the data analysis using Spearman's correlation computed to determine the relationship between SITSs (Av.T1+T2) and SGASs (Group (self), Group (random)).

Table 4. Spearman Correlations between SITSs and SGASs

                                                       SGASs (self)   SGASs (random)
Spearman’s rho   Av.T1+T2   Correlation Coefficient    -.047          -.318**
                            Sig. (2-tailed)            .514           .000
                            N                          198            198

**. Correlation is significant at the 0.01 level (2-tailed).

The results show a weak, negative monotonic correlation between SITSs and SGASs (random) (rs = -.318, n = 198, p < .001), and no significant relationship between SITSs and SGASs (self) (rs = -.047, n = 198, p = .514). The results indicate that individual students’ test scores are more related to the aggregate scores students attained in group assignments when they were allocated randomly to groups of different sizes. This may suggest the presence of extraneous factors that influence students’ performance not only as individuals but also when they either choose groups themselves or are placed at random.
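For readers who wish to verify such coefficients, Spearman’s rs for tie-free data reduces to 1 − 6Σd²/(n(n² − 1)), where d is the difference between paired ranks. A stdlib-only sketch on toy data (not the study’s scores) follows:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation, rs = 1 - 6*sum(d^2) / (n*(n^2 - 1)),
    valid when neither variable contains tied values."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone-decreasing pairs give rs = -1; the study's value of
# -.318 corresponds to a much weaker negative monotonic trend.
print(spearman_rho([1, 2, 3, 4, 5], [10, 8, 6, 4, 2]))  # -1.0
```

With tied ranks (common in coursework marks), statistical packages instead compute Pearson’s correlation on mid-ranks, which this simplified formula does not handle.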

An Independent Samples t-test was then conducted to analyze the difference between students’ performance scores under the two randomized group sizes. The results are provided in Table 5. The analysis showed a significant difference in students’ scores between the randomly allocated group sizes (t(196) = -4.66, p < .001), with students in groups of fifteen performing better (M=3.29, SD=.46) than those in groups of seven (M=2.95, SD=.58). In that regard, random allocation together with an increase in group size had a positive effect on students’ attainment of higher scores. Generally, this suggests not only that random allocation of students to group tasks supports individual performance, but also that an increase in group membership is likely to help students perform and score higher.
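The statistic in the “equal variances assumed” row of such output is the standard pooled-variance t; a stdlib-only sketch on synthetic samples (not the study’s scores) shows its computation:

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance,
    df = n1 + n2 - 2 (the 'equal variances assumed' case)."""
    n1, n2 = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

t, df = pooled_t([1, 2, 3], [4, 5, 6])   # synthetic groups
print(round(t, 3), df)                    # -3.674 4
```

A negative t, as in Table 5, simply reflects that the first group (here the smaller-group condition) has the lower mean; the sign carries no information about significance.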

Table 5. Differentials in SGASs (random)

Group Statistics
SGASs (random)           N     Mean     Std. Deviation   Std. Error Mean
7 students per group     91    2.9451   .58429           .06125
15 students per group    107   3.2897   .45577           .04406

Independent Samples Test
                              Levene’s Test for         t-test for Equality of Means
                              Equality of Variances
SGASs (random)                F      Sig.    t        df        Sig. (2-tailed)  Mean Diff.  Std. Error Diff.  95% CI Lower  95% CI Upper
Equal variances assumed       .625   .430    -4.659   196       .000             -.34466     .07397            -.49055       -.19878
Equal variances not assumed                  -4.568   168.855   .000             -.34466     .07545            -.49361       -.19572

Coefficients(a)
Model                                                      Unstandardized B  Std. Error  Standardized Beta  t       Sig.
1  (Constant)                                              2.326             .104                           22.318  .000
   Test Average sum                                        .138              .036        .267               3.879   .000
2  (Constant)                                              1.658             .353                           4.701   .000
   Test Average sum                                        .141              .035        .273               3.991   .000
   Score attained in self-chosen group of ten (10%)        .213              .108        .135               1.980   .049
3  (Constant)                                              .543              .408                           1.333   .184
   Test Average sum                                        .199              .036        .384               5.578   .000
   Score attained in self-chosen group of ten (10%)        .244              .102        .155               2.383   .018
   Score in group assignment, group allocated at random    .275              .057        .330               4.789   .000

a. Dependent Variable: Cumulative total score achieved by the student at end of semester (40%)

Effect of SITSs versus SGASs on SCAs

A multiple regression was run to predict the effect of SITSs and SGASs on SCAs. The results presented in Table 6 indicate that all three variables added statistically significantly to the prediction of SCAs: SITSs (F(1, 196) = 15.044, p < .001) with an R2 of .071; SGASs (self) (F(2, 195) = 9.595, p < .001) with an R2 of .090; and SGASs (random) (F(3, 194) = 14.762, p < .001) with an R2 of .186. The results also indicate that SGASs (random) had the highest effect, 18.6% (R2 = .186), followed by SGASs (self) at 9.0% (R2 = .090), with individual tests lowest at 7.1% (R2 = .071). Table 6 also presents the descriptive mean scores of the three assessments, with the highest mean score (M=3.13) for SGASs (random) and the lowest (M=2.79) for individual assessments in the form of tests. The results suggest group assessments are better than individual assessments; however, group formation modalities and group size may affect group performance, as depicted by the higher mean score recorded for SGASs (random) (M=3.13) than for SGASs (self) (M=3.09).
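The R² values reported here measure the share of coursework variance the predictors explain; hierarchical entry then compares R² across nested models (.071 → .090 → .186). A stdlib-only sketch of a one-predictor fit on synthetic data (not the study’s scores) shows the underlying computation:

```python
from statistics import mean

def simple_ols(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    xbar, ybar = mean(x), mean(y)
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar, b

def r_squared(y, y_hat):
    """R^2 = 1 - SS_res / SS_tot: share of variance in y explained by y_hat."""
    ybar = mean(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

x = [1, 2, 3, 4, 5]            # e.g. test average (synthetic)
y = [2.1, 2.0, 3.2, 3.9, 4.1]  # e.g. coursework score (synthetic)
a, b = simple_ols(x, y)
preds = [a + b * xi for xi in x]
print(round(r_squared(y, preds), 3))  # ≈ 0.904
```

In the hierarchical setting, each added predictor can only keep or increase R²; the "R Square Change" column and its F test judge whether the increase is more than chance.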

Table 6. Results from Multiple Regressions of SITSs and SGASs on SCAs

Descriptive Statistics
                  Mean     Std. Deviation   N
SITSs             2.7879   .87576           198
SGASs (self)      3.0909   .28821           198
SGASs (random)    3.1313   .54533           198

Model Summary(d)
Model  R      R Square  Adjusted R Square  Std. Error of    R Square Change  F Change  df1  df2  Sig. F Change
                                           the Estimate
1      .267a  .071      .067               .43856           .071             15.044    1    196  .000
2      .299b  .090      .080               .43533           .018             3.921     1    195  .049
3      .431c  .186      .173               .41273           .096             22.938    1    194  .000

a. Predictors: (Constant), Test Average sum
b. Predictors: (Constant), Test Average sum, score attained in self-chosen group of ten (10%)
c. Predictors: (Constant), Test Average sum, score attained in self-chosen group of ten (10%), score in group assignment with group allocated at random
d. Dependent Variable: Cumulative total score achieved by the student at end of semester (40%)

ANOVA(d)
Model           Sum of Squares  df    Mean Square  F       Sig.
1  Regression   2.894           1     2.894        15.044  .000a
   Residual     37.697          196   .192
   Total        40.591          197
2  Regression   3.637           2     1.818        9.595   .000b
   Residual     36.954          195   .190
   Total        40.591          197
3  Regression   7.544           3     2.515        14.762  .000c
   Residual     33.047          194   .170
   Total        40.591          197

a. Predictors: (Constant), Test Average sum
b. Predictors: (Constant), Test Average sum, score attained in self-chosen group of ten (10%)
c. Predictors: (Constant), Test Average sum, score attained in self-chosen group of ten (10%), score in group assignment with group allocated at random
d. Dependent Variable: Cumulative total score achieved by the student at end of semester (40%)

DISCUSSION

The observed fluctuations in students’ performance scores in tests administered in individual settings may derive, on the one hand, from specific student factors such as study skills, seriousness, psychological readiness and test preparation, to mention a few; and, on the other hand, from flaws in the design of one or both tests with reference to the course content, the quality of course delivery, and how well the two tests were marked, to say the least. Deductively, the negative skewness of scores observed in the two tests is an indication of low validity and inconsistency in the two assessments.

Arguably, variations in students’ scores may not stem exclusively from test design per se but also from practical flaws emanating from the many norm-referenced characteristics of the assessments, coupled with an overemphasis on designing tests and examinations that are often structured to conform to the bell-shaped normal distribution performance model (Binde, 2012). Therefore, such differences in students’ test scores must be accounted for by other overriding factors reflecting students’ diverse profiles of abilities, teacher quality, and other contextual factors, including the infrastructure that can enhance effective and efficient teaching and learning (Bonaccorsi et al., 2010). A study by Roediger et al. (2011) revealed that the frequency of testing, for instance, exhibits a significant influence on students’ tendency to study more and with more regularity, and hence may produce more normalized students’ performance scores.

Comparative analysis of the randomized group settings and those formed at students’ discretion lends support to the wide consensus that individual performance varies with certain student factors, in this case the mode of group constitution and group size. These factors are also critical predictors of the tendency for students’ individual effort to decline when performance is collective, as in group assignments, vis-à-vis the individual performance expected in single-handed tests, a phenomenon known as social loafing (Tsaw et al., 2011; Rich et al., 2014). Ideally, students working in groups perform better than when working alone, as was well established in this study, where group scores were higher than test scores (Stenlunda et al., 2017).

On the one hand, the difference between the two groups’ scores indicates the potential influence of overarching factors characterizing the two group settings. Such factors explicate the agency of the individual in performance across assorted group settings. Studies reveal a range of contextual factors contributing to this phenomenon, including group members’ demographic factors such as group size, students’ learning and assessment preferences, interpersonal, motivational and socio-emotional challenges, group management processes and intragroup conflict (Alnuaimi et al., 2010; Tsaw et al., 2011; Rich et al., 2014); a group’s sense of reciprocity and mutuality (Jassawalla et al., 2009; LaBeouf et al., 2016); lack of motivation due either to low self-esteem or lack of incentive, time constraints, language difficulties, cultural differences, learning disabilities, or personality problems (Dommeyer and Lammers, 2006); and team size, task duration, and task assignment (Lee et al., 2008).

On the other hand, the factors influencing group performance are not exclusive but provide comprehensive and justifiable benchmarks for explaining the effect of individual and group assessments on mediated coursework scores. The results showed students’ test scores to be more related to the scores of randomly allocated groups, and individual agency was more pronounced where an increase in the number of group members was associated with the attainment of higher scores. This tentatively suggests that factors other than group size significantly affected students’ performance. Several reasons may account for this, as the critical factor in group tasks is not necessarily the size of the group but may depend on the clarity of task objectives, students’ ages, students’ experience of team-working, and the availability of learning materials and facilities (Dommeyer and Lammers, 2006). There is also the inevitable presence in a group of students who have higher stakes in their grades and hence always commit substantive effort for fear of those likely to contribute less (Jassawalla et al., 2009; Barbara and Bob, 2010; Rich et al., 2014).

Moreover, consistent with the group performance realm, perceived productivity and enjoyment are broadly explained by a range of students’ intrinsic factors, including their engagement in the course, group participation, and off-class study behaviors, rather than the size of the group (Bonaccorsi et al., 2010). In the same vein, Enu et al. (2015) argue that what students actually do in a group activity is not exclusively and categorically confined to the size of the group but should account for the wider teaching and learning context. A study by Taqi and Al-Nouh (2014) revealed that the method of group formation is circumscribed by social and academic variables such as age and cognitive ability, which influence students’ engagement, learning and hence the results of group work. Therefore, a number of factors may extend to the intricacies of access to, and availability of, a conducive environment that supports effective and efficient teaching and group learning, resulting in optimal transaction of knowledge, skills and values in such collective settings.

CONCLUSION

The modality of teaching and learning in universities cannot proceed without accounting for inevitably changing demographic contexts, as depicted by the exponential increase in enrollments and ultimately the large-class-size phenomenon. The positive side of such increase is the widening of access to higher education for a relatively greater proportion of the populace, which contributes to the critical mass of qualified human capital essential for the economic prosperity of a country. Handling such large classes is tricky when reflecting on the proxies for quality education in universities; in this case, how faculty formatively assess students in such contexts becomes a difficult endeavor, since quality teaching and learning can seldom proceed without compromising the assessment process. For instance, universities in Tanzania have in many cases adopted multiple assessment modes depending on the nature of the discipline, programme or course, but many of these modalities have converged on individual testing and group projects. In this study, greater focus was placed on group assessments than on individual testing, to examine the effect that group size and formation had on students’ scores (coursework). While there are a number of flaws in both assessment modes, it is of interest that an increase in group size may not necessarily be detrimental to students’ performance in groups. Ceteris paribus, individual testing may be as good as group assessment if certain factors are accounted for in the process of formative assessment. However, the difference in student scores observed between randomized groups and groups that students chose themselves confirms the operation of factors inherent either in students’ diverse characteristics or in the nature of the assessments provided.
Several causes can be attributed to such differences, not discounting cheating, assignment plagiarism and other forms of academic fraud, which are not uncommon in most HEIs in Tanzania. Noteworthy, the detailed characteristics of students in those groups must be known a priori to trade off the individual differences associated with learning, including students’ learning styles. In that regard, all such circumstances highlight plausible discrepancies in academic operations in HEIs that raise questions not only about the validity and reliability of teaching and learning processes, but also about students’ academic achievement. Hence, it is imperative that further studies incorporate frameworks that can be used to analyse and explain such masking effects of unknown factors. Therefore, integration of the social loafing theoretical model may be feasible to explain the Ringelmann Effect and specifically to identify the operational factors that significantly influence students’ performance in group assessments.

REFERENCES
  • Alnuaimi, O. A., Robert, L. P. and Maruping, L. M. (2010). Team Size, Dispersion, and Social Loafing in Technology-Supported Teams: A Perspective on the Theory of Moral Disengagement. Journal of Management Information Systems, 27(1), 203-230. https://doi.org/10.2753/MIS0742-1222270109
  • Bali, A. L. T. (2012). From Teaching to Learning. In Teaching and Learning Improvement in Higher Education. Workshop Proceedings of 28th March to 3rd April 2012. Paper presented at the University of Dodoma, Dodoma. The University of Dodoma.
  • Barbara, M. and Bob, P. (2010). Dealing with free-riders in assessed group work: results from a study at a UK university. Assessment & Evaluation in Higher Education, 451-464.
  • Binde, A. L. (2012). Making Sense of Assessment: The same from outside different from inside. In Teaching and Learning Improvement in Higher Education. Workshop Proceedings of 28th March to 3rd April 2012. Paper presented at the University of Dodoma, Dodoma. The University of Dodoma.
  • Bonaccorsi, A., Daraio, C. and Geuna, A. (2010). Universities in the New Knowledge Landscape: Tensions, Challenges, Change-An Introduction. Minerva, 48, 1-4. https://doi.org/10.1007/s11024-010-9144-0
  • Dommeyer, C. J. and Lammers, B. H. (2006). Students’ Attitudes toward a New Method for Preventing Loafing on the Group Project: The Team Activity Diary. Journal of College Teaching & Learning, 3(1), 15-22.
  • Elton, L. and Johnston, B. (2002). Assessment in Universities: a critical review of research. Learning and Teaching Support Network Generic Centre.
  • Enu, J., Asominiwa, L. and Obeng, P. (2015). Effects of Group Size on Students Mathematics Achievement in Small Group Settings. British Journal of Education, 3(4), 58-64.
  • Gudo, C. O., Ole, M. A. and Oanda, I. O. (2012). University Expansion in Kenya and Issues of Quality Education: Challenges and Opportunities. International Journal of Business and Social Science, 2(20), 203-214.
  • Ishengoma, J. M. (2011). The Socio-economic Background of Students Enrolled in Private Higher Education Institutions in Tanzania: Implication for equity. Papers in Education and Development, 30, 53-103.
  • Jassawalla, A., Sashittal, H. and Malshe, A. (2009). Students’ Perceptions of Social Loafing: Its Antecedents and Consequences in Undergraduate Business Classroom Teams. Academy of Management Learning & Education, 8(1), 42–54. https://doi.org/10.5465/AMLE.2009.37012178
  • LaBeouf, J. P., Griffith, J. C. and Roberts, D. L. (2016). Faculty and Student Issues with Group Work: What is Problematic with College Group Assignments and Why? Journal of Education and Human Development, 5(1), 13-23. https://doi.org/10.15640/jehd.v5n1a2
  • Lee, R., Max, E. and Robert, A. B. (2008). Designing Group Examinations to Decrease Social Loafing and Increase Learning. International Journal for the Scholarship of Teaching and Learning, 2(1), Article 17. Available at: http://digitalcommons.georgiasouthern.edu/ij-sotl/vol2/iss1/17 (Accessed 15 December 2017).
  • Maher, A. (2007). Learning Outcomes in Higher Education: Implications for Curriculum Design and Student Learning. Journal of Hospitality, Leisure, Sport and Tourism Education, 3(2), 46-54. https://doi.org/10.3794/johlste.32.78
  • Materu, P. (2007). Higher Education Quality Assurance in Sub-Saharan Africa: Status, Challenges, Opportunities, and Promising Practices. Washington, D. C. The World Bank. https://doi.org/10.1596/978-0-8213-7272-2
  • Mbalamula, Y. S. (2017). Complementing Lecturing Pedagogy and Learning Styles in Universities in Tanzania: State of issues. Educational Research Review, 12(13), 653-659. https://doi.org/10.5897/ERR2017.3232
  • Mohamedbhai, G. (2014). Massification in Higher Education Institutions in Africa: Causes, Consequences, and Responses. Int. J. Afr. Higher Educ., 1(1), 60-89. https://doi.org/10.6017/ijahe.v1i1.5644
  • Mosha, H. J. (2004). New Directions in Teacher Education for Quality Improvement in Africa. Papers in Education and Development, 24, 45-68.
  • Mosha, H. J. (2012). The State and Quality of Education in Tanzania: A Reflection. Papers in Education and Development, 31, 61-76.
  • Mtahabwa, L. (2016). Quality Assurance in a New University. Journal of Education and Development, 2(1), 31-50.
  • Mufuruki, A. A., Mawji, R., Marwa, M. and Kasiga, G. (2017). Tanzania’s Industrialization Journey, 2016-2056: From an agrarian to a modern industrialized state in forty years. Nairobi: Moran (E.A.) Limited.
  • Ndyali, L. (2016). Higher Education System and Jobless Graduates in Tanzania. Journal of Education and Practice, 7(4), 116-121.
  • Osaki, K. M. (2012). Teaching Large Classes as if They Were Small. In Teaching and Learning Improvement in Higher Education: Workshop Proceedings, 28 March to 3 April 2012. Paper presented at the University of Dodoma. Dodoma: The University of Dodoma.
  • Rich, J. D., Owens, D., Johnson, S., Mines, D. and Capote, K. (2014). Some Strategies for Reducing Social Loafing in Group Projects. Global Journal of Human-Social Science A: Arts & Humanities – Psychology, 14(5). Version 1.0.
  • Roediger, H. L., Putnam, A. L. and Smith, M. A. (2011). Ten Benefits of Testing and Their Applications to Educational Practice. Psychology of Learning and Motivation, 55, 1-36. https://doi.org/10.1016/B978-0-12-387691-1.00001-6
  • Stenlund, T., Jonsson, F. U. and Jonsson, B. (2017). Group Discussions and Test-enhanced Learning: Individual learning outcomes and personality characteristics. Educational Psychology, 37(2), 145-156. https://doi.org/10.1080/01443410.2016.1143087
  • Takiguchi, Y., Arai, K., Ieiri, I., Uejima, E. and Hirata, K. (2012). Development of Educational Evaluation Methods in Practical Experience in National Universities. Yakugaku Zasshi, 132(3), 365-368. https://doi.org/10.1248/yakushi.132.365
  • Taqi, H. A. and Al-Nouh, N. A. (2014). Effect of Group Work on EFL Students’ Attitudes and Learning in Higher Education. Journal of Education and Learning, 3(2), 52-65. https://doi.org/10.5539/jel.v3n2p52
  • Tsaw, D., Murphy, S. and Detgen, J. (2011). Social Loafing and Culture: Does Gender Matter? International Review of Business Research Papers, 7(3), 1-8.
  • Wiliam, D. (2013). Assessment: The Bridge between Teaching and Learning. Voices from the Middle, 21(2), 15-20. https://doi.org/10.1057/9780230359284.0022
AMA 10th edition
In-text citation: (1), (2), (3), etc.
Reference: Mbalamula YS. Effect of Group versus Individual Assessments on Coursework among Undergraduates in Tanzania: Implications for Continuous Assessments in Universities. Pedagogical Research. 2018;3(1), 02. https://doi.org/10.20897/pr/85171
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.