ASKS: A System for Supporting Asynchronous Discussion for Unpaced Online Learners
Abstract.
Learning for students enrolled in “individualized” distance education programs can be enriched by enabling information sharing among students. However, time, distance, and unsynchronized study schedules make such sharing difficult to implement. Moreover, the evaluation of class participation in classroom or synchronized distance education programs is inherently unfair: it is generally subjective, and students who participate first have an advantage over those who participate later. This paper presents a system that creates virtual cohort study groups out of students enrolled in an unsynchronized, individual-study distance education program and provides a structured process for students to share information and for instructors to objectively evaluate class participation.
Introduction
Many universities now offer Internet-based education. To some researchers, the Web is an effective teaching medium, with student learning outcomes at least equivalent to those of classroom-based students (see, for example, Gerhing, 1994; Golberg, 1997; McCollum, 1997).
Online courses generally reflect many features of the traditional academy – to wit, they generally have specified start and end dates, limited entry points, and consist of cohorts of students who proceed through each course at about the same pace. This cohort model lends itself to group-based, online learning experiences. Commercial online learning management systems, informed by group support systems that also assume an underlying cohort-based learning model, usually try to facilitate many desirable features of classroom learning experiences, like collaborative learning, which in turn enables knowledge construction among learners. Not surprisingly, most research about online education is informed by these cohort-based learning experiences (see, for example, Arbaugh, 2001; Burke, 2001; McEwen, 2001; Montoya-Weiss, Massey, & Song, 2001).
However, there is a tradition of open education that has sought to address the needs of learners who for one reason or another do not fit the classic mould of higher education. In large open and distance education institutions like the Open University of the United Kingdom, or smaller variants like Athabasca University in Canada, the primary objective of the learning model is to provide a greater degree of flexibility for students. Learners may enrol in courses throughout the year, for instance, and proceed through these at their own pace. Assignments and exams can often be completed at any time, and in any order. The relatively unpaced nature of this “individualized” model often appeals to learners who have significant other responsibilities like full-time jobs and families.
Despite these advantages, unpaced online learning must address some important challenges. Alavi (1994) suggests that effective learning requires three main attributes: active learning, learning through problem solving, and cooperation and teamwork. While the first two can be addressed through appropriate design of instructional material, the last is difficult to incorporate into unpaced online courses. Collaboration among students is difficult because, by definition, they do not belong to a cohort and their courses are designed to be self-paced. As a result, interactions among learners cannot be easily facilitated, monitored, or evaluated.
Mangan (2001) defines distance learning as “education that is accessible at a time, place, location, and pace that is convenient to the user” (p. 30). This definition implicitly recognizes that control of the learning environment is a significant contributing factor to a learner’s success in distance education. However, some research indicates that learners do not make good instructional choices when given complete control over course material (Steinberg, 1989; Williams, 1989). Individual learners can be assisted by methods and techniques that enable them to take better control of their learning. Bell and Kozlowski (2002) proposed “adaptive guidance” as a means to enhance learners’ self-regulation processes and improve the efficiency of the learning process: intelligent agents monitor and assess learner progress, suggest individualized sequencing of course material, and provide tailored feedback. A learning system that enables professors to more easily employ adaptive guidance – through easy access to prior group knowledge and the ability to efficiently assess and respond to student contributions – may create a greater sense of instructor immediacy in the learning process, a behaviour found to increase student satisfaction in online courses (Arbaugh, 2001).
Technology and types of interactions in online learning environments
However, at present there appears to be a lack of suitable technology to support features like adaptive guidance, instructor immediacy, and collaborative learning in an unpaced online learning environment. This is illustrated below.
The means of interaction between two or more people depend on whether they are communicating at the same time and in the same place, as follows:
Table 1: Types of interaction in learning environments.

|                 | Same time | Different time |
|-----------------|-----------|----------------|
| Same place      | 1         | 3              |
| Different place | 2         | 4              |
Using this schema, and by definition, online learning can take place only in quadrants 2 and 4. It is in these areas that teaching and learning activities occur in different places, requiring some form of technological mediation. Technology that facilitates synchronous online learning (e.g., desktop video conferencing) falls into quadrant 2 (different place, same time). Asynchronous technology (e.g., computer conferencing) falls into quadrant 4 (different place, different time).
However, this representation does not take into account the relatively paced or unpaced nature of online courses. Since “Place” is extraneous to our analysis if we consider only forms of communication used among physically dispersed individuals, this variable can be replaced with “Pace” to give us a more descriptive schema of online learning:
Table 2: Types of interactions in online learning environments.

|                | Same time | Different time |
|----------------|-----------|----------------|
| Same pace      | 1         | 3              |
| Different pace | 2         | 4              |
Using this analysis, synchronous forms of technology-mediated communication like desktop videoconferencing generally occur in quadrant 1 (same pace, same time). Asynchronous forms of communication like computer conferencing occur in quadrant 3 (same pace, different time). There are few forms of technology that facilitate learning in quadrants 2 and 4 (i.e., unpaced online learning). This gap is explored further below.
In the traditional classroom setting, various types of interpersonal communication are possible: student to student, student to class, student to instructor, and instructor to class. By employing various forms of technology-mediated communication in online learning environments, these forms of interpersonal communication can be facilitated and learner perceptions of isolation can be reduced (Yin-Sum & Tak-Wing, 2002). Table 3 delineates technologies that can be used to replicate these four types of interactions in paced and unpaced online learning environments.
Table 3: Technologies that facilitate interactions in online learning environments.
| Interaction type | Enabling online technology (paced) | Enabling online technology (unpaced) |
|------------------|------------------------------------|--------------------------------------|
| Student to student | Email; Telephone; Online chat; Discussion boards; Desktop video conferencing | None* |
| Student to class | Teleconference; Desktop video conferencing; Class email; Discussion boards; Computer conferencing | None |
| Student to professor | Online chat; Telephone/pager/voice mail; Email | Online chat**; Telephone/pager/voice mail |
| Professor to class | Teleconference; Videoconference; Class email; Discussion board; Computer conferencing | Class email |
* assuming that students are not apprised of the means to contact other students – for example, they are not given email addresses and telephone numbers. This is generally the case in an unpaced online learning environment; privacy legislation can also limit this practice.
** by chance, perhaps
The dearth of suitable technologies in the last column of Table 3 coincides with the lack of examples for quadrants 2 and 4 in Table 2 above. Both tables illustrate that while technologies exist to facilitate synchronous and asynchronous forms of interaction in paced online learning environments, facilitating interaction among learners in an unpaced setting remains problematic – despite rapid advances in technology and online learning management systems. As noted above, this is likely because most online learning systems have evolved from classroom-based educational models and group-based support systems.
The balance of this paper describes the development of an online learning system prototype designed to facilitate interaction in an unpaced online environment. The system aims to provide learners with maximal flexibility while rectifying an important practical gap in unpaced online learning – the means to communicate effectively with peers and instructors and thereby facilitate group-based learning. However, many features of this system can also be applied to paced online learning environments, thereby addressing needs of learners and instructors that are common across all online learning models.
The ASKS System
The ASKS (ASynchronous Knowledge Sharing) system uses discussion boards with capabilities characteristic of most group decision support systems (Nunamaker, Dennis, Valacich, Vogel, & George, 1991). These facilitate both public and private online discussions. Students and instructors access the system through unique URLs.
The main student screen is divided into three areas: knowledge sharing topics in the left-hand pane, the main menu in the top part of the right-hand pane, and the topic headings just below the main menu.
Figure 1. Student main screen
Each knowledge sharing topic has four parts: a closed or open file folder icon just to the left of the topic, the topic itself, the number of entries created by a student for the related topic shown in parentheses, and a trash can icon showing the number of entries that have been deleted. Each knowledge sharing topic is briefly described, similar to the subject line in an email.
When the file folder icon for a topic is opened, the individual student’s entries related to that topic are displayed in the right-hand pane. In Figure 1, the student has made six entries related to the topic “System Advantages.” Each entry is accompanied by the date it was created or last modified, the size of the response, a short description, and a link to a more detailed explanation.
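To make this layout concrete, the following sketch models the information each topic line and entry carries. The class and field names are hypothetical, since the paper does not describe ASKS’s internal data model.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical data model; ASKS's actual schema is not described in the paper.

@dataclass
class Entry:
    subject: str          # short description, like an email subject line
    explanation: str      # detailed response body
    last_modified: date   # date the entry was created or last changed
    size: int             # size of the response, e.g. in characters

@dataclass
class Topic:
    title: str            # e.g. "System Advantages"
    description: str      # brief description of the topic
    entries: List[Entry] = field(default_factory=list)  # active entries
    trash: List[Entry] = field(default_factory=list)    # deleted entries

    def summary(self) -> str:
        """Render the left-hand pane line: title, entry count, trash count."""
        return f"{self.title} ({len(self.entries)}) [trash: {len(self.trash)}]"
```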
Topic submissions can be created by clicking the “Compose” button. This brings up the editing screen shown in Figure 2.
Figure 2. Topic editing screen
This screen has the look and feel of most email systems. A subject line provides a brief description of the response. The Explanation area is similar to the main body of an email; students compose their detailed responses to the given topic here, if desired. If no explanation is entered, the system defaults to “No explanation, point self-explanatory”.
Responses to knowledge sharing topics cannot be viewed by others until they are posted. To select an entry for posting, the student clicks the first column in the related submission line; a check box appears to the immediate left of the entry. Clicking the “Post” button on the main menu then makes the entry accessible to the instructor for review and inaccessible to the student for further editing. Other students cannot view the submission until the instructor has reviewed it. The “Delete” button moves selected messages from the topic’s inbox to the corresponding trash can.
The last item in the right-hand pane is the “Instructor’s Comments” column. If the instructor has evaluated an entry, a “new mail” icon and the date of the evaluation appear in this section. When the student reads the comments, the icon changes to an “opened email” icon. Entries that have been rejected by the instructor appear with a red “X” icon. Other possible statuses are “Not sent to instructor yet” for entries that have not yet been submitted for evaluation, and “Awaiting evaluation” for entries that have been submitted but not yet reviewed by the instructor.
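These icons and labels imply a simple lifecycle for each entry. Below is a minimal sketch of that lifecycle, assuming the transitions described above; the state names are ours, not ASKS’s.

```python
from enum import Enum, auto

# Hypothetical encoding of the entry lifecycle; the paper shows these
# states only as icons and labels, not as code.

class EntryStatus(Enum):
    DRAFT = auto()      # "Not sent to instructor yet"
    POSTED = auto()     # "Awaiting evaluation"; no longer editable by the student
    REJECTED = auto()   # red "X" icon
    EVALUATED = auto()  # "new mail" icon with the evaluation date
    READ = auto()       # "opened email" icon after the student reads the comments

# Legal transitions, as implied by the workflow described in the text.
TRANSITIONS = {
    EntryStatus.DRAFT: {EntryStatus.POSTED},
    EntryStatus.POSTED: {EntryStatus.REJECTED, EntryStatus.EVALUATED},
    EntryStatus.EVALUATED: {EntryStatus.READ},
    EntryStatus.REJECTED: set(),
    EntryStatus.READ: set(),
}

def advance(current: EntryStatus, new: EntryStatus) -> EntryStatus:
    """Move an entry to a new status, refusing transitions the workflow forbids."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.name} to {new.name}")
    return new
```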
Clicking the date in the Instructor’s Comments column opens the screen shown in Figure 3.
Figure 3. Instructor's comments on an individual entry.
This provides each student with individual feedback on their submissions. If the instructor is not satisfied with the overall quality of a particular student’s submissions, hints can be provided. The “Hints” button then appears on the main menu; it remains hidden until the instructor has commented on all entries made by the student. Clicking this button brings up instructor feedback similar to that shown in Figure 4.
Figure 4. Instructor hints
The instructor’s overall comments are shown in red. A summary of student Mary Swift’s responses is shown in the left-hand column, and a list of points not mentioned by the student but submitted by others in the virtual cohort is shown on the right-hand side of the screen. The instructor can choose how much of the other students’ contributions is disclosed to a participant. The student then submits additional responses until the instructor is satisfied.
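The hinting step can be thought of as set subtraction over the cohort’s accumulated facts. Here is a minimal sketch, assuming facts are tracked as sets of identifiers (the paper does not specify how ASKS stores them):

```python
import random

def build_hints(class_facts: set[str], student_facts: set[str],
                disclose: int) -> list[str]:
    """Return up to `disclose` class facts the student has not yet mentioned.

    `disclose` models the instructor's choice of how much of the cohort's
    contribution to reveal; the remainder stays hidden to encourage the
    student to find the missing points independently.
    """
    missing = sorted(class_facts - student_facts)
    return random.sample(missing, min(disclose, len(missing)))
```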
The main screen for instructors is shown in Figure 5.
Figure 5. Instructor main screen
This screen shows the student submissions awaiting evaluation. In this case there are three: one from Mary Swift, and two from John Doe related to the knowledge sharing topic, “System Advantages.” Clicking a student’s name opens the following evaluation screen:
Figure 6. Submission evaluation screen
The ASKS system streamlines the instructor evaluation process through several means. The upper left-hand part of the screen shows the student’s submission to be evaluated. The upper right-hand part shows a summary of points already contributed by the cohort, as selected by the instructor in previous evaluations. The bottom left-hand part of the screen (“Evaluation”) enables the instructor to judge a particular response in terms of those of other cohort members (“Class Matching”), clarity of presentation (“Articulation”), and the importance of the point to the knowledge sharing topic (“Relevance”).
With respect to Class Matching, one of three possible evaluations is selected: the entry is judged to be similar to a current class entry, a new entry for the cohort, or unacceptable in its current form. Selecting any one of the three options fills the feedback box in the bottom right-hand part of the screen with a randomly selected preset comment suited to the evaluation type. As a result, instructors do not have to type comments for every entry they evaluate; however, the comments can easily be modified if the instructor feels that more descriptive feedback is needed.
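A minimal sketch of this comment pre-fill follows, assuming one pool of preset comments per evaluation outcome; the comment texts below are invented placeholders, not ASKS’s actual presets.

```python
import random

# Hypothetical preset pools keyed by the three Class Matching outcomes.
PRESET_COMMENTS = {
    "similar": ["This point matches one already raised by the class.",
                "Good point; another student has made a similar observation."],
    "new": ["Well done; this is a new point for the class.",
            "Good work; nobody else has raised this yet."],
    "unacceptable": ["This entry needs more development before it can be accepted.",
                     "Please clarify this point and resubmit."],
}

def prefill_feedback(outcome: str) -> str:
    """Fill the feedback box with a randomly chosen preset comment.

    The instructor can then edit the text if more descriptive feedback
    is needed, as described above.
    """
    return random.choice(PRESET_COMMENTS[outcome])
```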
After all of a student’s entries on a knowledge sharing topic have been evaluated, another comment screen automatically appears. This screen enables the instructor to enter an overall assessment of the student’s entries and to grant the student permission to view other entries that may have been missed. The default setting enables access to all such entries, but the instructor can keep some hidden to encourage the student to come up with the missing points. Comments to the student can also be modified to assist this process. These are then posted and become available to the student either in the “Instructor’s Comments” section of the student screen, if the student’s overall contribution is satisfactory (see Figure 1), or as “Hints” if it is not (see Figure 4).
A student’s overall class participation mark for a given knowledge sharing topic is based on four criteria: attendance, measured as the percentage of discussion topics the student participated in; participation, measured as the percentage of class facts (or class equivalent facts) that the student came up with; presentation skills, as measured by articulation scores; and quality of postings, as measured by relevance scores. Relative weights are pre-assigned to each of these categories by the instructor. The components of the overall class participation mark are combined as follows:
\[
\text{Mark} = w_{att}\,A + w_{par}\,P + w_{art}\,Ar + w_{rel}\,R
\]

where \(A\) (attendance) is the percentage of discussion topics the student participated in; \(P\) (participation) is the percentage of class facts, or class equivalent facts, raised by the student; \(Ar\) (articulation) and \(R\) (relevance) are the student’s average articulation and relevance scores, expressed as percentages of the maximum possible score; and \(w_{att}\), \(w_{par}\), \(w_{art}\), and \(w_{rel}\) are the instructor’s pre-assigned weights, which sum to 1.
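As a sketch, the weighted sum above might be computed as follows; the function and parameter names are ours, not ASKS’s, and each criterion score is assumed to be on a 0–100 scale.

```python
def participation_mark(attendance: float, participation: float,
                       articulation: float, relevance: float,
                       weights: dict[str, float]) -> float:
    """Combine the four criterion scores (each 0-100) using the
    instructor's pre-assigned weights, assumed to sum to 1."""
    return (weights["attendance"] * attendance
            + weights["participation"] * participation
            + weights["articulation"] * articulation
            + weights["relevance"] * relevance)

# Usage with the weights from Table 4b:
weights = {"attendance": 0.1, "participation": 0.2,
           "articulation": 0.3, "relevance": 0.4}
mark = participation_mark(90.0, 80.0, 85.0, 88.0, weights)
```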
Example
Below is an example of how the participation grades for a hypothetical class would be computed. The example is based on the following assumptions: a class of three students, 10 critical thinking assessment topics, and five unique facts generated by the class for each topic. The assumed number of facts raised by each student for each topic is shown in Table 4a, where a black box (■) indicates a topic a student did not participate in at all. The assumed weights for the four grading criteria – attendance, participation, articulation, and relevance – are shown in Table 4b. Each time a student brings up a fact that has not been mentioned by any other student, the fact becomes a new class fact and is assigned a relevance score, as in the evaluation section in the bottom part of Figure 6. All other students who subsequently mention the same fact are assigned the same relevance score. Table 4c shows the assumed relevance scores for the 50 facts that the hypothetical class raised.
An attendance point is awarded for each topic a student participated in, where participating is defined as entering at least one fact for the topic. In the hypothetical class, student 1, who participated in all 10 topics, receives 100% for attendance; students 2 and 3 receive 90% and 80% respectively.
Table 4. Simulation assumptions

a. Simulated number of facts entered by each student for each topic

| Topic     | Student 1 | Student 2 | Student 3 | Class |
|-----------|-----------|-----------|-----------|-------|
| 1         | 3         | 4         | 5         | 5     |
| 2         | 4         | 5         | ■         | 5     |
| 3         | 3         | 5         | 5         | 5     |
| 4         | 4         | 5         | 5         | 5     |
| 5         | 5         | 5         | 5         | 5     |
| 6         | 4         | 5         | ■         | 5     |
| 7         | 5         | 3         | 5         | 5     |
| 8         | 5         | 4         | 5         | 5     |
| 9         | 3         | ■         | 5         | 5     |
| 10        | 4         | 4         | 5         | 5     |
| Total (n) | 40        | 40        | 40        | 50    |

b. Grading scheme

| Criteria                     | Weight |
|------------------------------|--------|
| Attendance (Showing up)      | 0.1    |
| Participation (Quantity)     | 0.2    |
| Articulation (Communication) | 0.3    |
| Relevance (Quality)          | 0.4    |
| Total                        | 1.0    |
c. Class relevance scores

| Topic | Class fact 1 | Class fact 2 | Class fact 3 | Class fact 4 | Class fact 5 | Total |
|-------|--------------|--------------|--------------|--------------|--------------|-------|
| 1     | 7            | 7            | 5            | 7            | 7            | 33    |
| 2     | 6            | 7            | 6            | 6            | 7            | 32    |
| 3     | 6            | 7            | 7            | 6            | 7            | 33    |
| 4     | 5            | 7            | 7            | 7            | 6            | 32    |
| 5     | 5            | 6            | 7            | 7            | 6            | 31    |
| 6     | 5            | 7            | 7            | 5            | 7            | 31    |
| 7     | 5            | 5            | 6            | 7            | 7            | 30    |
| 8     | 5            | 5            | 5            | 5            | 6            | 26    |
| 9     | 7            | 6            | 7            | 7            | 5            | 32    |
| 10    | 5            | 7            | 6            | 6            | 6            | 30    |
Participation marks, which are awarded for the quantity of facts raised, are given as a percentage of the facts raised by the whole class. In the example, each student raised 40 facts while the class as a whole raised 50, so each student receives 80% (40/50) for participation.
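The following sketch reproduces these attendance and participation figures from Table 4a; `None` stands in for the black-box cells.

```python
# Facts entered per topic by each student, per Table 4a.
facts = {
    "student1": [3, 4, 3, 4, 5, 4, 5, 5, 3, 4],
    "student2": [4, 5, 5, 5, 5, 5, 3, 4, None, 4],
    "student3": [5, None, 5, 5, 5, None, 5, 5, 5, 5],
}
class_total = 50  # unique facts raised by the whole class (5 per topic)

for student, per_topic in facts.items():
    attended = sum(1 for n in per_topic if n is not None)
    raised = sum(n for n in per_topic if n is not None)
    attendance = 100 * attended / len(per_topic)  # 100%, 90%, 80%
    participation = 100 * raised / class_total    # 80% for all three students
    print(f"{student}: attendance {attendance:.0f}%, "
          f"participation {participation:.0f}%")
```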