Evaluation: 1st phase

Six practicing teachers participated in the pilot test and evaluation. Self-assessing their level of knowledge on a scale from one (1) to five (5), the participants reported no-to-moderate knowledge of software programming (M = 1.67, SD = 0.82) and robotics (M = 1.83, SD = 0.75). Their average programming experience was about one year (M = 1.03, SD = 1.98), while three of the six stated that they had programmed robots before.

The first evaluation phase was conducted in January 2020 at the PANDORA / R4A team (THMMY, AUTh). The session began with a 15-minute demonstration of the TekTrain system, covering its use from the initial entry on the TekTrain platform to the selection of a training mission, its resolution, and the sending of the generated code to the robotic device. Login credentials were then given to all participants. Upon entering the platform, participants had to graphically solve some of the available reference applications; this process lasted about one hour. The downloading and running of the developed applications on the TekTrain robotic device was then demonstrated. Upon completion of this stage, the participants filled in an electronic questionnaire.

 

System usability

The average raw score of the TekTrain system on the SUS scale was 75.83 (SD = 8.61, CI = 66.80, 84.87). The raw score was then converted into a percentile rank so that it could be interpreted. The percentile rank of the TekTrain system exceeds 73%, which means that the TekTrain system was rated as more usable than 73% of the systems in the comparison database and received a B grade from the participants.
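For readers unfamiliar with the scoring, both the raw SUS score and the reported confidence interval follow from standard arithmetic. The sketch below is purely illustrative, not the project's analysis code: it shows the usual SUS scoring rule for ten 1–5 Likert items and a t-based 95% interval computed from the reported summary statistics, which is consistent with the interval above.

```python
from statistics import mean, stdev

def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert items mapped to a 0-100 score.
    Odd-numbered items contribute (r - 1); even-numbered items contribute (5 - r)."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

# Reported summary statistics for the six participants
m, sd, n = 75.83, 8.61, 6

# 95% CI half-width using the t critical value for df = 5 (t ~ 2.571)
half_width = 2.571 * sd / n ** 0.5
ci = (round(m - half_width, 2), round(m + half_width, 2))
```

With the reported mean and standard deviation, `ci` comes out at roughly (66.79, 84.87), matching the interval given above within rounding.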

 

TekTrain Programming Environment Acceptance

Participants viewed the system under evaluation as dual-purpose, as the scores on TAM’s complementary productivity (M = 9.33, SD = 0.82) and entertainment (M = 7.83, SD = 3.55) scales were both above the threshold (5.5) required for the system to be evaluated with the tool.

Regarding the acceptance of the TekTrain programming environment by the participants, Table 1 presents the descriptive measures (Mean, Standard Deviation) for each item of the TAM questionnaire separately, as well as for the four factors (perceived usefulness, perceived ease of use, perceived pleasure, intention to use) collectively. The evaluation with the TAM questionnaire is done on a 6-point scale in which lower values indicate greater acceptance of the system by the user.

Table 1. Questionnaire results

| Questionnaire items | M (SD) |
| --- | --- |
| Perceived utility | 2.19 (1.52) |
| Using the TekTrain programming environment helps me program robots easily. | 2.33 (1.86) |
| Using the TekTrain programming environment improves my performance in robot programming. | 2.17 (1.94) |
| Using the TekTrain programming environment increases my productivity in robot programming. | 2.5 (1.87) |
| Using the TekTrain programming environment enhances my efficiency in robot programming. | 2.5 (1.87) |
| Using the TekTrain programming environment makes robot programming easy. | 2.5 (1.76) |
| I find the TekTrain programming environment useful for robot programming. | 1.17 (0.41) |
| Perceived ease of use | 1.9 (0.75) |
| Learning to use the TekTrain programming environment was easy for me. | 2.33 (1.51) |
| I found it easy to get the TekTrain programming environment to do what I wanted. | 2.33 (1.51) |
| The interaction with the TekTrain programming environment is clear and understandable. | 1.67 (0.82) |
| It was easy for me to become proficient in using the TekTrain programming environment. | 1.67 (0.52) |
| I find the TekTrain programming environment easy to use. | 1.5 (0.55) |
| Perceived pleasure | 1.42 (0.49) |
| Enjoyable – Hateful | 1.33 (0.52) |
| Exciting – Boring | 1.5 (0.55) |
| Pleasant – Unpleasant | 1.5 (0.55) |
| Interesting – Boring | 1.33 (0.52) |
| Intention to use | 1.5 (0.55) |
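The factor-level values reported above are simply the means of their constituent item means (assuming equal numbers of responses per item). A quick Python check, using the perceived-utility item values from the questionnaire results purely as an illustration, reproduces the factor score within rounding:

```python
from statistics import mean

# Item means of the "perceived utility" factor, as reported above
utility_items = [2.33, 2.17, 2.5, 2.5, 2.5, 1.17]

# The mean of the item means is ~2.195, i.e. the reported 2.19 after rounding
factor_mean = mean(utility_items)
```

The same holds for the other factors; for example, the five perceived-ease-of-use item means average exactly to the reported 1.9.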

Evaluation of reference applications

The descriptive measures (Mean, Standard Deviation) from the teachers’ answers to the questionnaire are presented below. The evaluation was done on a 7-point scale in which higher values indicate a more positive evaluation of the applications.

Questionnaire results

| Questionnaire items | M (SD) |
| --- | --- |
| The objectives of the educational missions are appropriate for the age group to which they are addressed. | 6.33 (0.82) |
| The escalation of the level of difficulty of the educational missions is successful. | 6.67 (0.52) |
| The time required for students to complete the educational tasks is reasonable. | 5.67 (1.03) |
| The scenarios of the educational missions are attractive for the students. | 6.17 (0.98) |
| Educational missions encourage students’ creativity and improvisation. | 6.17 (0.75) |

 

Evaluation: 2nd phase

 

A cycle of fixes and improvements to the TekTrain system followed. In particular, the needs arising from the evaluation were incorporated into the end-user requirements and, consequently, into the final specifications and system architecture. The updated version of the system was then re-evaluated by teachers.

 

Participants

In this second phase, the method of heuristic evaluation was followed: a systematic, qualitative inspection method in which an interface is assessed against established and commonly accepted principles. The participants were educators who were asked to make an overall assessment of the TekTrain graphical programming platform and to identify problems, weaknesses, or shortcomings both in terms of usability and in terms of the platform’s teaching-and-learning use. Specifically, six (6) teachers with teaching experience in primary education and research experience in learning technologies, educational robotics, and the evaluation of educational technology systems took part in the process.

 

Tool

The list of Heuristic Rules for the Development and Evaluation of Educational Robotics Systems (HEDEERS) was used for the heuristic evaluation. The HEDEERS list includes cognitive load, challenge, adaptability, interaction, level of automation, collaboration and communication, feedback, ease of installation, enjoyment and aesthetics, transparency, active learning, relevance, reflection, and computational thinking. The heuristic rules were transferred to an electronic form with open fields in which participants could record free-text comments.
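The electronic form described above can be pictured as one free-text comment field per heuristic rule. The following Python sketch is a hypothetical representation of that structure; the names and the `blank_form` helper are illustrative assumptions, not the project's actual form schema.

```python
# The fourteen HEDEERS heuristic rules listed above
HEDEERS_RULES = [
    "cognitive load", "challenge", "adaptability", "interaction",
    "level of automation", "collaboration and communication", "feedback",
    "ease of installation", "enjoyment and aesthetics", "transparency",
    "active learning", "relevance", "reflection", "computational thinking",
]

def blank_form():
    """One open free-text comment field per heuristic rule (illustrative)."""
    return {rule: "" for rule in HEDEERS_RULES}
```

Each participant would fill one such form, attaching free-text observations under the rule each weakness relates to.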

Process

 

The heuristic evaluation was performed asynchronously and remotely: the participants received the necessary material via e-mail and carried out the evaluation at a place and time of their choosing. Initially, each participant received a personal user account to log in to the TekTrain platform as a tutor. Participants had at their disposal the platform’s user manual, which they could study and consult in order to become familiar with its functions. Participants were asked to take the time to browse the platform and explore its features, performing at least the following:

1. Log in to the platform as a teacher and create a new student.
2. Create a new exercise.
3. Assign the exercise to the student.
4. Log in to the platform as a student and solve some of the reference applications offered by the platform.
5. Solve the exercise assigned by the teacher.
6. Log in as a teacher and evaluate the student’s solution.
7. Log in as a student and review the awards the student received for solving the exercise.

Throughout the review of the platform, participants were asked to keep structured notes of any weaknesses they identified and to write a brief description on the online form with the HEDEERS heuristic rules.

Results

Cognitive load: It was pointed out that the information contained in some graphical commands, especially the motion command, is very condensed and to some extent hinders the cognitive flow. It was proposed to separate the different motion parameters into different command blocks.

Challenge: Participants pointed out that while there is an escalation of activities, it is neither provided automatically by the system nor immediately distinguishable. The teacher must choose the level of difficulty of each activity or structure the exercise in a way that fosters challenge (when creating a new exercise). To make the challenge and the gradual escalation of the activities more distinct, a clearer thematic grouping of the exercises was proposed, based on the subject areas of the curriculum, within which exercises of escalating difficulty would be provided.

Adaptability: Participants felt that the system would seem complicated to children with no prior programming knowledge. For this reason, they proposed differentiating the graphical commands in exercises characterized as “easy” so that they appear simplified, retaining only the basic functions, especially with respect to the robot’s movement.

Interaction: In the pop-up information windows it was difficult to scroll the bar to display the text in the case of extended instructions. It was also pointed out that the information in these windows could be shorter and more concise, and a typographical error was detected in the text explaining the rotational speed.

It was also suggested that the statement of the exercise be displayed constantly while it is being solved and, when not visible, be available to the user through the “Instructions” option. In addition, it was suggested that the instruction video of the exercise be included under “Instructions” so that the student does not have to leave the environment to watch it. Some further difficulties in using the system were also noted.

Automation: It was suggested that, while a student is solving an exercise, the system automatically display published exercises on a similar topic, so that the student can get ideas and be helped without having to search for similar exercises.

Collaboration and communication: Participants noted the lack of opportunities for collaboration and communication between users and made the following suggestions: (i) add the ability for students to collaborate on solving an exercise in a common work environment from different computers; (ii) create a forum or other message-exchange medium for communication between student and teacher, the resolution of questions, etc.

Feedback: Participants noted that, while the system provides effective visual feedback to the student-user, there is a lack of feedback for the teacher-user. It was also suggested to add the possibility of simulating the student’s solution on a virtual robot, as well as the option of an “assistant” that would appear in the graphical programming environment and inform the student-user about any errors or remind them of basic instructions for using TekTrain. Finally, a need was noted to display information messages for all function icons when the mouse pointer hovers over them.

Transparency: It was pointed out that it is preferable for the buttons “Motion”, “Crawl”, “Sound”, etc. to remain fixed in the environment so that the five basic features provided to the user are clear at all times. It was also suggested that published exercises be published together with their solution, without, however, losing the ability to be solved by the student.

Active learning: It was pointed out that some expressions may negatively affect the sense of exploration and active inquiry and would be better restated. For example, the frequent use of the word “student” in graphical command information evokes a formal learning context, while the phrase “submit the exercise for correction” suggests motivation that is more extrinsic (grades) than intrinsic (problem solving / experimentation). Also, the requirement of a single correct solution is restrictive in some cases. The possibilities provided by the platform should also be usable through semi-structured or completely open exercises, in which students are asked to combine the various commands in a creative, non-prescribed way.

Relevance: Participants noted that, when creating a new exercise, there should be an additional description field in which the teacher specifies the learning objectives of the exercise. This information would be especially useful for other teachers wishing to reuse published exercises. Also, it should be possible to match the exercises with subject areas of the curriculum.

Reflection: In order to enhance reflection and awareness, it was suggested that the system display to users statistics on their learning progress.

 
