
Title Page

Usability Report: version 0.1
Due date of report:
Actual submission date:
Revised version: 14.12.2009 (final)
Product name and version: Tuubi/Workspace view
Organisers of the test: Ivan, Trung
Date of the test:
Date of the report:
Contact name(s): Name and e-mail

Executive summary

Provide a brief high-level overview of the test, including its purpose.

Name the product: Tuubi/Workspace view
Purpose/objectives of the test:

  • Users should be able to perform the most-used tasks with the fewest possible clicks and the least mouse mileage.
  • Create a layout which is more comprehensible; the user should not spend too much time browsing through the functions.

Method 1:
Number and type of participants:
Method 2:

Results in main points
e.g. as a bullet list. (This is needed so the main results can be obtained without reading the full report; the reports serve different purposes, and sometimes a fast overview is all that is needed.)


Full Product Description

  • Tuubi/Workspace view
  • The workspace view of the Tuubi portal, covering the subjects that students take.
  • The user population for which the product is intended:
    • students:
      • exchange students and degree students
      • international students and Finnish students
      • Media Engineering, Art and Design, Nursing, IT, Social Services, Building...
      • different campuses: Leppävaara, Myyrmäki, Tikkurila, Bulevardi...
  • Brief description of the environment in which it should be used:
    The tool aids the administration of the educational institute, for both students and teachers.

Test Objectives

  • State the objectives for the test and any areas of specific interest
    • Users should be able to perform the most-used tasks with the fewest possible clicks and the least mouse mileage.
    • Create a layout which is more comprehensible; the user should not spend too much time browsing through the functions.
  • Functions and components with which the user directly and indirectly interacted
    • browsing news from subjects and lectures
    • uploading homework
    • starting a workspace
    • entering workspaces
    • communicating
    • downloading course material
    • reading feedback
    • checking the calendar
    • placing announcements
    • course-related activities in the course view
  • Reason for focusing on a product subset
    • (no specific reason was given)
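The "mouse mileage" objective above can be made measurable by logging pointer coordinates during a task and summing the straight-line distances between consecutive samples. A minimal sketch in Python; the coordinate samples are hypothetical, not data from this test:

```python
import math

def mouse_mileage(points):
    """Total pointer travel in pixels: sum of Euclidean distances
    between consecutive (x, y) samples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Hypothetical pointer samples for one task attempt.
path = [(0, 0), (300, 400), (300, 900)]
print(mouse_mileage(path))  # 1000.0
```

Comparing this total (and the click count) before and after a layout change would give a concrete measure of the first test objective.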



Participants

  • The total number of participants tested: 10
  • Key characteristics and capabilities of the user group:

Media Engineering students

1. Number of users that occupy this user type: 138
2. General responsibilities or activities: keep track of their activities, return assignments
3. Computer skills: proficient
4. Domain expertise: media engineering, design; mainly graphics-oriented skills, less programming
5. Goals: easier communication between teachers and students: submission of assignments, downloading teaching materials
6. Pain points: submitting tasks and homework more easily; reaching teaching materials
7. Usage contexts: in school, at home
8. Software ecosystem:
  • browser: Internet Explorer, Firefox, Safari
  • might use Microsoft Office: Word, Excel, PowerPoint
  • programs that can create PDF
9. Collaborators: teachers and possibly fellow students
10. Frequency of use: daily (30%), every 2-5 days (70%)

The participants represented average, mainly English-speaking Media Engineering students.

Context of Product Use in the Test

  • Any known differences between the evaluated context and the expected context of use
  • Tasks
  • Describe the task scenarios used for testing:
    • Explain why these tasks were selected
    • Describe the source of these tasks
    • Include any task data/information given to the participants
    • Completion or performance criteria established for each task

Test Facility

The test was conducted in the Jobs room of the Leppävaara campus.

Participant's Computing Environment

  • Computer configuration, including model, OS version, and settings
  • Browser name and version
  • Relevant plug-in names and versions

Test Administrator Tools (report if relevant for the particular test)

  • Questionnaires were used (see the Summary Appendices).

Experimental Design

  • Independent variables and control variables: not applicable to this test.
  • Describe the measures for which data were recorded (the scale/scope of the recorded data, if relevant for the particular test, e.g., written notes, think-aloud audio recording).


  • Operational definitions of measures (e.g., how it is decided that a task is completed)
  • Policies and procedures for interaction between tester(s) and subjects (e.g., whether the test conductor is allowed to answer the user's questions, provide help, etc.)
  • State which of the following were used: non-disclosure agreements, form completion, warm-ups, pre-task training, and debriefing
  • Specific steps followed to execute the test sessions and record data
  • Number and roles of people who interacted with the participants during the test session
  • Specify if other individuals were present in the test environment
  • State whether participants were paid

Participant General Instructions (here or in Appendix)

  • Instructions given to the participants
  • Task instruction summary
  • Usability Metrics (if used)
    • Metrics for effectiveness
    • Metrics for efficiency
    • Metrics for satisfaction, etc.
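If metrics are reported, the standard definitions can be computed directly from the session log. A minimal sketch in Python, assuming per-task success flags, completion times, and System Usability Scale (SUS) questionnaire responses; all numbers below are illustrative, not results from this test:

```python
# Sketch of the three usability metric families named above.
# Task outcomes, timings, and SUS responses are hypothetical examples.

def completion_rate(outcomes):
    """Effectiveness: share of task attempts completed successfully (0..1)."""
    return sum(outcomes) / len(outcomes)

def time_based_efficiency(outcomes, times):
    """Efficiency: mean goals achieved per second across attempts."""
    return sum(ok / t for ok, t in zip(outcomes, times)) / len(outcomes)

def sus_score(responses):
    """Satisfaction: SUS score (0..100) from ten 1-5 ratings."""
    odd = sum(responses[i] - 1 for i in range(0, 10, 2))   # items 1,3,5,7,9
    even = sum(5 - responses[i] for i in range(1, 10, 2))  # items 2,4,6,8,10
    return (odd + even) * 2.5

outcomes = [1, 1, 0, 1]            # 1 = task completed, 0 = failed
times = [42.0, 65.0, 120.0, 30.0]  # seconds spent per attempt
print(completion_rate(outcomes))                         # 0.75
print(round(time_based_efficiency(outcomes, times), 4))  # 0.0181
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))         # 85.0
```

Completion rate and time-based efficiency come from task observation; the SUS score would come from a post-test questionnaire such as the ones listed in the appendices.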


  • Data Analysis
    • Quantitative data analysis
    • Qualitative data analysis
  • Presentation of the Results
    • From quantitative data analysis
    • From qualitative data analysis (descriptive and clarifying presentation of the results)

Reliability and Validity

Reliability is the question of whether one would get the same results if the test were repeated.
This is hard to achieve in usability tests, but the significance of the findings can be reasoned about.
(Example from expert evaluation: This review was made by one reviewer in order to give quick feedback to the development team. To get more reliable results it would have been desirable to use three, or at least two reviewers, as it is often the case that different reviewers look at different things. We do feel, however, that for the purpose of this report, and the essence of quick feedback, one reviewer has given enough feedback to enhance the usability of the system.)

Validity is the question of whether the usability test measured what it was intended to measure, i.e., whether it provides answers to the right questions. Typical validity problems involve using the wrong users or giving them the wrong tasks.
(Example from expert evaluation: "The reviewer is an experienced usability professional that has evaluated systems for many years. We therefore feel that the method used as well as the tasks used give an appropriate view of how ordinary users would behave in the system.")

Summary Appendices

  • Custom Questionnaires, (if used, e.g., in expert evaluation there is no participants)
  • Participant General Instructions
  • Participant Task Instructions, if tasks were used in the test