Child pages
  • Alexander, Ermias, Yohannes, Mesfin. (All tasks done together)

Task 1


Here we review the WinhaWille student portal.

The users of WinhaWille are students.

  Segmentation

1. Students

  • All Metropolia students across the different campuses
  • International and Finnish national students
  • Students from different campuses, such as Leppävaara, Myyrmäki, Tikkurila, Bulevardi, etc.
  • Students from different departments, such as Media Engineering, Information Technology, Art and Design, Music, Nursing, Social Services, Environment, etc.
  • Different groups: DAP07S, CAP06S, TI08S1E, etc.

2.Teachers

3.Other Support staff 

  Activities:

  • getting student data
  • changing the address
  • checking confirmed participation
  • checking completed courses
  • checking the study guideline
  • checking the courses offered for the department
  • enrolling in the courses that are offered
  • changing the data language
  • checking the enrollment status (per season)

Focal roles for the WinhaWille website

International students, Media Engineering or IT students

For international students and Media Engineering or IT students, the role is to change their data (e.g. their addresses), to enroll in courses, to check the courses they have completed, and to check the credits they have accomplished and know how many are left. It is mandatory to sign in with their username and password.

Media Engineering students

1. # of users that occupy this user type: all students participating in the programme
2. General responsibilities or activities: search courses, monitor performance, enroll in courses, check participated and completed courses
3. Computer skills: general computer skills
4. Domain expertise: media engineering
5. Goals: use of this tool is compulsory. A student can't continue studies without registering as present for the period. A student also cannot get grades for a course without enrolling in it.
6. Pain Points: the main purpose is remote management of the studying process
7. Usage Contexts: Winha can be accessed from the "lobby" area, the library, and from home through the internet.
8. Software Ecosystem: Winha is accessible via all major browsers. To communicate with other students or teachers, the students use phones and email. To get the course guide, the students use the study guide, which is located in Tuubi. (MB: Coupled with what other tools? Word? The material booklet of the study guide? Mobile for text messaging or phoning (discussing) with friends which courses they are taking, what grade they got, etc.?)

Winha is usually accessed from the Tuubi portal, so in a way it is an inside tool of Tuubi.
9. Collaborators: teachers, other students, IT support
10. Frequency of Use: around 1-3 times a week

Persona

Name: John Doe

A role or Job Title: Student

     "How can i register for a course?"

John Doe is a third-year media engineering student. He lives in Lintuvaara. Over his study years he has used Winha to enter his personal data, to check all the courses offered throughout the 4 years, to check the courses offered for a specific period, to check the compulsory and optional courses, and to check the grades he has got from different courses. What frustrates him in Winha is that when browsing through it, it is difficult to find what he wants. For example, when he wants to enroll, he has to click ISP on the Winha page and then Enrollment. But since the term ISP is not self-descriptive, he has to ask other students how to enroll (MB: so he does use e.g. his mobile when using Winha...). Besides, when he clicks the help tab, a user help file appears which is written in Finnish.

MB: Please finish this task ASAP and move to task 2. Task 1 is still missing the usability criteria for the student users of Winha. You need these for task 2. Finish both tasks during this week (week 47), since next week (week 48) we should do the pilot testing of your test plan (task 2).

Usability Criteria

1. Learnability

     - The system should be easy to learn, so that the student can use it as rapidly as possible.

2. Efficiency

    - The system should be efficient, to achieve a high level of productivity.

3. Memorability

    -  The system should be memorable, so that the student does not have to learn it again after being away for a while.

4. Errors

    - The system should have a low error rate, preventing the student from making errors and allowing recovery from errors that are made. There should be no catastrophic errors.

5. Satisfaction

   - The system should be satisfactory to the student; in other words, it should be likeable.

6. User in control

   - The user should be in control of the system.


Task 2


Planning Report:

• The goal of the test: what do you want to achieve?

The goal of the test is to check whether the Winha system meets the user requirements (the usability criteria).

MB: What user requirements do you mean here? The usability criteria or some other requirements?

• Where and when will the test take place?

The test will most likely take place in one of the Usability and Interface Design labs.

• How long is each test session expected to take?

We expect each test session to take 30-45 minutes.

• What computer support will be needed for the test?

A basic computer configuration is preferred for the test; Winha should be accessible even from the most basic configurations.

A network connection is needed.

• What software needs to be ready for the test?

Any basic operating system (Windows XP, Windows 7, Mac OS, or other) and any major browser (Opera, IE, Chrome, Firefox).

• What should the state of the system be at the start of the test?

The user is not logged in.

• What should the system/network load and response times be? (not too fast or too slow)

Response time is not critical, though it should be as fast as possible.

• Who will serve as experimenters for the test?

We, the members of the group.

• Who are the users going to be and how are we going to get hold of them?

We will research Winha from the "student user" point of view

• How many users are needed?

1 - ∞

• What test tasks will the users be asked to perform?

1) Log in to Winha.

2) Check enrollments.

3) Enroll in a new course.

4) Check the current ECTS amount.

5) Check course details.

The task list can be improved during the test.

• What criteria will be used to determine when the users have finished each of the test tasks correctly?

1) Log in to Winha: the user logged in / failed to log in.

2) Check enrollments: the courses the user enrolled in are displayed correctly / incorrectly.

3) Enroll in a new course: enrolled OK / failed to enroll.

4) Check the current ECTS amount: the amount of ECTS is correct / incorrect.

5) Check course details: the course details are correct and meaningful / wrong or useless.

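The five tasks and their pass/fail criteria above can be recorded on a simple per-participant result sheet. A minimal sketch in Python (the task names mirror the list above; the helper names are our own illustration, not part of the plan):

```python
# Illustrative result sheet for one test participant: each planned
# task maps to True (criterion met), False (failed), or None (not run).
TEST_TASKS = [
    "Log in to Winha",
    "Check enrollments",
    "Enroll in a new course",
    "Check current ECTS amount",
    "Check course details",
]

def new_session():
    """Start a blank result sheet for one participant."""
    return {task: None for task in TEST_TASKS}

def record(session, task, passed):
    """Mark a task as passed (True) or failed (False)."""
    if task not in session:
        raise KeyError("Unknown test task: " + task)
    session[task] = passed

# Invented example outcomes, for illustration only:
session = new_session()
record(session, "Log in to Winha", True)     # user logged in
record(session, "Check enrollments", False)  # enrollments displayed incorrectly
```

A sheet like this keeps each outcome strictly binary, which matches the correct/incorrect wording of the criteria above.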

• What user aids will be made available to the test users (manuals, online help, etc)?

Online help provided by Winha itself

• To what extent will the experimenter be allowed to help the users during the test?

0 %

• What data is going to be collected and how will it be analysed?

The result of the test, probably measured as a percentage of completion: 100% = OK, 0% = failed.
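The planned "% of completion" measure could be computed as follows (a sketch only; the data values are invented for illustration and are not real test results):

```python
# Sketch of the planned "% of completion" measure: the share of the
# test tasks a participant finished correctly (100% = OK, 0% = failed).
def completion_rate(results):
    """results: one boolean per test task, True = completed correctly."""
    if not results:
        return 0.0
    return 100.0 * sum(results) / len(results)

# Invented example: a participant completed 4 of the 5 planned tasks.
participant_results = [True, True, False, True, True]
rate = completion_rate(participant_results)  # 80.0
```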

OK, go. When you have done the pilot test, then add which of the tasks and task combinations provide information on which of your usability criteria.

PILOT TEST
Please fill in the attachment. (smile)

Then attach the filled-in file. (Don't forget to rename the file somehow, so the files will not overwrite each other.)

The file is at the very top of the page, right below the Metropolia logo.

Then we'll discuss the results.

After performing the pilot test:

  • When and where the test was executed
    • The test was executed in a classroom (Jobs), 09:40-10:20, 26-11-2009.
  • What you have to correct in your testing plan, e.g., your task descriptions were not clear, your estimation of how long the test takes was not correct, you did not have the right debriefing questions, you did not get the information that you thought you would from the test.
    • In our testing plan there were some questions which were unclear or too open, so we had to correct them.
  • Then correct your plan (state what colour you will use for the corrections so that i find them)
    • The color we used is blue.
    • We made only minor changes to our plan. As for our usability criteria, we stated six of them, but we focused on the Efficiency, Memorability, Errors and Satisfaction of the system. Since the user is a student and uses Winha a lot, learnability is a given, and the user-in-control criterion can be included in the satisfaction criterion.
  • State when and where will the actual testing take place.
    • Next week (week 50).

_______________

Task 5

Reporting Template

Use the following guidelines to assure that required elements are reported. Use the ones that are relevant for your testing purpose, method and setting.

NOTE: this works as guideline. As mentioned above, not everything mentioned has to be described.
NOTE: this template is meant to be used for reporting all kind of tests.

Title Page

Usability Report.

Winha System

Due date of Report:

10.12.2009

Actual submission date:

10.12.2009

Revised version:

10.12.2009 (final)

Product name and version:

Tool name, version (proto or stable version)

Organisers of the test:

Alexander, Ermias, Yohannes, Mesfin.

Date of the test:

10.12.2009

Date of the report:

10.12.2009

Editor:

Alexander

Contact name(s):

alexander.dyukov@metropolia.fi


Executive summary

Provide a brief level overview of the test (including purpose of test)

Name the product: WinhaWille
Purpose/objectives of the test: This test is intended to discover how well WinhaWille fits Metropolia students' needs.
Method 1: Pilot testing
Number and type of participants: 2 students
Tasks (if tasks are used): Taking notes and preparing questionnaires
Method 2: Actual test

Number and type of participants: 10 students

Results in main points
e.g. bullet list (this is needed for being able to get the main results without reading the full report, this is seen important, since the reports serve different purposes and sometimes the need is to get a fast overview)

Language and translation issues:

  • Language issue: even though I log in in English, some data (like the ISP contents) is still in Finnish. When I change the language, it logs me out automatically, and only when I log in again does the language change. Irritating "15 minutes" logout issue: I have to find the link all over again after it.
  • The translation of the interface is not complete. Sometimes I get stuck with Finnish.

Navigation issues (quotations from users):

  • I had to click blindly because almost all link names mean nothing to me.
  • The names are not self-explanatory; I very rarely use the system and always forget under what link I find what.
  • WinhaWille should be changed. It screwed up my ISP, because I was always lost about what to do and when to click "register".
  • Winha is difficult to use and does not provide much information about courses.

Introduction

Full Product Description

  • Formal product name and release or version
    • WinhaWille
  • Describe what parts of the product were evaluated
    • Course registration and implementation
  • The user population for which the product is intended
    • Metropolia students, teachers and the staff
  • Brief description of the environment in which it should be used (this means the context of the use of product/tool, e.g., is it an education product used in primary school, higher education, etc., or maybe research tool used in the field -then what could be field)
    • It is an education product used in higher education.

Test Objectives

  • State the objectives for the test and any areas of specific interest
    • Functions and components with which the user directly and indirectly interacted
      • To test what the students' reaction is toward WinhaWille during the course registration and implementation process
    • Reason for focusing on a product subset
      • It is very important and often used by the students, and we felt that it lacked clarity.

Method

Participants

  • The total number of participants tested
    • For the pilot testing: 2 students; for the actual testing: 10 students.
  • Segmentation of user groups tested, if more than one user group was tested
    • DAP07S (Media Engineering students)
  • Key characteristics and capabilities of user group (this info might have been acquired through the background (pre) questionnaires, thus it can be just referred here, e.g. linked to the description of the results of the background (pre) questionnaires)
    • Students who are in their third year of studies; we made sure they have a good amount of experience with the system.
  • How participants were selected; whether they had the essential characteristics
    • Good experience with the system and easily reachable.
  • Differences between the participant sample and the user population
    • We consider them part of the population, representing the majority; the only thing is that individual differences might matter when comparing the two.

Context of Product Use in the Test

  • Any known differences between the evaluated context and the expected context of use
    • There is no big difference between our expectations and the reality, since we also feel what the respondents feel.
  • Tasks
    • getting student data
    • changing the address
    • checking confirmed participation
    • checking completed courses
    • checking the study guideline
    • checking the courses offered for the department
    • enrolling in the courses that are offered
    • changing the data language
    • checking the enrollment status (per season)
  • Describe the task scenarios for testing
    • Explain why these tasks were selected
      • Because they are the basic and most common tasks to be done in the system.
    • Describe the source of these tasks
      • From our own experience
    • Include any task data/information given to the participants
      • We provided a questionnaire
    • Completion or performance criteria established for each task
      • The respondents were expected to answer YES, NO, or YES WITH DIFFICULTIES; in addition, they were provided space to include their personal opinions.

Test Facility
Describe the setting, and type of space in which the evaluation was conducted

  • The test was performed in the labs

Detail any relevant features or circumstances which could affect the results (e.g. there was a breakdown of the server, which messed up the test for a while and created unnecessary tension; there was unforeseeable noise that disturbed the test, etc.)

  • There was a printing error, which we corrected right away.

Participant's Computing Environment

  • Computer configuration, including model, OS version, settings
    • Mac OS
  • Browser name and version
    • Firefox
  • Relevant plug-in names and versions (the bullets mean stating, e.g., what browsers and computers the users are using in the test. In field trials this is information that is not known by the technical partners. For example, in one of the tests during last spring 2007, one of the users was at home using SSp during the test, so it was asked what she used: e.g. Internet Explorer 6 and Mozilla Firefox 2.0.0.6, a Compaq Presario with Windows XP and an IBM ThinkPad with Windows XP. If all of this is not known, then it is not reported, but it would be good to try to get the info. Plug-ins can refer, for example, to the browser add-ons (in Firefox these are found in the upper Tools menu). Sometimes it is necessary to know whether some plug-ins are on or off, because they might change or prohibit some functions.)
    • Mac OS
    • Firefox

Display Devices (report if relevant, e.g., Paper prototypes are tested or static prototypes are tested on screen)

Not applicable to this test; the system used was the system in real use.

Test Administrator Tools (report if relevant for the particular test)

  • If a questionnaire was used, describe or specify it here (add these to appendix)
    • The questionnaire is attached at the top of this page.
  • Describe any hardware or software used to control the test or to record data (audio, video)
    • We used printed A4 papers.

Experimental Design

  • Define independent variables and control variables: Not applicable to this test
  • Describe the measures for which data were recorded (the scale/scope of the recorded data, if relevant for the particular test, i.e., written notes, think-aloud in audio recording, etc.): analysing the written answers in a qualitative manner and counting some percentages from the answers.

Procedure

  • Operational definitions of measures (e.g., how is it decided that a task is completed): when the questionnaire is filled in.
  • Policies and procedures for interaction between tester(s) and subjects (e.g., is the test conductor allowed to answer questions of the user, provide help, etc.): we just provided the questionnaire and waited for the answers (no help was provided in answering the questions).
  • State used: non-disclosure agreements, form completion, warm-ups, pre-task training, and debriefing
    • none of the above
  • Specific steps followed to execute the test sessions and record data
    • The questionnaire was given out and collected after finishing.
  • Number and roles of people who interacted with the participants during the test session
    • 4 of us
  • Specify if other individuals were present in the test environment
    • No
  • State whether participants were paid
    • No

Participant General Instructions (here or in Appendix)

  • Instructions given to the participants
    • Perform the tasks which were provided in the questionnaire and answer the questions.
  • Task instruction summary
    • The tasks are to answer the given questions.
  • Usability Metrics (if used)
    • Efficiency, Memorability, Errors, Satisfaction
  • Metrics for effectiveness
    • Metrics for efficiency
      • According to their answers
    • Metrics for satisfaction, etc.
      • According to their answers

Results

  • Data Analysis
  • Quantitative data analysis
  • Qualitative data analysis
    • We have done the analysis based on the answers given by the respondents; we hope it reflects how the respondents really feel.
  • Presentation of the Results
    • From quantitative data analysis
      • From the number of answers given by our respondents, we have analysed that the users are not very satisfied, the system is not logically designed, and the language aspect is also a bit clumsy.
    • From qualitative data analysis (descriptive and clarifying presentation of the results)
      • From the users' point of view, the answers given are generally valuable and really acceptable, and they imply that a major task remains to be done for the school.

Reliability and Validity

Reliability is the question of whether one would get the same result if the test were to be repeated.
This is hard to acquire in usability tests, but it can be reasoned how significant the findings are.
(Example from expert evaluation: This review was made by one reviewer in order to give quick feedback to the development team. To get more reliable results it would have been desirable to use three, or at least two reviewers, as it is often the case that different reviewers look at different things. We do feel, however, that for the purpose of this report, and the essence of quick feedback, one reviewer has given enough feedback to enhance the usability of the system.)

  • We do feel that, for the purpose of this report and the essence of quick feedback, the reviewers have given us enough feedback to enhance the usability of the system.

Validity is the question of whether the usability test measured what was thought it would measure, i.e., provide answers to. Typical validity problems involve: using the wrong users, giving them the wrong tasks.
(Example from expert evaluation: "The reviewer is an experienced usability professional that has evaluated systems for many years. We therefore feel that the method used as well as the tasks used give an appropriate view of how ordinary users would behave in the system.")

  • The users involved in the testing are mostly third-year Metropolia students, so we are confident that they have good experience with the system and that the results coming from them are valid and acceptable.

Summary Appendices

  • Custom Questionnaires, (if used, e.g., in expert evaluation there is no participants)
  • Participant General Instructions
  • Participant Task Instructions, if tasks were used in the test
    • We provided a questionnaire to be filled in.