CSC 406 Syllabus
After some dispassionate reflection, I have concluded that my proposal to rewrite the entire system in Python was an immature, capricious, ill-considered, unscholarly, and unsound overreaction to a poorly functioning product. In plain English -- it was a dumb idea.
I've spent a few days experimenting with Python/QT and Java/Swing as possible alternatives to the current Java/GWT implementation. Neither of the alternative platforms is a panacea for the problems with the current implementation.
It is clear now that the approach most likely to produce a usable product is to focus clearly on fixing the problems in the existing code, fully test the entire system, and release as soon as possible for client review. We will discuss this approach on the first day of class.
Per consensus on the first day of class, we agreed to proceed with the current implementation. Accordingly, the following items need to be accomplished before we release to the clients:
During 405, we were building a prototype that was not fully tested, which we could have released as such to understanding clients. Having failed to produce a prototype suitable for release, we have now entered the deployment phase, where any released product must be fully tested.
There are some acceptance test results from 405. Mine in particular itemize specific bugs that must be fixed immediately, so we can have another round of customer-level acceptance testing. See http://scheduler.csc.calpoly.edu/releases/alpha/testing/acceptance/CSC/finals-week-report.html
In particular, all of the MAJOR ERRORs must be fixed.
The immediate point of user-level acceptance testing is to ensure that the functionality we now have in place is adequate to allow real department schedules to be generated. That is, we need to make sure that there are no significant user-level features that make it difficult or impossible to produce department schedules, in particular for Fall 2011 and Winter 2012. This level of human acceptance testing will go on in parallel with the code-level JUnit and Selenium testing.
The following is the administrative process we will follow, as discussed during the 405 final exam period, and refined during the first class meeting of 406:
On Mondays and Fridays, up to 10 minutes at the beginning of class will be devoted to a Scrum-style stand-up discussion of issues of potential relevance to the entire class. If no such issues exist, then no discussion will be necessary.
After the first two days of class, all class and lab time will be devoted to working on the project. Think of this as a job where you come to the office six hours per week to do work.
The following is the tentative team organization we agreed to during the 405 final exam period:
Team | Project Focus | Code Review Pairing |
Salome, Adam | Algorithm | Schedule Views |
Matt, Tyler Y | Schedule Views | Algorithm |
Kaylene, Jonathan | Resource Tables, Back End | Front End |
Evan, Tyler H | Resource Tables, Front End | Back End |
Jake, Carsten | Instructor Web App | Top-Level |
James, Jordan | Top-Level UI & File Storage | Instructor Web App |
The 'Code Review Pairing' column indicates how the teams are paired for the code review process. Teams so paired will review each other's code.
The following is the code review process we tentatively agreed to during the 405 final exam period:
The code to review is any file for which one or more svn diffs exist since the commit for the previous week's review. There will be a script that will perform the diffs and list the files that need to be reviewed in a particular week.
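The review-list script described above might look something like the following sketch. This is an illustration, not the actual script: it assumes the script works from the output of `svn diff --summarize -r LAST_REVIEW:HEAD`, and the file paths shown are made up for the example.

```python
# Hypothetical sketch of the weekly review-list script. It parses the
# output of `svn diff --summarize -r LAST_REVIEW:HEAD`; the sample
# output below is captured inline so the sketch is self-contained.
sample = """\
M       src/scheduler/Algorithm.java
A       src/scheduler/view/ScheduleView.java
M       src/scheduler/Algorithm.java
D       test/old/LegacyTest.java
"""

def files_to_review(summarize_output):
    """Return sorted, de-duplicated added/modified files (deletions skipped)."""
    files = set()
    for line in summarize_output.splitlines():
        status, _, path = line.partition(" ")
        path = path.strip()
        if status in ("M", "A") and path:
            files.add(path)
    return sorted(files)

print(files_to_review(sample))
```

A file that was modified in several commits since the last review appears only once in the list, and deleted files are excluded since there is nothing left to review.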
The following are the code review criteria we tentatively agreed to during the 405 final exam period, and refined during the first week of 406:
The following acceptance test procedure was determined during Wednesday of Week 2 of class:
The following matters will be discussed and resolved in the first two class meetings of Spring quarter:
Based on discussions during the second class meeting, the following is the large-grain confirmed project schedule:
Week | Task |
1 | Wednesday: Project planning is complete |
2 | Monday: Some 405 bugs fixed, minimum 25% tested |
3 | Monday: All 405 bugs fixed, minimum 50% tested |
4 | Monday: Minimum 75% tested |
5 | Monday: 100% tested; Wednesday: Contacted interested clients; Friday: Released to clients |
6 | Add enhancements, respond to client feedback |
7 | Add enhancements, respond to client feedback |
8 | Second client release, work on documentation |
9 | Respond to client feedback, work on documentation |
10 | Work on documentation, final client release |
The percentage measurements for testing are based on code coverage. That is, "minimum 25% tested" means "at least 25% code coverage, as measured by the Cobertura coverage tool".
Assume a 100-point scale for scoring, with the usual breakdown of 90% and above being an "A" grade, 80-89% a "B", 70-79% a "C", 60-69% a "D", and less than 60% an "F". Everyone in the class starts out with a 100% score. Points are permanently deducted for the following class activities:
Task | Deduction |
Class Attendance | -4 points for each unexcused absence; coming to class more than 10 minutes late is considered an absence |
Commit of Test-Failing Code | -10 points for each instance; although we will have a script that will help avoid this problem, it will likely still be possible to commit failing code, which is why there is a significant penalty for it |
Assigned Critical Task | -4 points for each uncompleted critical task; these tasks will be defined weekly by Kaylene and posted to Jira, as was done last quarter |
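The deduction scheme above can be sketched as a small calculation. The counts in the example are hypothetical, chosen only to show the arithmetic:

```python
# Score calculation under the syllabus's deduction scheme.
def letter_grade(score):
    """Map a 100-point score to the usual letter breakdown."""
    if score >= 90: return "A"
    if score >= 80: return "B"
    if score >= 70: return "C"
    if score >= 60: return "D"
    return "F"

def final_score(absences=0, failing_commits=0, missed_critical_tasks=0):
    """Start at 100 and apply the per-item deductions from the table."""
    score = 100
    score -= 4 * absences               # -4 per unexcused absence
    score -= 10 * failing_commits       # -10 per commit of test-failing code
    score -= 4 * missed_critical_tasks  # -4 per uncompleted critical task
    return max(score, 0)

# Hypothetical example: one unexcused absence plus one failing commit.
s = final_score(absences=1, failing_commits=1)
print(s, letter_grade(s))  # 86 B
```

Note that a single commit of test-failing code costs more than two unexcused absences, which reflects how seriously that rule is taken.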