Ok, I hear through the grapevine that the LiveSupport guys want to
convert to Subversion after they reach 1.0, which means after
SummerCamp. That will also be a good time for Campsite to convert, so
we will delay until then (or maybe do it during SummerCamp).
Sidenote:
I also re-stumbled upon Trac (http://projects.edgewall.com/trac/) for
documentation. It has been mentioned before, but suddenly is much more
appealing if we move to Subversion. The only problem (as always) is
that we will have to help them make it multi-lingual. Yes, this is a
bit of work, but I am really impressed with this tool. It is made to
integrate into a software development project, something that none of
the other tools we have looked at do. I especially like the timeline
feature.
- Paul
------------------------------------------
Posted to Phorum via PhorumMail
Paul Baranowski wrote:
> Ok, I hear through the grapevine that the LiveSupport guys want to
> convert to Subversion after they reach 1.0, which means after
Yes, we want to be converts.
> SummerCamp. That will also be a good time for Campsite to convert, so
> we will delay until then (or maybe do it during SummerCamp).
>
> Sidenote:
> I also re-stumbled upon Trac (http://projects.edgewall.com/trac/) for
> documentation. It has been mentioned before, but suddenly is much more
> appealing if we move to Subversion. The only problem (as always) is
> that we will have to help them make it multi-lingual. Yes, this is a
> bit of work, but I am really impressed with this tool. It is made to
> integrate into a software development project, something that none of
> the other tools we have looked at do. I especially like the timeline
> feature.
I also have some ideas on how to improve the development process. Right
now I don't have time to write them down in detail, but the main
inspirations come from the Capability Maturity Model Integration (CMMI),
developed at Carnegie Mellon University, from eXtreme Programming and
from the Rational Unified Process.
Basically the idea would be something like the following:
- a project contains a list of requirements.
- at first these are 'business requirements' (as opposed to 'technical
requirements'), and are quite easily formulated in a human language.
usually there are no more than a few of these at project startup
- for each requirement, criteria can be defined by which the requirement
is considered met. as soon as a requirement is drawn up, acceptance
criteria are also written down beside it. just by testing the project
results against the acceptance criteria it is possible to say which
requirements have been met, and which have not
- during the analysis phase, each requirement gets more and more
detailed. basically there will be more business requirements coming in
as the core requirements are elaborated on, but there is a clear
dependency relationship between them
- for all the new requirements, acceptance criteria are drawn up as well
- at some level of detail, technical requirements come in. these can be
met with well-defined tests. as soon as a technical requirement is drawn
up, appropriate tests are also defined. these tests are preferably
automated, unattended tests (a small sketch of such a test follows
right after this list)
- of course, for every technical requirement, there is a corresponding
business requirement.
- as a consequence, for each core business requirement, there's a well
defined, finite set of (preferably automated) test cases. if these test
cases pass, then the corresponding core business requirement is met.
thus, the question of 'are we ready?' can be answered by this mechanism
- usually, coding would start _after_ all the hassle above. thus, the
task of the coder will be to make sure the code passes the _already
available_ test cases.
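
To make this a bit more concrete, here is a minimal sketch of what such
an automated, unattended test case could look like. The requirement
number, the validate_title() function and the 256-character limit are
all made up for illustration; they are not taken from Campsite or
LiveSupport code.

    # req_0042_test.py - hypothetical test for technical requirement #42:
    # "an article title longer than 256 characters must be rejected"
    import unittest

    def validate_title(title):
        # stand-in for the real implementation under test
        return len(title) <= 256

    class Requirement0042Test(unittest.TestCase):
        def test_overlong_title_is_rejected(self):
            # acceptance criterion: a 257-character title is refused
            self.assertFalse(validate_title("x" * 257))

        def test_normal_title_is_accepted(self):
            # acceptance criterion: an ordinary title is accepted
            self.assertTrue(validate_title("Hello world"))

    if __name__ == '__main__':
        unittest.main()   # runs unattended; exit code reflects pass/fail

If every technical requirement gets a small file like this, the 'are we
ready?' question really does reduce to running the whole suite.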
on configuration management and versioning:
- the course of change in the repository is something like:
  - the need for a change arises, and is described
  - someone is assigned to make the change
  - the change is made
  - the change is checked to see whether it meets the original need
- when meeting requirements, the need for 'change' is the requirement
itself.
- with bug reports, the need for change is the bug. ideally, each
reproducible bug would have a corresponding test case that fails while
the bug is still there, and passes when the bug is fixed.
- with each change in the version control system, the corresponding
reason for the change is also noted (i.e. the requirement ID or bug #).
by this, for any requirement or bug, one can check what changes have
been made for it (a sketch of how this could be enforced follows right
after this list)
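
As an illustration of the 'note the reason with every change' rule:
Subversion can run a pre-commit hook, and such a hook could simply
refuse commits whose log message does not mention a requirement or bug
ID. The "#123" / "req-123" convention and the whole script below are my
assumptions to show the idea, not something we already have set up.

    #!/usr/bin/env python
    # pre-commit (sketch) - reject commits without a bug/requirement reference
    import re
    import subprocess
    import sys

    def main():
        repo, txn = sys.argv[1], sys.argv[2]
        # ask Subversion for the log message of the pending transaction
        log = subprocess.run(["svnlook", "log", "-t", txn, repo],
                             capture_output=True, text=True,
                             check=True).stdout
        if re.search(r"(#\d+|req-\d+)", log):
            return 0    # a reference was found, allow the commit
        sys.stderr.write("Commit rejected: please mention the bug (#123) "
                         "or requirement (req-123) this change is for.\n")
        return 1        # non-zero exit makes Subversion block the commit

    if __name__ == "__main__":
        sys.exit(main())

A hook like this keeps the convention honest without adding any new tool
to the setup.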
on automated testing environments:
- the development environment usually includes an automated test
environment, which can execute all the automated test cases, and it will
execute them on a regular basis. running all the tests regularly
ensures that the quality of the code remains high.
- each change in the codebase may change the result of the tests. some
changes do not affect the test outcomes, some make previously failing
tests pass, and some make previously passing tests fail.
- if a change makes a previously failing test pass, and that is the last
failing test for the original reason of change, then that reason is
fulfilled (i.e. the bug is fixed, the requirement is met). this fact can
actually be deduced automatically
- if a change makes a previously passing test fail, that change triggers
a bug report, with details on which test case it breaks. the bug report
can be automatically assigned to the person who made the change in
question (a rough sketch of such a regularly-run test driver follows
right after this list)
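
To show how little machinery the regular, unattended test run needs to
get started, here is a rough sketch of a driver that could be run from
cron: it runs the suite, compares the outcome with the previous run, and
for newly failing tests names the author of the latest commit. The file
name, the *_test.py convention and the whole layout are assumptions of
mine, not an existing setup.

    #!/usr/bin/env python
    # nightly_tests.py (sketch) - run all *_test.py files, compare with the
    # previous run, report regressions and the last committer.
    import json
    import os
    import subprocess
    import unittest

    RESULTS_FILE = "last_results.json"   # where the previous outcome is kept

    def run_suite():
        """Run every *_test.py below the current dir; return {id: passed}."""
        suite = unittest.TestLoader().discover(".", pattern="*_test.py")
        all_ids = [t.id() for t in iter_tests(suite)]  # collect before running
        result = unittest.TestResult()
        suite.run(result)
        failed = {t.id() for t, _tb in result.failures + result.errors}
        return {tid: tid not in failed for tid in all_ids}

    def iter_tests(suite):
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                yield from iter_tests(item)
            else:
                yield item

    def last_author():
        """Ask Subversion who made the latest change in this working copy."""
        info = subprocess.run(["svn", "info"],
                              capture_output=True, text=True).stdout
        for line in info.splitlines():
            if line.startswith("Last Changed Author:"):
                return line.split(":", 1)[1].strip()
        return "unknown"

    def main():
        current = run_suite()
        previous = {}
        if os.path.exists(RESULTS_FILE):
            with open(RESULTS_FILE) as f:
                previous = json.load(f)
        regressions = [t for t, ok in current.items()
                       if not ok and previous.get(t, False)]
        if regressions:
            # a real setup would open bug reports; here we only print them
            print("Newly failing tests (candidate assignee: %s):"
                  % last_author())
            for t in regressions:
                print("  " + t)
        with open(RESULTS_FILE, "w") as f:
            json.dump(current, f)

    if __name__ == "__main__":
        main()

The 'assignee' here is only a guess (the last committer), but it shows
that the automatic assignment does not need much more than the version
control log next to the test results.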
now, most of the above can be achieved by integrating some of the tools
we already use - version control, issue tracking, etc. while this is not
totally easy, we should strive in this direction; one more tiny example
of such glue is below. (and also, there is no reason to build a
completely automated system - humans are very good at doing complicated
things themselves.)
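
For example, once log messages carry the bug/requirement IDs, the
question 'what changes were made for bug #123?' can already be answered
with nothing but the Subversion log. A throwaway sketch (the ID
convention and the use of the current working copy are assumptions):

    #!/usr/bin/env python
    # changes_for.py (sketch) - list revisions whose log message mentions
    # a given bug/requirement ID, e.g.:  python changes_for.py "#123"
    import subprocess
    import sys

    def changes_for(issue_id, target="."):
        # full "svn log" of the working copy (or a repository URL)
        log = subprocess.run(["svn", "log", target],
                             capture_output=True, text=True,
                             check=True).stdout
        hits = []
        for block in log.split("-" * 72):  # svn separates entries with dashes
            if issue_id in block:
                # first line of an entry is "r123 | author | date | ..."
                hits.append(block.strip().splitlines()[0])
        return hits

    if __name__ == "__main__":
        for line in changes_for(sys.argv[1]):
            print(line)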
Akos
------------------------------------------
Posted to Phorum via PhorumMail