The first talk I went to at the conference was "Measurement and metrics for test managers." This is a subject near and dear to my heart, because when you're a manager you get asked all the time: how long will this take? How are things going? How much is left? How did the project go? Test status reports and test estimation have been my lifeblood in QA, and I find them fascinating.
The talk hit two points I found very useful. One was how to measure code quality, a puzzle I've wrestled with for ages. I have never worked in an organization where I could get lines-of-code numbers, so how was I to say how good the code was when I didn't know what I was measuring against? All I could do was report raw bug counts. But I got the idea of dividing the number of bugs by the hours of development, and now I can see how to get numbers that actually relate back to the size of the project in a way I consider valid.
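To make the idea concrete, here's a tiny sketch of the bugs-per-development-hour metric. The function name and all the project numbers are mine, made up purely for illustration:

```python
def defect_density(bugs_found: int, dev_hours: float) -> float:
    """Bugs per hour of development: a size-normalized quality metric,
    so a big project's bug count can be compared fairly against a small one's."""
    return bugs_found / dev_hours

# Two hypothetical projects of very different sizes:
small = defect_density(bugs_found=12, dev_hours=80)
large = defect_density(bugs_found=45, dev_hours=400)
print(f"small project: {small:.4f} bugs/hr, large project: {large:.4f} bugs/hr")
```

The large project here has almost four times as many bugs, but normalized by development hours it actually comes out cleaner, which is exactly the kind of comparison a raw bug count can't give you.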
The other idea was measuring how good the testing was, which you do after the project ships by counting how many bugs turn up after release. This requires access to the external bug data. With it you can calculate the percentage of bugs QA caught: bugs found by QA / (bugs found by QA + bugs found after release). The industry standard is 85%, but after just one project I'll have an estimate for MY company, and once I have that percentage I can predict how many bugs will escape QA on the next project.
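Here's a quick sketch of that calculation, plus the prediction step it enables. The function names and the bug counts are my own hypothetical examples, not anything from the talk:

```python
def defect_detection_percentage(found_by_qa: int, found_after_release: int) -> float:
    """Share (as a percentage) of all known bugs that QA caught before release."""
    return 100 * found_by_qa / (found_by_qa + found_after_release)

def predicted_escapes(found_by_qa: int, ddp: float) -> float:
    """Given QA's bug count on a new project and a historical detection
    rate (as a fraction), estimate how many bugs will surface after release:
    total = found / ddp, so escapes = found / ddp - found."""
    return found_by_qa * (1 / ddp - 1)

# Hypothetical past project: QA found 170 bugs, 30 more were found after release.
ddp = defect_detection_percentage(170, 30)
print(f"DDP: {ddp:.1f}%")  # matches the 85% industry figure in this example

# Next project: QA has found 170 bugs so far; at an 85% rate,
# roughly 30 more can be expected to escape into the field.
print(f"expected escapes: {predicted_escapes(170, 0.85):.1f}")
```

The nice part is the feedback loop: each finished project refines the company's own percentage, which makes the next project's escape prediction less of a guess.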
I am very excited about both of these things.
Lunch was fine, and some nice people invited me to eat with them, so it wasn't too lonely. The jet lag is hurting me a little and I need some tea, but for now I'm going to run upstairs and get right to "System testing with an attitude."