Current summaries:

For each release, a quick summary of results for each configuration, covering the last several days for which any results were saved. (I.e., one table for the current release, one table for 1.3.)

Per-configuration page: one table, one row per date, one column per release tested.

Per-test-run page: more detailed results for one configuration on one day.

Current configurations:

Somewhat haphazardly set up. Some configurations differ only in shared vs. static libraries, and now in thread support enabled vs. disabled, or in building in the source tree vs. using srcdir.

Desirable summaries:

Probably similar to the current ones, maybe with options for breaking down or filtering by build options (thread support enabled, static libraries).

New arrangements of the data?

Complain more loudly if the dejagnu tests aren't run on UNIX.

Run some tests on Windows.

Desirable configurations:

More combinations, perhaps not each tested every day. E.g., we should test the rules for building AIX shared libraries with both AIX and GNU make, but we don't need to exercise the same compiler and test suite twice each day if all other parameters are the same.

- Results by machine.
- Results by platform (there may be multiple machines per platform).
- Multiple Linux vendors (chroot environments are probably okay, but should be noted as such; UML (User-Mode Linux) may be better).
- Windows (KfW), Mac (KfM or a plain-UNIX build).

Test run data:

- test results
- log files from each pass
- CPU and/or real time for each pass?
- disk space used for the test run
- free disk space (this would be for a collection of configurations using the same file system)
- disk space used for the install tree?

Configuration data:

- tools used (OS vs. GNU, version numbers, options), e.g.:
    gcc 3.4.0 + system linker
    gcc 3.4.0 + GNU ld 1.2.3.4.5
    Apple-modified gcc 3.x
    cc (SunSoft v7) -xarch=v9
    ...
    GNU make
- configure options (krb4, thread support, whatever)
- system versions of support libraries used (et, ss, db)
- OS version (patches?)
- environment variables
- pathnames of tools (Q: Should we set up test runs with $PATH containing one directory where we put symlinks to all the tools we want used, and no others available? See the first sketch at the end of these notes.)

Current reporting scheme:

I try to remember to run a script each day which runs ssh/rsh out to each machine to collect a set of output files and copy them to the server; after the ssh/rsh jobs all finish, another script walks the tree and rebuilds all the web pages from scratch. (A rough sketch of the collection loop appears at the end of these notes.)

Desired reporting scheme:

Driven by the client. Permit multiple reports from a single test run (e.g., "now I'm about to start running tests; here's an update on the results so far") while the run is still going.

The client should not have access to do anything "interesting" on the server besides storing its data. (A simple approach: assign each client an ssh key, and in the authorized_keys file indicate that the client may run only one program, the result-storing script, with an argument indicating which client it is; see the sketch at the end of these notes. A more complicated approach: client certificates and a web-server form for updating the database. Question: should one client identity, in whatever form, be allowed to update multiple configurations, or should we use one identity per configuration?)

Optional: squish build trees down to .tar.gz files after finishing, to make space for other builds.

Optional: do multiple builds in parallel, as space/memory/CPU allow, but keep the dejagnu tests serial unless they get fixed to use different port numbers for each run. (Both optional items are sketched at the end of these notes.)
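
Sketches:

The sketches below are illustrative only; every pathname, host name, and script name in them (tool directories, store-results, the client names) is hypothetical.

First, a minimal sketch of the symlink-directory idea for controlling tool pathnames: populate one directory with links to exactly the tools a configuration should see, and make that directory the entire $PATH for the run.

    #!/bin/sh
    # Build a tool directory for one configuration; all paths are examples.
    tooldir=/var/testbuild/tools/gcc340-gnuld
    rm -rf "$tooldir"
    mkdir -p "$tooldir"
    ln -s /usr/local/gcc-3.4.0/bin/gcc "$tooldir/gcc"
    ln -s /usr/local/binutils/bin/ld "$tooldir/ld"
    ln -s /usr/local/bin/make "$tooldir/make"
    # sh, sed, awk, and whatever else configure and make need would
    # also have to be linked in here.
    PATH=$tooldir
    export PATH

One wrinkle: configure scripts assume a fairly rich set of standard utilities, so the directory would need links for all of them; that is part of why this is still an open question.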
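A rough sketch of the current collection scheme described above; the host list and directory layout are made up:

    #!/bin/sh
    # Pull each machine's output files to the server, in parallel.
    for host in aix1 sol9 deb-linux; do
        mkdir -p "/var/testresults/$host"
        ssh "$host" tar cf - testresults | \
            (cd "/var/testresults/$host" && tar xf -) &
    done
    wait    # let all the ssh jobs finish first
    # A second script then walks /var/testresults and regenerates
    # all the web pages from scratch.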
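A minimal sketch of the "simple approach" to locking down the reporting client, using OpenSSH's command= option in authorized_keys, which forces a single program to run regardless of what command the client requests. The script name, client name, and directory layout are assumptions:

    # One entry per client key in ~testdata/.ssh/authorized_keys on the
    # server (must be a single line in the real file; wrapped here for
    # readability):
    command="/usr/local/bin/store-results aix-gcc-shared",no-port-forwarding,
      no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza... builder@aix-box

    #!/bin/sh
    # store-results: file one report, read from stdin, under the directory
    # for the client named in $1 (supplied by the authorized_keys entry,
    # not by the client).  Nothing from the client is used as a pathname,
    # so a key holder can only add reports for its own configuration.
    client=$1
    dir=/var/testresults/$client/`date +%Y%m%d`
    umask 022
    mkdir -p "$dir" || exit 1
    cat > "$dir/report.$$"

Since the forced command's argument names the client, the one-identity-per-configuration question above amounts to deciding whether that argument identifies a configuration or a machine.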
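The build-tree squishing is nearly a one-liner; the directory names are examples:

    # After a run finishes, compress the build tree and reclaim the space,
    # removing the tree only if the archive was written successfully.
    cd /var/testbuild && \
        tar cf - krb5-aix-shared | gzip -9 > krb5-aix-shared.tar.gz && \
        rm -rf krb5-aix-shared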
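For running builds in parallel while keeping the dejagnu tests serial, one portable trick is a crude mkdir-based lock around "make check": mkdir is atomic and fails if the directory already exists, so only one run at a time can hold it. The lock pathname is an example:

    #!/bin/sh
    # Run from each build's driver script, after the build itself
    # (which can proceed in parallel) has finished.
    lock=/var/testbuild/dejagnu.lock
    until mkdir "$lock" 2>/dev/null; do
        sleep 30    # another run holds the fixed port numbers; wait
    done
    trap 'rmdir "$lock"' 0    # release the lock on exit, even on failure
    make check

If the dejagnu tests are ever fixed to pick distinct port numbers per run, the lock can simply be dropped.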