Unix/Linux System

I do a huge amount of testing in Linux and Unix environments, mostly through tools available in the command shell. There is a wealth of tools and tricks to help testers understand what is happening with both the software and the machine while testing.

- drop_caches Not strictly a "tool", but this addition to the Linux 2.6 kernel, giving the ability to clear the Linux buffer cache by writing to the /proc/sys/vm/drop_caches virtual file, is essential to ensuring your performance testing is not giving you misleading results due to loading files from memory, a subject I wrote about in this post. To use, simply echo a value out to the file:

echo $val > /proc/sys/vm/drop_caches

Where a $val value of 1 flushes the page cache, 2 flushes dentries and inodes, and 3 flushes all of the above.

- gdb Allows either attaching to a process to analyse the call stack of the processing threads, or attaching to a core dump from a system crash. I only touch the surface using gdb, mainly getting thread backtraces - my programmer colleagues can examine memory addresses and values of crash files to identify the exact cause of application crashes. The most useful options are:

bt - gets a backtrace from the active thread at the point of attaching/crashing
thread apply all bt - gets a backtrace from all active threads

I use a here document to output to a file e.g.

adam>gdb MyProgram core.1234 > file.txt <<EOF
thread apply all bt
EOF

- lsof lsof provides a list of open files on the system, with the option of narrowing to files for a specific user or process using the appropriate parameters. In Linux/Unix environments a "file" actually includes not only disk files but also pipes and network sockets.

lsof -u $UID

A list of open files is sometimes not particularly useful in itself, unless you know what you are expecting to see. When I use lsof it is generally in the context of what James Bach described to me as "consistency relationships", i.e. I know what the output looks like for comparable states or processes and I can use this knowledge as the basis for deciding whether or not the current output constitutes a problem. I monitor and check against lsof counts in automated testing to check for file and socket leaks in processes. As with other such tools, a diff comparison is an excellent way of analysing the output across a test to check for problems.

- netstat Lists network connections. Despite the availability of the tool on Windows I've included it in the Unix/Linux section as I don't tend to use the Windows version, preferring the tools I've listed there. I use netstat occasionally when examining client-server connections, e.g. when testing ODBC/JDBC driver connectivity to ensure there are no connection leaks. I tend to stick to the following command

netstat -tulnap

as it is easy to remember and allows me to grep for the addresses and ports that I am interested in. The specific parameters are:-

t=TCP/IP connections; u=UDP connections; l=listening ports; n=show ports as numeric; a=show all; p=show owning program and PIDs.

- $RANDOM While not exactly a tool, the $RANDOM variable can be very useful for quickly randomising a sequence of tests. Take a file of inputs, create a list of $RANDOM values of the same length and combine the two using paste. Sort the result and you have a randomised list.

adam>len=`cat file.txt | wc -l`
adam>for i in `seq 1 $len` ; do echo $RANDOM >> rfile.txt ; done
adam>paste -d"|" rfile.txt file.txt | sort -t"|" -n

- ps Incredibly useful little tool that allows you to examine details of the processes running on the machine. Using various output flags it is possible to examine process id, session id, simple memory and cpu usage and other characteristics of the server processes. I tend to use this in a scripting loop to gather information over time and analyse the behaviour of my application's processes over the course of an event, e.g.

while true ; do
  ps -u$UID -o pid,comm,rss,vsz,%cpu,args >> pslog
  sleep 10
done

will give process id, command name, rss and vsz memory, cpu utilisation and command line arguments to a file every 10 seconds for every process owned by the current user.

NB The memory information output by ps is somewhat unreliable in that each process listing includes shared memory. For more accurate memory measurements I use pmap (see below).

- pmap Pmap provides a memory map of an active process on Linux. The summary lines are useful to parse and gather from a graphing perspective. As mentioned in the ps entry, the output of ps is limited as it does not distinguish between shared and non-shared memory. For more accurate memory measurements I tend to take the summary line from pmap and select shared/non-shared memory as appropriate.

The richer "maps" are quite inaccessible when observed from a static viewpoint; however, by comparing maps between processes, or between different points in time on the same process (e.g. using a diff tool), issues such as memory leaks can be identified.

I rarely use pmap interactively, but when I do need it it is invaluable, for example when trying to pin down increases in memory usage or apparent memory leaks. I've also integrated pmap into my automated regression testing, and in this respect I use it on a daily basis.

- strace Allows tracing of the system calls made by a process ID. This can be very useful to identify if a process is looping on file access, or simply using system files inefficiently. In very simple terms, to see all calls:

strace -p <processid>

To see only file open calls:

strace -e open -p <processid>

- top A useful interactive Linux tool for monitoring current activity. Supports a wealth of flags and options for filtering and sorting activity based on key criteria. See also topas on AIX and prstat on Solaris. I use top for interactive monitoring when I want to manually monitor behaviour on a machine running tests.
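The lsof count checks I mention above can be automated with a simple loop. The sketch below is illustrative only: it samples the current shell ($$) rather than a real server process, and the "test steps" are a placeholder.

```shell
# Sketch: sample open-file counts for a process before and after a test
# run and report the drift. Monitoring the current shell is purely for
# illustration; in practice, point this at the process under test.
pid=$$
before=`lsof -p $pid 2>/dev/null | wc -l`
# ... the actual test steps would run here ...
after=`lsof -p $pid 2>/dev/null | wc -l`
echo "open files: before=$before after=$after"
# A count that grows steadily across repeated iterations suggests a
# file or socket leak.
```

In automated runs I would log these counts per iteration and diff or graph them afterwards, rather than eyeballing a single sample.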
Windows environment

Windows has a number of built-in tools that are great for testing, ranging from basics like the Task Manager and Event Viewer to more advanced options such as the PerfMon data collection tool. In addition I use the following tools to debug and diagnose behaviour in a Windows environment:-
- BareTail This is a nice, albeit limited, application on Windows. It essentially mimics the "tail -f" capability in Linux of following a file as it is being generated, such as a log file. It neatly allows specific lines to be highlighted based on their content, making it easier to spot entries of interest. I don't use it extensively on its own, but as I wrote in a post which I'll update here, it combines nicely with RapidReporter to allow note taking with highlighted items from the session. Many thanks to Joe Strazzere for pointing this tool out to me, via this post.
- cports AKA CurrPorts: A very nifty little tool which shows all of the port activity on the Windows box, including the process ID, protocol, local and remote IP addresses and ports, as well as the connection state.
- Cygwin Cygwin provides a bash interpreter for Windows machines. For anyone who is used to manipulating files and scripting tasks in a Linux environment, Cygwin provides a shell environment in Windows where you can utilise many of the basic bash commands and command line tools. Since I find Windows batch scripting frustratingly limited and I don't know PowerShell, this is essential for me.
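To give a flavour of why this matters to me, here is the kind of pipeline that bash (and hence Cygwin) makes trivial and batch scripting makes painful. The log file and its contents are invented for illustration:

```shell
# Count error lines in a log and pull out the distinct error codes -
# one pipeline in bash. The log content here is made up.
printf 'INFO start\nERROR 104 disk full\nERROR 104 disk full\nINFO done\n' > app.log
errors=`grep -c '^ERROR' app.log`
codes=`grep '^ERROR' app.log | awk '{print $2}' | sort -u`
echo "errors=$errors codes=$codes"
```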
- Depends Does a similar job to ldd on Linux. This great little tool does a "dependency walk" of an application or module and tells you which DLLs that module depends upon.
- Process Explorer You know when Windows annoyingly states that you can't delete a file because another process has it open? ProcExp lets you find out which process that is. That in itself justifies it as an essential tool. ProcExp essentially provides a much richer version of the Windows Task Manager: in addition to information on processes and their memory usage, you get useful details such as registry keys and file handles.
- ProcMon ProcMon is the successor to the excellent FileMon. It allows you to monitor file access on a Windows machine and filter for specific programs or files.
Database

I work extensively with database and data access technologies. The following are simple but very useful tools for testing against a generic database system (NB my remote data access tends to be limited to a querying capacity rather than database admin, which is why you won't see tools such as Toad in the list):-
- Microsoft ODBC Test This is a great ODBC testing tool as it allows access at the API level to test ODBC commands. It does require an understanding of ODBC in order to use it; however, using the tool itself is an excellent way to build that understanding. Comes in 32-bit and 64-bit ANSI and Unicode versions.
- ExecuteQuery The best open source JDBC tool that I have seen. Primarily a JDBC data querying tool, though ODBC access is also supported, albeit via an ODBC/JDBC bridge connection. Simple and relatively robust, this tool is the first place I look when checking the validity of a JDBC connection (the second place is debugging directly with Java code and Eclipse).
Analysis and Documentation
There are many ways to record and analyse the information that we collect through our testing activities. I work in a Windows desktop environment, so most of my analysis, and the tools I use for it, are Windows-based. Here are a select few tools that I use regularly.
- DiffMerge Excellent tool for comparing files to look for anomalies and smells. See this post for a detailed example of how I use DiffMerge. I got a lot of feedback after that post suggesting WinDiff as a good alternative.
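DiffMerge itself is a GUI tool, but the underlying technique is the same as a command-line diff between the outputs of two test runs. This sketch uses plain diff as a stand-in, with invented file contents:

```shell
# Compare the output of two test runs; any hunks in the diff are
# candidate anomalies to investigate. File contents are illustrative.
printf 'row1\nrow2\nrow3\n' > run1.out
printf 'row1\nrowX\nrow3\n' > run2.out
changed=`diff run1.out run2.out | grep -c '^[<>]'`
echo "changed lines: $changed"
```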
- Excel Simply the best. If I could use only one application in my testing efforts, it would be Excel. The combination of recording and analytical capabilities makes this an essential part of my testing activities. Many vendors out there scoff at the use of Excel in documenting testing, arguing for a complete solution. I've not yet seen another tool with the same combination of flexibility, simplicity and power in terms of reporting and analysis. See this post for one example of the ways I use Excel for reporting.
- Hi-Editor I use this sparingly, but it is a very useful little editor if you need to open BIG files. When Notepad++ gives up the ghost, I get this editor out. At the time of writing I am using it to examine debug tracing files over 500MB in size.
- HxD I use this tool primarily as a hex editor. It allows the examination, editing and comparison of files viewed in hexadecimal form. This is particularly useful to me when I'm testing extended or special characters and I want to avoid being fooled by the application of a character set, by examining the raw data directly. As with Hi-Editor, HxD is designed for large files and seems to cope with massive files with no problem at all. HxD also has the ability to view and edit disk images and process memory directly, although I don't use it in this capacity.
- Notepad++ Awesome editor. I wrote a post a while ago on how I discovered some great features in notepad++ to help with my testing. The ability to search across all open files, explore and search directories and apply standard or custom syntax highlighting makes this tool incredibly useful. The clean and seamless ability to work with files in different formats from different operating systems (UTF-8, ASCII, with and without Windows BOM, CRLF or LF line endings) has resulted in me recommending this editor to any customers who are getting confused over how to view their international data.
- Rapid Reporter Written by a fantastic chap called Shmuel Gershon, Rapid Reporter is an exploratory note-taking tool. I personally don't tend to use it for my exploratory testing, preferring Excel; however, I do make use of this tool when performing static reviews, notably static code analysis, or even, in a non-testing capacity, when reviewing multiple CVs when recruiting.
- Xmind I'm not one of those testers who believes that everything should be done as a mind map. I use electronic mind maps sparingly but surgically, for those specific tasks where I need an extended breakdown of an area. This usually applies at a higher level than actual coal-face interactive testing, such as when planning an approach to a piece of work, like defining a test approach for a new feature or preparing a presentation.
Xmind is the best I've found for this purpose, primarily through its ease of use. Want a new subordinate node? Hit TAB. Want a new peer node? Hit Enter. I find that mouse and right-click navigation lets down many otherwise good visual tools, and Xmind has overcome this problem very well through thoughtful keyboard accessibility. I use the excellent free option, but there is also a paid version that allows advanced features and the upload and sharing of mind maps.
I've listed here some of the tools that are most useful to me in my day-to-day testing activities. I've not listed them all, and certainly not done some of them justice, but hopefully sharing this information will help others looking for useful tools to solve their testing problems. In my team, when we find a useful tool we try to run a session demonstrating it to the rest of the team, encouraging a culture of shared learning. Hopefully this page will help to contribute to this endeavour in the wider testing community.