<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/tools/testing/ktest, branch v3.6-rc5</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/tools/testing/ktest?h=v3.6-rc5</id>
<link rel='self' href='https://git.amat.us/linux/atom/tools/testing/ktest?h=v3.6-rc5'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2012-07-30T18:37:01Z</updated>
<entry>
<title>ktest: Allow perl regex expressions in conditional statements</title>
<updated>2012-07-30T18:37:01Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-30T18:37:01Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=8fddbe9bbfe5771a9d9e5d0c6f5bae3213c20645'/>
<id>urn:sha1:8fddbe9bbfe5771a9d9e5d0c6f5bae3213c20645</id>
<content type='text'>
Add '=~' and '!~' to the list of allowed conditionals for DEFAULTS and
TEST_START section if statements.

i.e.

 TEST_START IF TEST =~ .*test$

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Ignore errors in tests if IGNORE_ERRORS is set</title>
<updated>2012-07-30T18:33:55Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-30T18:30:53Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=9b1d367dbbeb6646f04a8865ecc2bc454f7dd88f'/>
<id>urn:sha1:9b1d367dbbeb6646f04a8865ecc2bc454f7dd88f</id>
<content type='text'>
The option IGNORE_ERRORS is used to allow a test to succeed even if a
warning appears from the kernel. Sometimes kernels will produce warnings
that are not associated with a test, and the user wants to test
something else.

IGNORE_ERRORS worked for boot up, but it was not letting test runs
succeed if the kernel produced a warning.
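
For example, a hypothetical test entry (the test type and value here are
illustrative) would set the option like this:

 TEST_START
 TEST_TYPE = boot
 IGNORE_ERRORS = 1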

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Reset saved min (force) configs for each test</title>
<updated>2012-07-21T02:39:16Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-21T02:39:16Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c1434dcc57f97b0e533dedb8814a76ef13e702b4'/>
<id>urn:sha1:c1434dcc57f97b0e533dedb8814a76ef13e702b4</id>
<content type='text'>
The min configs are saved in a perl hash called force_configs, and this
hash is used to add configs to the .config file. But it was not being
reset between tests, so a min config from a previous test could affect
the min config of the next test, causing undesirable results.

Reset the force_config hash at the start of each test.
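
With the reset in place, each test picks up only its own min config,
e.g. (hypothetical test entries; the paths are illustrative):

 TEST_START
 TEST_TYPE = boot
 MIN_CONFIG = ${THIS_DIR}/min-a.config

 TEST_START
 TEST_TYPE = boot
 MIN_CONFIG = ${THIS_DIR}/min-b.config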

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Add check for bug or panic during reboot</title>
<updated>2012-07-19T20:11:21Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-19T20:08:33Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=8a80c72711a9b78af433013067848c0a5473a484'/>
<id>urn:sha1:8a80c72711a9b78af433013067848c0a5473a484</id>
<content type='text'>
Usually the target is booted into a dependable kernel when a test
starts. The test will install the test kernel and reboot the box. But
there may be times when the target is running an unreliable kernel, and
the reboot may crash.

Have ktest detect crashes on a reboot and force a power-cycle instead.

This typically happens when a test kernel was installed to run manual
tests, but the user forgot to reboot back to the known good kernel.

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Add MAX_MONITOR_WAIT option</title>
<updated>2012-07-19T20:05:42Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-19T20:05:42Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=407b95b7a085b5c1622033edc2720bb05f973317'/>
<id>urn:sha1:407b95b7a085b5c1622033edc2720bb05f973317</id>
<content type='text'>
If the console is constantly outputting content, this can cause ktest
to get stuck waiting on the monitor to settle down.

The option MAX_MONITOR_WAIT is the maximum time (in seconds) for ktest
to wait for the console to flush.
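
For example (the value is illustrative):

 TEST_START
 TEST_TYPE = boot
 MAX_MONITOR_WAIT = 30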

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Fix config bisect with how make oldnoconfig works</title>
<updated>2012-07-19T19:29:43Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-19T19:29:43Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=cf79fab676b3aa3b5fbae95aab25e2d4e26e4224'/>
<id>urn:sha1:cf79fab676b3aa3b5fbae95aab25e2d4e26e4224</id>
<content type='text'>
With a name like 'oldnoconfig' one may think that the config generated
would disable all configs that were not defined (selecting "no" for all
options). But this is not the case. It selects the default. If a config
has a 'default y', then it is added if not specified.

This broke the config bisect, because options not specified in a config
simply took their default value, where ktest expected them to be turned
off. This caused an option to be enabled that in turn disabled another
option, breaking the build. The end result was that the bad config was
never found at the end of the test.

Instead of using 'make oldnoconfig', ktest now explicitly sets the
options it expects to be enabled or disabled. When it turns off an
option, it no longer removes it, but actually sets it to:

 # CONFIG_FOO is not set

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Add CONFIG_BISECT_CHECK option</title>
<updated>2012-07-19T19:26:00Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-19T19:26:00Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=b0918612545e698e55889c15d25e5118ea09c1fd'/>
<id>urn:sha1:b0918612545e698e55889c15d25e5118ea09c1fd</id>
<content type='text'>
The config-bisect can take a bad config and bisect it down to find out
which config option actually causes the breakage. But as all tests apply
a minconfig (defined by the user) before booting, it is possible that
the minconfig actually makes the bad config work (minconfigs can disable
config options). The end result is that the config bisect test never
finds an option that breaks, which can be rather frustrating to the
user.

The CONFIG_BISECT_CHECK option, when set to 1, will make sure that the
bad config (with the minconfig applied) still fails before trying to
bisect.
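
A hypothetical config-bisect test using the check (the path is
illustrative) might look like:

 TEST_START
 TEST_TYPE = config_bisect
 CONFIG_BISECT_TYPE = build
 CONFIG_BISECT = ${THIS_DIR}/bad.config
 CONFIG_BISECT_CHECK = 1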

And yes, I did get burned by this.

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Add PRE_INSTALL option</title>
<updated>2012-07-19T19:22:05Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-19T19:22:05Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=e5c2ec11a07b9e1e7eb714aad13583e2bbae49bd'/>
<id>urn:sha1:e5c2ec11a07b9e1e7eb714aad13583e2bbae49bd</id>
<content type='text'>
Add the PRE_INSTALL option that will allow a user to specify a shell
command to be executed before the install operation executes.
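
For example (the command here is purely illustrative):

 TEST_START
 TEST_TYPE = boot
 PRE_INSTALL = ssh root@${MACHINE} mount -o remount,rw /boot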

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Add PRE/POST_KTEST and TEST options</title>
<updated>2012-07-19T19:18:27Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-19T19:18:27Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=921ed4c7208e5c466a87db0a11c6fdd26bcc2fe7'/>
<id>urn:sha1:921ed4c7208e5c466a87db0a11c6fdd26bcc2fe7</id>
<content type='text'>
In order to let the user run commands before and after ktest runs, the
PRE_KTEST and POST_KTEST options are defined. They hold shell commands
that will execute before ktest runs its first test and after it
completes its last test.

The PRE_TEST and POST_TEST options will run before and after a given
test, respectively. They can either be global (run for all tests) or
defined by a single test.
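
A hypothetical setup (the commands are illustrative):

 DEFAULTS
 # run once, before the first test and after the last one
 PRE_KTEST = echo ktest starting | wall
 POST_KTEST = echo ktest done | wall

 TEST_START
 TEST_TYPE = build
 # run around this test only
 PRE_TEST = make mrproper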

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ktest: Remove commented exit</title>
<updated>2012-07-19T19:17:23Z</updated>
<author>
<name>Steven Rostedt</name>
<email>srostedt@redhat.com</email>
</author>
<published>2012-07-19T19:12:25Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=958d8435c257f93123dec83647130457816a23e6'/>
<id>urn:sha1:958d8435c257f93123dec83647130457816a23e6</id>
<content type='text'>
A debug 'exit' was left in ktest.pl. Remove it.

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
</feed>
