TestBash Belfast 2017

Thursday, 18th May 2017

Yes, TestBash really is coming to Belfast. We still can’t believe it ourselves. The community asked and Ministry of Testing responded.

Ministry of Testing is supporting local testers: Heather Reid, Hugh McCamphill and Neill Boyd in bringing a 'tiny' version of TestBash to Belfast.

The conference will be a single day event consisting of 10 awesome talks, and we'll also be hosting a two day API testing class with Mark Winteringham prior to the conference.

The only thing that makes it tiny is the capacity: there are only 100 tickets available, so don't delay! You can expect a wonderful community to come together in a friendly, professional and safe environment. We think you'll feel at home when you arrive!

The venue is The Crescent Arts Centre.

Event Sponsors:
Training
Tuesday, 16th May 2017:

Testing Web Services - Mark Winteringham - 2 Day Course - 16th-17th May
Mark Winteringham

Web service based software architecture is becoming more and more popular as a choice for applications and this poses new and interesting challenges and opportunities for Testers. Testing Web services training gives attendees the knowledge to rapidly learn and test web services in both an exploratory and automated capacity. The workshop is interactive using bespoke services built specifically for the workshop that give participants the chance to try out the skills they are being taught in a safe and relevant environment.

In addition to the primary goal of the training the workshops give attendees knowledge of a building block of the Web, that can be leveraged for both exploratory and automated testing.

Day 1: Understanding and testing Web services

Understanding and testing Web services is an interactive workshop that guides participants through the fundamentals of what makes a Web service and how to build requests to query and manipulate data from a web service. The attendees will learn key skills through testing a bespoke web service, learning how the service and requests work, and discovering and reporting bugs.

Key Topics:
  • What is a Web service?
  • How to build requests to query and manipulate data from a Web service
  • Test design techniques to consider when testing a Web service
  • What is REST and what makes a Web service RESTful?
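To give a flavour of the "building requests" topic above, here is a minimal sketch in Python. The base URL, endpoints and payload are hypothetical stand-ins, not the workshop's actual service; the idea is simply that a GET request queries data while a PUT request manipulates it.

```python
# Sketch: constructing requests to query and manipulate a web-service resource.
# https://api.example.com and the /rooms endpoint are invented for illustration.
import json
import urllib.request

BASE = "https://api.example.com"

def build_query(room_id):
    # GET request: read (query) a resource
    return urllib.request.Request(f"{BASE}/rooms/{room_id}", method="GET")

def build_update(room_id, payload):
    # PUT request: manipulate (update) a resource; body is JSON
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{BASE}/rooms/{room_id}",
        data=data,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

query = build_query(101)
update = build_update(101, {"type": "double", "accessible": True})
print(query.get_method(), query.full_url)  # GET https://api.example.com/rooms/101
```

In the workshop itself these requests would be sent to a real service and the responses inspected for bugs; the sketch stops short of sending anything so it stays self-contained.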
What people have said about Understanding and testing Web services

Whilst I work with APIs every day, so the subject matter wasn't new, Mark Winteringham's RESTful services workshop was so well put together that I had a blast!
Mark Winteringham's workshop on RESTful services was excellent and packed. You should definitely have him run it again.

Day 2: Exploring and modelling Web services

In this workshop we give participants an opportunity to learn how to explore a platform containing multiple Web services, build a model of how the platform works, and then use that knowledge to build a series of automated checks. We will look at different tools and sources of information we can leverage in our exploring, before discussing how we can use models to build up a picture of the platform and generate our test ideas. Finally, participants will learn how to automate their tests, even if they have little or no coding experience.

Key Topics:
  • Tools we can use to explore a Web service platform
  • Sources of information we can use in exploration
  • How to model an application from a backend perspective
  • Designing a test suite based on your application model
  • How to automate your test suite using current toolsets
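The "model the platform, then check against the model" idea above can be sketched in a few lines. Everything here is hypothetical: the endpoints, the expected fields and the stubbed responses are invented so the sketch runs offline; a real check would call the service over HTTP instead.

```python
# Sketch: turning a model of a web-service platform into automated checks.
# Our "model": each endpoint maps to the response fields we expect back.
MODEL = {
    "/booking/1": {"roomid", "firstname", "lastname"},
    "/room/1": {"roomid", "type", "accessible"},
}

def fake_get(path):
    # Stand-in for an HTTP GET so the sketch is self-contained.
    responses = {
        "/booking/1": {"roomid": 1, "firstname": "Ada", "lastname": "Lovelace"},
        "/room/1": {"roomid": 1, "type": "single", "accessible": False},
    }
    return responses[path]

def check_endpoint(path):
    # The automated check: the response must contain every field the model expects.
    body = fake_get(path)
    missing = MODEL[path] - body.keys()
    return not missing

results = {path: check_endpoint(path) for path in MODEL}
print(results)  # {'/booking/1': True, '/room/1': True}
```

The design choice worth noticing is that the model, not any individual test, is the source of truth: adding an endpoint to the model automatically adds a check for it.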
Mark Winteringham
I am a test manager, testing coach and international speaker, presenting workshops and talks on technical testing techniques. I've worked on award-winning projects across a wide variety of technology sectors, from broadcast and digital to financial and public sector, working with various Web, mobile and desktop technologies. I'm an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I'm also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester
Conference
Thursday, 18th May 2017

Cynefin for Testers
Liz Keogh

Whenever we do anything new, we make discoveries, and often those discoveries force us to change direction and rethink our goals. In a world which embraces uncertainty, and in which innovation means trying things out and iterating more often than analyzing and predicting, what's the role of a tester?

In a world of change, where a quick reaction to problems is often a better approach than a prediction of them, we look at how a tester's mindset and skills can still bring much-needed clarity, ensuring coherence in the experiments we perform and making sure that they're safe-to-fail.

Liz Keogh

Liz Keogh is a Lean and Agile consultant based in London. She is a well-known blogger and international speaker, a core member of the BDD community and a contributor to a number of open-source projects including JBehave. She specializes in helping people use examples and stories to communicate, build, test and deliver value, particularly when faced with high risk and uncertainty.

Liz's work covers topics as diverse as story-writing, haiku poetry, Cynefin and complexity thinking, effective personal feedback and OO development, and she has a particular love of people, language, and choices. She has a strong technical background with over 15 years’ experience in delivering value and coaching others to deliver, from small start-ups to global enterprises. Most of her work now focuses on Lean, Agile and organizational transformations, and the use of transparency, positive language, well-formed outcomes and safe-to-fail experiments in making change innovative, easy and fun.


Shift Left, Shift Right and improve the Centre - A strategy for testers in continuous delivery context
Augusto "Gus" Evangelisti

The technology world is changing fast, faster than ever. A few years ago we thought that only the Amazon(s) or Facebook(s) of this world would do continuous delivery, now the phenomenon is getting traction, starting to become mainstream and soon it will be commoditised. Everybody will do it.

Why, you might ask? Because the ability to release often gives organisations an edge over the ones that have a slow turnaround. For example, you only have to look at the disruption happening in the financial industry, where billions of dollars are being invested by venture capitalists in startups that aim to replace the banks of today.

As testers we can either hide our head in the sand and hope it all goes away, or try to understand what the change means for us, embrace it, and build the skills we require to thrive.

In this interactive talk you will learn about the challenges that continuous delivery brings to testers, you will discover opportunities for testers that are within your current domain and brand new dimensions where today’s testers can be extremely valuable tomorrow.

You will leave this talk with a checklist of skills and tools you need to start learning. These skills will give you an edge over the competition and will allow you to become much more valuable when the change finally knocks at your door.

Augusto "Gus" Evangelisti

Augusto “Gus” Evangelisti is a product development professional with over 20 years' experience in many different industries, where he has played just about every existing role, from developer to tester to analyst, product manager, test manager, and more recently agile/lean coach.

During his agile lean journey, he discovered amazing people that helped him grow his understanding and passion for the subject. His passion is learning and sharing his learning with others.

He is the Principal Consultant at Evangelisti Consulting where he helps organisations empower their people to deliver products that matter using agile and lean practices.


The Automated Acceptance Testing Paradox
Mark Winteringham

There have been times when I have struggled with automated acceptance testing, Test Driven Development and Acceptance Test Driven Development and how they impact my role, and I've seen other testers struggle with them too. Whether it's testers being pushed out of roles in favour of developers automating all their acceptance tests, or automators spending hours tearing their hair out maintaining brittle end-to-end tests, there's no denying it: automated acceptance tests simply don't work as tests and can have a substantially negative impact on a test strategy. But why don't they work, and why use them at all if they don't?

'The Automated Acceptance Testing Paradox' draws on my experiences to help answer these questions by:
  • Investigating the role of 'acceptance testing': what are acceptance tests, and who deems them 'acceptable'?
  • Examining the misconceptions around the benefits of automated acceptance testing, ATDD and TDD.
  • Exploring the paradox of how the tools used in these approaches cannot completely determine whether a feature is 'acceptable'.
  • Presenting the real benefits of TDD and ATDD, and how they can sit alongside your test strategy in harmony to help create a more robust testing strategy.

So if you are looking to get your automation out of the rut it is in (or avoid that rut completely), to create a robust test automation strategy, or to find out where and why there is value in automated acceptance tests and other automation in testing activities, then come learn about 'The Automated Acceptance Testing Paradox'.

Mark Winteringham
I am a test manager, testing coach and international speaker, presenting workshops and talks on technical testing techniques. I've worked on award-winning projects across a wide variety of technology sectors, from broadcast and digital to financial and public sector, working with various Web, mobile and desktop technologies. I'm an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I'm also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester

The Hypocrisy of Hypotheses (Or, how do we test hypothesis-driven acceptance criteria?)
Sharon McGee

Hypothesis driven development (HDD) helps ensure that product design results in business value. Framing product features as hypotheses and conducting mini experiments allows us to assess whether they will deliver pre-stated measurable business goals. Future product direction can then be informed by the results of our experiments. Borrowed from Thoughtworks, here is an example…

We Believe that increasing the size of hotel images on the booking page Will Result In improved customer engagement and conversion. We Will Know We Have Succeeded when we see a 5% increase in customers who review hotel images and then proceed to book within 48 hours

So we change the size of the image, deploy, test and observe that we have a 5% increase in customers who proceeded to book within 48 hours. We passed the acceptance criteria. Our hypothesis was correct and we can conclude that changing the size of the image increased sales. Right?

Maybe

This talk will examine the reasons why the answer to that question can only ever be maybe. Through example, we will explore the difference between this approach and empirical scientific methods, where there are no stated acceptance criteria and the motivation for the experiment is to falsify the hypothesis. We will walk through scenarios and discover some of the potential pitfalls of expressing requirements in this way. Together, we can identify types of acceptance criteria that would require more confidence in our test results. We can then discover how a more rigorous approach can help us ensure that the results of our tests mean what we think they mean.
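One way to make the "maybe" concrete is a quick significance check on the observed increase. The visitor counts below are invented for illustration: suppose 1,000 visitors saw each version of the page, with a 10.0% baseline conversion and a 10.5% conversion for the larger images, i.e. the stated 5% relative increase.

```python
# Sketch: a two-proportion z-test on hypothetical A/B conversion numbers.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # z-statistic for the difference between two conversion rates
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 100/1000 converted on the old page, 105/1000 on the new page:
z = two_proportion_z(100, 1000, 105, 1000)
print(round(z, 2))  # 0.37 -- far below the ~1.96 needed for 95% confidence
```

With these (invented) sample sizes the 5% relative lift is well within ordinary noise, so "passing" the acceptance criterion tells us very little on its own, which is exactly the talk's point.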

Sharon McGee

I am a software analyst who has recently returned to work after a period of time during which I did lots of other interesting things! These include looking after my children and completing an empirically based PhD on the causes and consequences of software requirements change. I enjoy trying to figure out what makes people tick, why software works and how music is wonderful.


Is It Too Late To Discover The Value Of Exploratory Testing? A Love Story
Simon Tomes

“So what do you actually do?” asked everyone. “Well, I just explore the app and find bugs and I stop bad things from getting into the hands of users,” I replied.

Did I really just say that? How uninspiring! Everyone continued to ask questions. “Why waste your time doing that? Why don’t you just automate everything?”.

“Well, um. There’s this thing about testing that it’s kinda, well you know, better than automation. Maybe, well.” I paused to take a breath and the following spilled out, “Let’s automate everything and I’ll play around with the app during regression testing and it’ll be OK.”

It wasn’t OK.

Over the last three years I’ve discovered – or perhaps rediscovered – the true value of exploratory testing. I have the following to thank: the incredible testing community, building an exploratory testing tool from scratch and learning to share content without restraint. Plus a little secret I can’t wait to reveal!

By the end of this talk you’ll discover ways to share the value of exploratory testing. We’ll look at compelling reasons why exploratory testing rocks! Together we’ll learn it’s never too late to discover and share the awesomeness of exploratory testing.

Simon Tomes

Simon is on a mission to move the product development world forward. He can’t get enough of testing (since 2003!) and has led test and development teams at popular UK websites.

He’s developing a tool to support and enhance the lives of people who love exploratory testing.

Obsessed with music, learning and helping others, he also enjoys daily meditation. Simon believes in simplicity, transparency and collaboration.

He shares ideas, thoughts and learnings on his blog. You can find him on Twitter and Ministry of Testing Slack.

Tested By Monkeys – The End Of Banana Software!
Jeremias Rößler

Various automation tools have been available for quite some time now, but due to the high effort involved, GUI testing is still mainly a manual task. Meanwhile, overall testing effort has risen to make up 30% of an average software budget. Is crowd-testing the answer? What if automated test cases could be created automatically?

Monkey testing is not a new idea. But combined with a manually trainable AI and an innovative new testing approach (dubbed "difference testing"), we can now not only have the monkey search for technical bugs (i.e. crashes) but also generate functional test cases that are optimised towards several goals and are even better than manually created ones.

Visit the future of testing and see how AI can help us create better software!

Jeremias Rößler

The last 3 years I had no personal income.

The last 3 years I tried to change the world.

In every software project I've been part of so far, testing was a major issue – always too late, always too little. Especially regression testing. And that is what I wanted to change.

Therefore I relinquished income for three years and invested all of my non-existent personal assets to make a vision come true.

This vision is what I want to show you!


A Test Pyramid Heresy - a fresh look at test automation strategies
John Ferguson Smart

The Test Pyramid is a staple in Test Automation theory, and is used by many teams as a cornerstone of their test automation strategy. But is it still the best model for modern development practices? Are there better and more efficient ways of thinking about test automation today?

Test automation is nowadays essential in any project, but inefficient test automation is increasingly becoming a major cause of waste and, ironically, delay. Test suites need to be lean, slick and efficient, with just enough tests (and the right kind of tests!) at each level to provide confidence, but not so many as to slow down the test suite or increase maintenance costs. And automation requirements vary wildly from one kind of project to another. The way we think about test automation strategy clearly needs an overhaul.

Drawn from approaches such as Lean, Behaviour Driven Development, and Outside-in Development, this talk presents a different way of looking at test automation, one that will help you focus your test automation efforts where they will add real value.

You will never look at a pyramid in the same way again!

John Ferguson Smart

John is an experienced author, speaker and trainer specialising in Agile delivery practices, currently based in London. He is an international speaker well known in the Agile community for his many published articles and presentations, particularly in areas such as BDD, TDD, test automation, software craftsmanship and team collaboration. John helps organisations and teams around the world deliver better software sooner and more effectively, both through more effective collaboration and communication techniques and through better technical practices.

John is also the author of 'BDD in Action', 'Jenkins: The Definitive Guide', and 'Java Power Tools', and lead developer of the Serenity BDD test automation library.


Testing So You Can Move On
Nicola Owen

Working as a test consultant makes me think more about the testing process than the results themselves. My job is to do my job (test) and then leave the client in a better position, whether that means a new testing process has been formed, automated tests have been set up, or they simply have a better idea of what testing is capable of. Put simply, when I test, I test with a timeframe in mind. It might be 6-9 months from now (with the possibility of extension), or it might be just 1-2 months.

In my talk, I’d like to share my experiences of being a test consultant, with a focus on leaving projects and setting things up so that when I leave, the client still gets to reap some of the benefits. I’ll focus specifically on two recent projects (of varied lengths), and how differently I tackled the challenge of making sure each project was in a good place when I left. Testing so you can move on focusses not just on the testing process itself but on constantly making sure you’re recording enough information about what you’re doing and how you’re doing it that someone else can continue where you left off.

Nicola Owen
I'm a Test Consultant with House of Test who blogs about testing at nickytests.blogspot.com, teaches software testing and also enjoys doing the job itself. I started the Stockholm Software Testing Talks meet-up, an active monthly discussion-focussed meet-up, and have also recently added automation into my testing tool-kit.

A Tale Of Testability
Rob Meaney

In this session I'll describe how a bloated, ineffective release cycle was transformed over a 12 month period using a whole team approach focused on building testability into the product.

We managed to reduce the pre-release regression cycle from 7 weeks involving all 35 team members to 5 days with a handful of testers.

In that time we also managed to significantly improve the stability, test coverage and adoption of the product as a result of these efforts.

Rob Meaney

I'm Rob Meaney and I have a degree in electrical and electronic engineering. I came to work in the software industry soon after finishing college and began working as a tester without even knowing what testing was.

I learned my trade testing desktop applications for the manufacturing safety automation industry. Soon I got bored of manually checking the same thing over and over, so I decided to try automating some of my tests.

Since then I have worked in start-ups and in gaming, data storage, medical, fraud detection and communications companies, building test and automation frameworks.

I've worked as a manual tester, automation architect and test manager.

I love testing and continuously read and learn about testing and development.

From a personal perspective, I love to work hard and have fun doing it. I'm always up for a bit of messing but I take my job very seriously.


Testing in Production - dangerous, scary or better?
Jon Hare-Winton

Everyone runs their tests on a safe, separate test environment. But what if we let our automated tests loose on our production systems? Is that a dangerous thing to do? Is it worth the risk?

I’ll discuss why this can seem a daunting prospect, but demonstrate the huge benefits of testing in production. I’ll briefly discuss some of the downsides to testing in isolated, unrepresentative test environments, before giving an account of my experiences over the last couple of years of trying to test in production as much as possible.

I’ll demonstrate some of the benefits, like how we can take conventional automated tests and turn them into monitoring and early alerting systems for our production environments, and the safety steps we need to take to make sure our production tests don’t affect our users.
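The "conventional test turned into a monitor" idea above can be sketched briefly. Everything here is hypothetical (the check, the page content and the alert channel are invented): the point is only the shape of the technique, where a check that would fail a CI build is instead run on a schedule against production and raises an alert when it fails.

```python
# Sketch: reusing an automated check as a production monitor with alerting.
def check_homepage(fetch):
    # A conventional smoke test: the page loads and contains the masthead.
    status, body = fetch("/")
    return status == 200 and "Top stories" in body

def run_as_monitor(check, fetch, alert):
    # In CI a failure would break the build; as a monitor it alerts instead.
    ok = check(fetch)
    if not ok:
        alert("homepage check failed in production")
    return ok

# Stubbed production fetches and alert channel so the sketch runs offline.
healthy = lambda path: (200, "<h1>Top stories</h1>")
broken = lambda path: (500, "Internal Server Error")
alerts = []
run_as_monitor(check_homepage, healthy, alerts.append)
run_as_monitor(check_homepage, broken, alerts.append)
print(len(alerts))  # 1 -- only the broken fetch triggered an alert
```

In a real setup the stubbed `fetch` would be a read-only HTTP call against production and `alert` would page an on-call channel; keeping the check read-only is one of the safety steps the talk refers to.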

Jon Hare-Winton

I am a Senior Automation Engineer at the Guardian. I worked originally as a manual tester, before moving into Automation and more general Development across the media, marketing and finance industries, and am a regular contributor to the Guardian developer blog. I have a passion for pushing the conceptions of what a Tester can be, and what Testing and Quality teams can deliver to wider software development, beyond conventional test practices.


Micro Sponsors: