TestBash Philadelphia 2017

November 9th 2017 - November 10th 2017

TestBash Philadelphia is back! After the roaring success of last year's event, we simply had to do it again.

This year's event is, again, going to be run over two days, following our single-track format, with some conference-wide fun thrown in for good measure.

The majority of that fun is in the form of the TestBash Circus, which will take place on Thursday afternoon. Sadly no animals or clowns; instead we'll have 10-15 mini workshops, all with practical hands-on experience and activities for you to take back to work. A real opportunity to practice some of the content from the talks and learn from other attendees. More information on The Circus is available below.

On both days you can expect to find a wonderful community coming together in a friendly, professional and safe environment. We think you'll feel right at home as soon as you arrive!

Expand the talks below to get more information, especially on The Circus. Tickets start from $799 with our Super Early Bird pricing, and there are limited numbers of tickets at each rate. The ticket includes all the talks and the Circus! Book now to avoid sad faces.

The venue is FringeArts and we have some hotel recommendations and a discount rate.

And, of course the meetups!

Pre-TestBash Meetup - Wednesday 8th of November.

Post-TestBash Meetup - Friday 10th of November.

Conference
November 9th 2017 - November 10th 2017

The TestBash Circus
Richard Bradshaw

So what is this all about?

Last year we got some feedback that attendees would have liked some more hands-on activities. So I've been pondering how we could do this ever since. After lengthy discussions with our EducationBoss, we came up with the idea of running an education circus (sometimes referred to as 'The Carousel Technique').

In the TestBash Circus we will run 10 to 15 different activity stations, 'mini workshops' if you will, each lasting 10 to 30 minutes. After 30 minutes, attendees will get the opportunity to move on to another activity, attending the ones that are of most interest to them. All these activities will run in parallel and will repeat throughout the circus.

We'll run the circus for three hours, allowing attendees to take part in several activities. The activities will be designed so that you can take them back to the office and make use of them after TestBash. Topics will loosely follow the theme of the talks, so we'll have security, performance, automation, bug reporting, exploratory testing and CI/CD. This will be a really great opportunity to get some hands-on experience and to work with the speakers and other attendees, learning together and solving problems.

Takeaways

  • Hands-on experience
  • Opportunity to work with and learn from the speakers and other attendees
  • Activities to take back to the office and work on with your colleagues

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.

Was It Something I Said?
Martin Hynie & Paul Holland

The Art of Clear Communication

Testers naturally love the challenge of breaking down a system to uncover the secrets it may hide. We do so by learning to use a great variety of tools that enable us to build models, to exploit behaviours, and to describe whatever intended and unintended behaviours we might discover. All of this is performed in order to help others make timely and informed decisions about the product. We are partners with all members of software development teams. We work together, solving great riddles and building evolving solutions to some of life's newest and greatest challenges.

So why do we so often find ourselves at testing conferences complaining about not being understood? We gather together and describe our struggles:

  • How do we report accurately on exploratory testing efforts?
  • Why does it seem so hard at times to get clear direction on where we should focus our time?
  • Why must we spend so much time proving that we are performing valuable work?
  • How do I answer that magic question: “When will testing be done?”

It certainly would be helpful to just have a “tester translator” available on every team to help us with these communication problems, but this seems unlikely. Perhaps it is time for us to look at this from the perspective of a tester exploring a system. How do we begin using our skills and tools to discover dependable heuristics on how to better communicate with our teammates?

Paul and Martin offer some interesting (and contrasting) approaches that might provide some clues on how to use some of your existing testing skills and talents to help solve this riddle within your own teams.

Martin Hynie
With over fifteen years of specialization in software testing and development, Martin Hynie's attention has gradually focused on emphasizing value through communication, team development, organizational learning, and the significant role that testers can play to help enable these. A self-confessed conference junkie, Martin travels the world incorporating ideas introduced by various sources of inspiration (including Cynefin, context-driven testing, the Satir Model, Pragmatic Marketing, trading zones, agile principles, and progressive movement training) to help teams iteratively learn, to embrace failures as opportunities and to simply enjoy working together.
Paul Holland

Paul Holland is a Senior Director of Test Engineering at Medidata Solutions, Inc. in New York City. Paul has over 20 years' experience in software testing. Prior to joining Medidata in August 2016 he was Head of Testing at a small New York-based consultancy for 2 years. Previously he spent 2 years as the principal consultant and owner at TestingThoughts, and 17 years in testing at Alcatel-Lucent.

Paul specialises in adapting testing methodologies to reduce waste and become more effective and efficient; finding ways to document only that which needs to be documented; modifying the reporting of test activities to provide actionable information to stakeholders; and reducing or eliminating potentially harmful metrics. Paul is one of four instructors of the Rapid Software Testing course, developed by James Bach and Michael Bolton.


I've Got A Feeling: Thoughts About Myself and The State Of Testing
Ash Coleman

Testers, QAs, QEs, SEITs, Engineers in Test, Quality Control, etc… The list of names we identify with in the testing community is endless. There is much debate about what we call ourselves, how we find ourselves applying our work and how we identify with one another. There are countless questions around qualifications, certificates, degrees and expertise. Heuristics or defaulted inquiry? Testing or Checking? Best Practices? Who said what? Who owns what? What is what?! It's all so confusing.

Meanwhile, I’m over here wondering if any of this is even relevant to me in order to continue doing the job that I love well.

As technology grows, so do we. In the world of testing, there is an ever growing need to recognise the direction the role is taking. With all the talk that is going on, how relevant are these conversations to our everyday professional life? What are these conversations trying to solve? Where do we find ourselves when we are not included in them?

I have a feeling that there is a lot more substance to these conversations than what is being published publicly. For the next 45 minutes, I would like to account for the feelings I have personally had while being involved in, and in some cases just observing, the debates within the community. And with these accounts, explain their potential impact on the community.

Takeaways

  • What am I doing here? I feel like an imposter.
  • Is this thing on? I feel too junior.
  • What’s the debate? I feel like I don’t have a say.
  • Am I doing this right? I feel like I don’t know anything.

Ash Coleman

Ash, a former chef, put recipes aside when she began her career in software development, falling back on her skills in engineering she acquired as a kid building computers with her brother. A progressive type, Ash has focused her efforts within technology on bringing awareness to the inclusion of women and people of colour, especially in the Context-Driven Testing and Agile communities. An avid fan of matching business needs with technological solutions, you can find her doing her best work on whichever coast is the sunniest. Having helped teams build out testing practices, formulate Agile processes and redefine culture, she now works as an Engineering Manager in Quality for Credit Karma and continues consulting based out of San Francisco.


How to be a Redshirt, and Survive!
Dan Billing

What is a Redshirt?

In science fiction lore, it's those unfortunate crew members who give their lives, either to protect their comrades or in an act of ill-prepared bravado, without learning from the mistakes of their predecessors.

For me, it's become a great way of describing the pitfalls and problems of security testing. I'd like to rebrand the Redshirt so that it becomes a mark of preparation and a developing mindset for security.

This talk will help you understand how to identify the biases at play when security testing. We will explore the negative behaviours that can challenge you when security testing. The security mindset is more than tools and technical skill. It's about having an instinct for spotting problems that are often hidden in plain sight. It's about exploring the problems that some may prefer to brush under the carpet.

Security issues aren't going to disappear into a black hole. It's time that testers took up the challenge. We can do it together.

Takeaways

  • Developing a security mindset in cross-functional teams
  • Identifying biases and working with them
  • Challenging negativity when security testing
  • Recognising patterns in security problems
  • Understanding that security is everyone's problem

Dan Billing

Dan has been a tester for 16 years, working within a diverse range of development organisations, mostly in the south-west of England. He has been a freelance test consultant but currently works as a Test Jumper at Medidata, where most of his time is spent coaching and leading testers, developing test strategy and exploring the needs of the business. This includes mentoring, supporting and training members of the team to develop their own security skills.

Dan’s love of testing drives him to be an active member of the testing community, helping to organise local tester meetups in the Bristol and Bath area. He is a co-facilitator with Weekend Testing Europe and organises the South West Exploratory Workshop in Testing. He also co-hosts the podcast Screen Testing, alongside Neil Studd.


Crossing Over: How Developing a Feature Made Me Love Testing Even More
Amber Race

This is the harrowing tale of how a seasoned tester had the opportunity to develop a feature and came face to face with the challenge of producing quality code in an environment of shifting design priorities and over-burdened test organisations.

Will she be able to overcome developer myopia and design confusion to deliver her feature on time and avoid major production issues? Or will she succumb to feature bloat and the failure of untested scenarios?

Listen to this true story of how I got to experience feature development from the other side and came through with an even greater appreciation of how developers and testers can work together to create great software.

Takeaways

  • Tips on how to communicate with developers and product owners
  • How to ask developers the right questions to get the information you need to test
  • Spotting potential problems in a feature design
  • A greater appreciation of the tester's role in a development organisation

Amber Race

Amber Race is a Senior SDET at Big Fish Games.

After majoring in Asian Studies, teaching in Japan, and travelling the world, she stumbled into software testing and has been loving it ever since. She has over 15 years of testing experience at Big Fish and Microsoft, doing everything from manual application testing to tools development to writing automation frameworks for web services.

Amber has worked on a wide variety of products and written automation in C#, C++, Python, and Java.

She currently specialises in test automation and performance testing for high volume back-end services supporting iOS and Android games.


Tests Your Pipeline Might be Missing
Gene Gotimer

Developing a delivery pipeline means more than just adding automated deploys to the development cycle. To be successful, tests of all types must be incorporated throughout the process in order to be sure that problems aren’t slipping through. Most pipelines include unit tests, functional tests, and acceptance tests, but those aren’t always enough.

I’ll present some types of testing you might not have considered, or at least might not have considered the importance of. Some tests will address code quality, others code security, and some the health and security of the pipeline itself.

This talk is aimed at people who are trying to build confidence in their software delivery pipeline, whether it is automated or not. I’ll talk about specific tools we use to supplement our pipeline testing. I won’t get into how to use each tool; this is more of a series of teasers to encourage people to look into the tools, and to let them know what types of tools and testing opportunities are out there.
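
As a rough illustration of the "cheapest tests first, fail fast" idea behind pipeline testing, here is a minimal Python sketch of a pipeline runner; the stage names and commands are illustrative assumptions, not tools prescribed by the talk.

import subprocess
import sys

# Stages ordered roughly from cheapest to most expensive.
STAGES = [
    ("unit tests",        ["pytest", "tests/unit", "-q"]),
    ("static analysis",   ["flake8", "src"]),
    ("dependency audit",  ["pip-audit"]),
    ("functional tests",  ["pytest", "tests/functional", "-q"]),
    ("performance smoke", ["pytest", "tests/perf", "-q", "-m", "smoke"]),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"--- {name} ---")
        if subprocess.run(command).returncode != 0:
            # Fail fast: don't pay for the expensive stages once a cheap one
            # has already shown the build isn't a viable candidate.
            print(f"Stage '{name}' failed; stopping the pipeline.")
            sys.exit(1)
    print("All stages passed: candidate is ready for the more expensive, manual checks.")

if __name__ == "__main__":
    run_pipeline()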

Takeaways

  • The pipeline offers a lot of opportunities to do tests that you might not have done if you had to set aside an explicit block of time to do them.
  • The pipeline is about building confidence that the software is a viable candidate for production. Or realising as early as you can that it isn’t.
  • Do just enough of each type of testing at each step in the delivery pipeline to determine if further testing is justified.
  • Do the most expensive tests last. Those are often the manual or subjective ones.
  • Don’t forget the infrastructure. The pipeline needs to be tested just like the software does.

Gene Gotimer

Gene Gotimer is a senior architect at Coveros, Inc., a software company that uses agile methods to accelerate the delivery of secure, reliable software. As a consultant, Gene works with his customers to build software better, faster, and more securely by introducing agile development and DevOps practices such as continuous integration, repeatable builds, unit testing, automated functional testing, analysis tools, security scanning, and automated deploys.

Gene feels strongly that repeatability, quality, and security are all strongly intertwined; each of them is dependent on the other two, which just makes agile and DevOps that much more crucial to software development.


The Fellowship of the Test: Building a Community Across Agile Teams
Christine McGarry

Your company has grown and your development team is ready to divide into smaller, project-focused groups. Or your development team is already divided and the project teams have started to drift apart. How do you encourage the spark of collaboration and communication between multiple project teams within your organisation?

Join Christine McGarry as she shares the story of how she began The Fellowship of the Test: a gathering of testers to spark collaboration and communication across multiple project teams. Christine will share her experiences: what went well, what was a struggle, and what she learned along the way.

Whether you are a leader of testing within your organisation who is ready to ignite the fire of collaboration, or an individual tester looking to build a grassroots community, you will leave with a series of actions you can take to begin your own journey. (Lord of the Rings references are optional.)

Takeaways

  • How we identified that our project teams were not working as well as a single mega agile team.
  • Tactics for shifting the mindset and culture from groups of agile project teams that rarely communicated with one another towards a culture of testers as a bridge of communication.
  • Overcoming challenges with remote teams: how to craft the same experience for everyone involved.
  • General tips and suggestions for those who want to try something similar at their organisation.

Christine McGarry

I love working with clever and talented people to build better software. One of my favourite testing tasks is tracing the origin of a bug; problem solving and creativity at its best! I regularly read testing blogs and books and love to try new testing ideas to become a more effective tester. I've delivered talks at KWSQA Quality Conference and STARCanada. I've implemented several quality monitoring initiatives for both the technological side and people side of development teams and I'm currently helping to foster community at a fast-growing startup.

I've been called an adrenaline junkie because I enjoy downhill skiing and rock climbing (not at the same time!). For me, though, it's not the thrill of going fast or being at a great height that I enjoy: it's the way those activities focus your mind while you are doing them. In order to be successful, you must give skiing (or climbing) your 100% attention and focus. It's a form of moving meditation for me.


I'm Hunting Sasquatch – Finding Intermittent Issues Using Periodic Automation
Paul Grizzaffi

In American pop culture, Sasquatch (also known as Bigfoot) is a likely non-existent, ape-like creature infrequently seen in the Pacific Northwest of North America. In the software realm, we have our own version of Sasquatch: that irritating "intermittent issue" occurring in the system. These kinds of issues are typically difficult to find and often blamed on anything other than a product defect.

We typically run our automated tests on event boundaries, i.e. when we have a successful build and deployment; we look for problems when we think we may have introduced problems. Logically, these points of change are when we expect to have injected new issues, so we only look for issues at those times. This approach alone, however, only gives us limited opportunities to reproduce our intermittent issues. If we also ran our automation periodically, we would have additional opportunities to reproduce these types of issues; we simply call this approach periodic automation.
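
As a rough illustration of what periodic automation can look like in practice, here is a minimal Python sketch that runs an existing automated suite on a timer rather than only on build or deploy events; the suite path, interval and log file are illustrative assumptions, not details from the talk.

import datetime
import subprocess
import time

SUITE = ["pytest", "tests/regression", "-q"]   # any existing automated suite
INTERVAL_SECONDS = 60 * 60                     # e.g. hourly, between normal CI runs

while True:
    started = datetime.datetime.now().isoformat(timespec="seconds")
    passed = subprocess.run(SUITE).returncode == 0
    # Timestamped results make it easier to correlate an intermittent failure
    # with whatever else was happening at that moment (load, backups, cron
    # jobs) instead of blaming the most recent change.
    with open("periodic_runs.log", "a") as log:
        log.write(f"{started} {'PASS' if passed else 'FAIL'}\n")
    time.sleep(INTERVAL_SECONDS)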

Using a real-world example from his own experience, Paul Grizzaffi will explain how this periodic automation can help hunt down these elusive targets. For additional context, he will explain how this approach relates to High-Volume Automated Testing (HiVAT), as well as some HiVAT basics and examples. He will also explore some considerations of which we need to be mindful when implementing periodic automation in order to avoid desensitisation to failures.

Though we may never find “the real” Sasquatch, applying periodic automation increases our chances of finding our own intermittent issues.

Takeaways

  • The complexity of most current software systems all but guarantees intermittent issues.
  • Running automation on non-event boundaries can help catch intermittent issues.
  • While periodic automation evolved from academic research, it also has real-world applications.
  • We must be mindful of “failure fatigue” when adding automation runs.

Paul Grizzaffi

Paul Grizzaffi is a Principal Automation Architect at Magenic. His career has focused on the creation and deployment of automated test strategies, frameworks, tools, and platforms. He holds a Master of Science in Computer Science and is a Certified ScrumMaster from Scrum Alliance. Paul has created automation platforms and tool frameworks based on proprietary, open source and vendor-supplied tool chains in diverse product environments (telecom, stock trading, E-commerce, and healthcare).

He is an accomplished speaker who has presented at both local and national meetings and conferences. He is an advisor to Software Test Professionals and STPCon, as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas. Paul looks forward to sharing his experiences and expanding his automation and testing knowledge of other product environments.


So Mr Testing Coach - What Do You Do?
Stephen Janaway

The increased usage of agile and lean principles in software development has had a profound effect on testing, and in particular on test management. Teams have become cross-functional, and yet as they do so Test Managers can find themselves left out in the cold. Either the management structure does not change, and becomes harder to work within as a result, or it disappears altogether and testers find themselves unsupported and alone.

There is another way. And that way is the way of the Testing Coach. Sometimes called a Test Chapter Lead, sometimes a Test Practice Lead, but always someone with a passion for testing and the drive to make teams better at it. Someone who enables teams to own quality and understands what that means in practice.

I’ve spent time as a Testing Coach and have also rolled out the discipline-based coaching model in various organisations. I’ve learnt what has worked in various contexts and what has not, and I have helped others to transition to a coaching model in their teams. I want to help others to do the same.

This presentation aims to present a view of testing that I think fits with the software development methodologies and team structures that we typically see today. It will help people understand what having a Testing Coach means, why it makes sense to have one, and how, as testers, it can present great opportunities to improve and excel at the software testing craft.

Takeaways

  • Why the typical view of Test Management is outdated and why Testing Coaches are a more suitable way forward.
  • What a Testing Coach does and why you may need one.
  • How to sell the idea of a Testing Coach.
  • How you can identify if being a Testing Coach is right for you, and whether it’s right for your organisation.
  • How to incorporate a Testing Coach in an organisation.
  • How a Testing Coach can help you as a tester.
  • Hints, tips and stories for new Testing Coaches, gained from real life experience as a Testing Coach, on how to approach the role and make it a success.
  • How to make the Testing Coach position sustainable.
  • My experiences of having gone through the transition from Test Manager to Testing Coach.

Stephen Janaway

I’m Stephen Janaway.

I help people deliver software more effectively. Over the last 15 years I’ve worked in coaching, training and leadership positions in companies such as Nokia, Ericsson, Motorola and the YOOX NET-A-PORTER GROUP, as well as advising a number of mobile and e-commerce companies on development, testing and delivery strategies.

I have written and presented many times about software development, testing and delivery, frequently with a focus on mobile devices and mobile applications. I am co-curator of the Testing In The Pub podcast and organiser of West London Lean Coffee.

Continuous Quality: Moving Beyond Bug Reports
Neil Studd

In 2015, I ran a workshop at TestBash Brighton entitled "Supercharging Your Bug Reports" (also available in a webinar format on the Ministry of Testing Dojo). In that class, I outlined some of the key components of an effective bug report, with the goal of helping testers to become better advocates for problems in their workplace.

Fast-forward to 2017, and my world (and beliefs) have changed. I've been working in some fast-paced environments with highly-productive teams, where life moves at a pace which is largely incompatible with "traditional" bug reporting processes. How do you effectively act as a champion for quality when one of your most visible forms of communication is taken away?

In this talk, I'll share stories of my transition, and how I coped when I realised that one of my biggest strengths was incompatible with my team's way of working. You'll see how I adapted, what approaches I chose to take, and how my skillset became enhanced when I battled through the fears. I'll also be looking forwards, to see what the future might hold for problem reporting.

Takeaways

  • Why bug reporting matters (but why the method for reporting might not)
  • Ideas for quickly surfacing valuable information about problems
  • New methods for approaching the day-to-day challenges in your workplace

Neil Studd

Neil has been testing in the UK for almost 15 years, working for a variety of companies ranging from enterprise behemoths to agile startups. He is currently working for Zoopla Property Group, one of Britain's largest providers of property and home-moving services, within a distributed remote team. His passion for quality has led to his involvement with a range of freelance and voluntary projects, including a regular role as a facilitator for the European chapter of Weekend Testing.

In his spare time, Neil is an avid cinemagoer, co-hosting the fortnightly podcast Screen Testing (@ScreenTesting) with Daniel Billing, taking a tester's eye view of movies and TV shows.


But I'm Not A Security Tester!
Kate Paulk

Security testing for the functional tester: security is for the whole team.

"But I'm Not A Security Tester!… or so I thought until I discovered a portal to Cthulhu's realm deep in the bowels of the application. With one little change, I summoned the Great Old Ones.

A sensible person would have run screaming in terror. I investigated - until I learned how the tentacled horror was summoned. *Then* I ran. And screamed.

How do you face an Elder God you accidentally summoned? People better than me have failed. If we don't understand the horrors in our applications, who knows what we could unleash on an unsuspecting world?

We've all been tempted to delve into forbidden places despite our "just the specs, ma'am" requirements. That doesn't mean we can't do a little dark magi… ahem … security testing.

If you've ever had to retest an application that had to be rewritten because professional security testers found a major problem in the fundamental design of the software, you understand that designing and testing for security has to be the whole team's responsibility - but where do you, the functional tester, start?

If you don't know much (or anything) about security testing, and you're scared to start - or you think it doesn't apply to you - this session is for you. If you're a functional tester or work primarily with automation, and you test applications that store people's names, their addresses, anything financial, or have some kind of government regulations about your software security, this session is for you.

Takeaways

  • You will see a short video demonstrating introductory security testing techniques using Fiddler, a simple, free tool, with explanations and examples (and tentacles).
  • The demonstration and presentation will allow you to become more confident in the security testing realm.
  • Handouts/Links/References will be provided for helpful introductory sites.
  • Basic security terminology will be explained.
  • Basic protocol for functional testers performing security testing will be explained.

Kate Paulk

Kate Paulk refers to herself as a chaos magnet, because if software is going to go wrong, it will go wrong for her. She stumbles over edge cases without trying, accidentally summons demonic entities, and is a shameless geek girl of the science fiction and fantasy variety, with a strange sense of humour.

In what she ironically refers to as her free time, she writes. Novels, short stories, and more recently, articles for the Dojo.


Testing an App That No-one Can See
BJ Aberle

I have been the test lead for Cydalion, a product that utilises Tango technology to assist people with visual impairments. These Android devices utilise point cloud data and a 3D camera, as well as the standard camera, to give feedback to users. The feedback tells them: "there are stairs in front of you", "there is a 'head height' object three feet away", "there is a trip hazard", etc. This has the potential to significantly reduce the limitations associated with mobility for people who are blind.

Testing this app has been a challenge, since there is very little information out there on how to test this new technology. We have utilised automated unit and UI testing, as well as recruiting a cadre of blind and visually impaired people as manual testers. I have also had to get creative in figuring out how to motivate developers and product owners to really empathise with the target end user and develop for them, rather than developing based on our sighted biases.

Takeaways
Tango-enabled devices and 3D cameras will soon be stock features on mobile hardware. They present a new challenge for testing. It is no longer a question of whether or not the camera turns on and takes an image to be processed. The camera is on and sending a significant amount of data that is more than just pixels. Is that information useful and, if so, is that information correct? Attendees will understand, at a high level, a couple of ways to approach testing these devices, because within the next few years they will probably have to.

BJ Aberle

BJ Aberle has been Float’s Quality Assurance Lead for six years. Originally hired in 2003 as the Director of Audio for multimedia projects, it was the journey into game and procedural audio that introduced him to the world of code and development. From there, BJ found that testing and QA allowed him to use those skills to bring new value to the organisation. At Float, no two projects are the same, so he wears many QA hats and morphs many roles into one: automation engineer, manual tester, agile test coach; it just depends on the need at any given time.

BJ is also an adjunct instructor of Sound Design for Interactive Media at Bradley University. When he is not wrestling flaky UI tests, BJ enjoys creating music in his studio and spending time with his wife and three daughters.


Fast Paced Testing for Rapid Prototyping
Tony Gutierrez

An experience report.

With most companies and startups embracing lean startup and agile philosophies, quality can often be sacrificed: testing is cut away, not taken seriously, or not even part of the team. One of the reasons is that many view testing as a bottleneck that only slows down velocity.

This does not always have to happen! Testing can be restructured so it is not the bottleneck and does not have to impede developer velocity. During my career, I have worked on various prototype projects, either as a consultant or as part of an innovation team, where we built MVP prototypes quickly and then either iterated to a production version, handed the prototype over to a production team, or scrapped it and pivoted to something else.

In my talk I want to go over how I fought to be included as early as possible and was able to highlight potential areas of concern before development even began, allowing the team to keep a steady velocity and own the quality of the prototype. I'll also cover how I overcame difficult obstacles, such as developers still used to tossing code over the wall and POs who did not think testers should be included in requirements gathering, and how I convinced the team that a tester brought more to the table than just finding bugs.

Takeaways

  • A better understanding of how testing can be restructured so it does not sacrifice velocity
  • A different perspective on how quality is defined and how it is a team responsibility
  • A few approaches for working with your team so that you are taken seriously
  • An understanding that, for this to work, we have to be willing to push ourselves and expand our role so that we are not seen as just a person who finds bugs

Tony Gutierrez

Tony has been a tester for roughly 10 years across multiple industries.

He is one of the Co-Organizers of NYC Testers, an NYC-based meetup, and enjoys helping the test community any way he can.

He enjoys stout beers, whisky, and coffee.


How to Benefit from Being Uncomfortable
Cassandra H. Leung

At TestBash Manchester 2016, I did a 99-second talk on talking and being uncomfortable (approximately 33 minutes into the recording).

Since then, a few people have suggested that I turn this into a full talk about learning how to feel comfortable with being uncomfortable, by making yourself uncomfortable!

I feel that we can really benefit from being aware of our emotions, and from using the control we have over the situations we find - or put - ourselves in to channel our emotions in a way that benefits us, and potentially the people around us.

Situations to make yourself uncomfortable in and benefit from include:

  • Bug advocacy; putting forward a case for why something should be addressed
  • Speaking up in a meeting when you seem to disagree with everyone else
  • Admitting you don't understand something that seems obvious
  • Giving a talk or presentation
  • Approaching new people
  • Singing in public

In most sessions, speakers tell us about their own experiences and give advice on things we can try. But we rarely get to see them in action, doing the things they're suggesting to us. Just like how I nervously recited a cringe-worthy adaptation of 99 problems at the start of my 99 second talk, I also plan to make myself uncomfortable again in front of a live audience - so they can see for themselves that I practice what I preach and that nothing bad will happen if they do the same.

Takeaways

  • How to feel comfortable with being uncomfortable
  • How to actively make yourself uncomfortable
  • How practising and benefiting from the above makes you more comfortable and pro-active than ever!

Cassandra H. Leung

I've taken what I call a "scenic route" into testing, with previous roles including product owner, business analyst, recruiter and international account manager. I'm very pleased to now be focusing on testing with MaibornWolff in Munich, Germany.

With less than two years' testing experience under my belt, I've still got lots to learn, but I haven't let that stop me from getting involved in the testing community and sharing my experiences. It took some time for me to discover testing, but I fell in love with it in no time!

I speak at conferences and write blogs as a way to learn, engage with other people (testers and non-testers alike), and hopefully inspire others to share their stories too.


Lessons Learned From 60 Days of Performance Testing
Kim Knup

Imagine the following scenario.

You have a new commerce platform and you have just proven that it performs just as well as the old one.

As a team you have a goal of selling a certain number of items over a very specific time period and by using the current knowledge and data you have, you confirm this will not be a problem… until you receive a clearer picture of how the data will be structured and it becomes clear within 10 seconds that the system will not scale as expected.

What now?

Takeaways

  • Why we needed performance testing and how it helped us
  • The different types of performance tests we chose to employ
  • Why we needed to vary our performance testing strategy
  • The importance of understanding our production data volumes and throughput
  • The importance of understanding our real end-user’s behaviours

All of the above feed into helping you create a performance testing strategy that can not only help you gather information about your system under stress but also help your testing strategy in general.

Kim Knup

Kim is a tester by trade and co-organiser of the Brighton tester meet-up, #TestActually. Currently, she works as Head of Test for Matchbox Mobile, who provide services focusing on the Internet of Things, custom mobile apps and cloud services. She is passionate about usability and likes to do what the user (apparently) would never do.

Over the years she’s worked in linguistic games testing, and with big data archiving and asset management tools, as well as ticketing systems. She has also recruited and led a small team of testers. Her interests range from usability testing, to accessibility, to performance testing, as well as using tools to aid exploratory testing.


The Joys of QA Management
Jake Brower

You’re crazy, right?

You spend your days herding cats. You play the good cop. You play the bad cop. You’re a facilitator, a mentor and a firefighter. If all of these apply to you then you’re probably a QA Manager. And you might be thinking, there must be a better way?

What if your staff were continually enhancing their skills through constant learning? What if everyone had consistent mentoring? Are you building a team of testers who can dynamically jump in? With all of these in place, I’m sure you’d agree that your life would be easier.

Throughout my 23-year career in varying Quality roles, I’ve led transformations of Quality teams. QA is no longer a team of “weirdos” just waiting for work to be thrown over the wall. I’ve witnessed the struggles, the compromises, the pains. Where successful companies have needed more from Quality teams, I have helped to instill and embed quality throughout. Through active nurturing and my own passion for quality, I’ve helped people realise their own potential, and become advocates for quality across organisations.

In this talk, I will share my experiences on building highly successful QA teams. We will talk about finding great testers with strong communication abilities, hands-on technical skills, the flexibility to adapt to change, and who embrace cultural diversity. I will explain a few tips on interview techniques that work well vs those that always backfire. I will provide some knowledge on how to balance team dynamics with individual performance. We will also discuss detailed insights on why we love our jobs and what keeps us all going year after year.

Takeaways

  • Understanding the pitfalls and rewards for managing Quality resources
  • Hiring the right people for Quality teams: what works and what doesn’t
  • Why many Quality managers love what they do, year after year
  • Pushing Quality teams to inform, instill, and inject quality wherever possible

Jake Brower

Jake Brower is a Quality Engineering Manager, Scrum Master, and Agile Facilitator with Credit Karma in San Francisco and has worked in software testing since 1994, when he started his career building/testing web sites for local nonprofits in Seattle. From there, he worked at various companies testing everything from AS400 mainframe systems to commercial web sites to desktop and mobile apps. He has been in San Francisco now for 15 years and in that time he's led many Quality/company-wide transformations that focus on pushing Quality further left, driving process engineering/adoption, and guiding testing teams to hone their intuition skills beyond the scripted test.

Jake's true passions lie in managing people and building world-class teams. Having been in management for about 10 years, he has acquired deep skills in hiring/managing/mentoring and has a great love of learning what makes people tick as well as their motivations for doing what they do every day. Jake's other passion is electronic music production and he has toured the world making live techno on stages big and small.


Risk Based Testing Because You Can't Do EVERYTHING
Jenny Bramble

Do you feel like you're under the gun to test everything when your team rolls out a new feature? Do you worry that your teammates don't understand why you choose to test the items you do? Are there moments in your life where you deeply question if you can successfully complete the testing requirements of a sprint? Do you just really like cats?

If you answered yes to any of those items, this is the talk for you. We will define and discuss risk as a tangible metric, striving to break it down into components that you can use to talk to developers, product owners, business people, and any other stakeholders. Having a common language for what risk is and what it's made of allows us to decide what we should test and when we should test it. We will also talk about building a risk matrix and why we should even bother. Included will be a heavy dose of jokes, storytelling, anecdotes, and pictures of my cat.
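
To make the idea of risk as a tangible metric concrete, here is a minimal Python sketch assuming a simple likelihood-times-impact model; the scales, feature names and thresholds are illustrative assumptions rather than the exact model presented in the talk.

# Each feature gets a likelihood and an impact score from 1 (low) to 5 (high).
FEATURES = {
    "checkout payment flow": (3, 5),
    "profile avatar upload": (4, 2),
    "admin audit report":    (2, 3),
}

def risk_score(likelihood, impact):
    return likelihood * impact

def risk_band(score):
    if score >= 15:
        return "high - test first and deepest"
    if score >= 8:
        return "medium - test as time allows"
    return "low - spot-check only"

# Highest-risk features first, so testing effort follows the risk matrix.
for feature, (likelihood, impact) in sorted(
        FEATURES.items(), key=lambda item: -risk_score(*item[1])):
    score = risk_score(likelihood, impact)
    print(f"{feature}: score {score} ({risk_band(score)})")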

Takeaways

  • Defining the elements of risk to create a tangible metric that can be used in discussions about stories, features, projects, applications, etc
  • How to start the conversation about risk and other talking points
  • Creating a risk matrix
  • Jenny's cat is really pretty adorable
  • Elements of risk that are not always considered, like user morale, social karma, and other soft metrics

Jenny Bramble

Jenny came up through support and DevOps, cutting her teeth on that interesting role that acts as the 'translator' between customer requests from support and the development team. Her love of support and the human side of problems lets her find a sweet spot between empathy for the user and empathy for her team.

She's done testing, support, or human interfacing for most of her career. She finds herself happiest when she's making an impact on other people--whether it's helping find issues in applications, leading scrum, speaking at events, or just grabbing a coffee and chatting.

