
Create Performance Testing Protocol #599

Open · 3 of 7 tasks
abriggs-usgs opened this issue Aug 5, 2020 · 13 comments

abriggs-usgs commented Aug 5, 2020

Create Performance Testing Protocol

To standardize application performance and usability testing, let's work through an iterative process to develop a generalized protocol:

  • Contact other Water Mission Area Teams to see if they have a process for performance testing
  • Make a list of testing that has been done in the past
  • Decide if we should contact the Federal Crowdsource Mobile Testing Program https://digital.gov/services/mobile-application-testing-program/#here-8217-s-how-it-works
  • Generate a method for recruiting live testers
  • Generate a form and protocol for the live testers to follow
  • Decide on the order of operations for testing and rough out a generalized timeline
  • Generate a document indicating steps involved in performance testing
abriggs-usgs added the Makerspace Processes (Tasks needed to keep the team running smoothly) label Aug 5, 2020
abriggs-usgs self-assigned this Aug 5, 2020
abriggs-usgs (Author) commented

I contacted members of the Internet of Water and NWIS Modernization teams (Jim Kreft and Carl Schroedl, respectively) and asked if they had any procedures or advice related to performance testing. Jim said that they didn't have an established process for performance testing. Carl indicated the same, but he did note that they had used and still use JMeter; they are not fond of it and are looking into alternatives, plus JMeter is designed to work with Java applications. So, not much help available from other teams. I am waiting on a few more responses and will post if any valuable information arrives.

abriggs-usgs (Author) commented Aug 5, 2020

Past testing/analysis has come in five parts:

  1. Browserstack – we have an open source license for five developers. This tool loads our application onto real machines running a wide array of operating systems and browser types. We then connect our local machine to the Browserstack remote and screen share with that device. The process works okay, but it is a bit clunky and slow, and it lacks the interaction one would have with an actual physical touch-screen device. We mainly use it to 'smoke test' new changes on the platforms and operating systems with which the U.S. Government requires us to maintain compatibility. We usually do this smoke testing at the end of an iteration.
  2. Lighthouse – a free tool provided by Google and available in the development tools of Google Chrome. Lighthouse produces a report that can flag excessively large files or JavaScript bundles. Since only the 'beta' and 'prod' deployments use the Vue 'production' build and Amazon Web Services CloudFront caching, Lighthouse only reflects real-world performance on those tiers (it can also be run from Node; see the sketch after this list).
  3. vue-cli-plugin-webpack-bundle-analyzer – a plugin that produces a visual representation of the 'chunks' in the JavaScript bundles. This is useful if a Lighthouse report indicates that a bundle has a high load time; the visual breakdown makes it easier to see where it would be efficient to split bundles.
  4. Test on a phone – once the application is formatted and deployed to the 'beta' tier, it is accessible from a standard smartphone. Here we can test whether new features perform as intended on the particular device in use. This is usually done as soon as possible and whenever a new feature is added.
  5. Broaden the scope of testing to other devices – this is where we ask team members to test on their phones. This is best done when the application has all the desired features, has entered the final finish-and-polish stage, and is in the pre-release process.
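
As an aside on item 2: Lighthouse can also be driven from Node rather than from Chrome's dev tools, which would make it easier to capture scores for the 'beta' tier repeatably. A minimal sketch, assuming the `lighthouse` and `chrome-launcher` npm packages (CommonJS-compatible versions, as were current in 2020) and a placeholder URL:

```js
// Sketch only: run a Lighthouse performance audit against the 'beta' tier.
// Requires: npm install lighthouse chrome-launcher
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

  // Audit only the performance category; the URL is a placeholder.
  const result = await lighthouse('https://beta.example.gov/our-visualization/', {
    port: chrome.port,
    onlyCategories: ['performance'],
  });

  console.log('Performance score:', result.lhr.categories.performance.score * 100);
  await chrome.kill();
})();
```

Running something like this on a schedule would give a rough trend line for catching bundle-size regressions between releases.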

abriggs-usgs (Author) commented Aug 6, 2020

This is a rough list of browsers we should support and the associated operating systems:

| Operating System | Browsers in order of priority (based on percentage of users) |
| --- | --- |
| Windows | Chrome, Firefox, Edge |
| Mac | Safari, Chrome, Firefox, Edge |
| Android | Chrome, Samsung Internet, Firefox |
| iOS | Safari, Chrome |

amrhoades commented

How are we going to test Chrome when it's removed from our machines?

abriggs-usgs (Author) commented

Department policies do make testing a challenge, which is why we had to resort to 'beta' and hope people are willing to use personal devices.

amrhoades commented

12 different pairings:

  1. Windows - Chrome
  2. Windows - Firefox
  3. Windows - Edge
  4. Mac - Safari
  5. Mac - Chrome
  6. Mac - Firefox
  7. Mac - Edge
  8. Android - Chrome
  9. Android - Samsung Internet
  10. Android - Firefox
  11. iOS - Safari
  12. iOS - Chrome

At a maximum, we would need 12 different test users.
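
The count is just the sum of the rows in the support table above. A throwaway Node sketch (illustrative only, not project code) that derives the pairings:

```js
// Derive the OS/browser test matrix from the support table above.
const support = {
  Windows: ['Chrome', 'Firefox', 'Edge'],
  Mac: ['Safari', 'Chrome', 'Firefox', 'Edge'],
  Android: ['Chrome', 'Samsung Internet', 'Firefox'],
  iOS: ['Safari', 'Chrome'],
};

const pairings = Object.entries(support).flatMap(
  ([os, browsers]) => browsers.map((browser) => `${os} - ${browser}`)
);

console.log(pairings.length);     // 12
console.log(pairings.join('\n')); // one pairing per line
```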

amrhoades commented Aug 6, 2020

Here is a draft template form. @abriggs-usgs, looking for feedback/edits.

abriggs-usgs (Author) commented

The draft form makes sense to me. It is pretty generic at the moment, which opens the opportunity to add more directed questions about specific features once the use cases are known.

amrhoades commented

My thought process here was to create a template that could be duplicated and tailored to a project. Anything I should update/add/omit in the template?

abriggs-usgs (Author) commented

It seems reasonable to me to have the template generic. I can't think of much else to include without it addressing specifics of a particular application/visualization, so I think this is a good starting place. 👍

amrhoades commented

@mhines-usgs could you give this draft template form a look over and provide any feedback that comes to mind?

mhines-usgs (Contributor) commented

i think the form looks ok as far as a generic form goes. you may consider separating out the questions that ask for input that could more easily be a radio button (great, needs improvement) from the free-text 'tell me more about this', just for ease of use.

thinking more about the usability test i was creating for wbeep, it seems like you could scrape ideas from there if you want https://doimspp.sharepoint.com/:w:/r/sites/gs-wma-iidd-makerspace/_layouts/15/Doc.aspx?sourcedoc=%7B1D05846E-73B6-482C-9D82-F7262F7AFEA4%7D&file=WBEEP%20Usability%20Study%20-%20Draft.docx&action=default&mobileredirect=true

mhines-usgs (Contributor) commented

i'd say JMeter is very useful for non-java applications as long as there is an endpoint to hit; we use it to test the speed of the response of web services/data for GCMRC. if you want a demo, i can show you sometime. it's basically a list of queries against a database, but you could also set it up to do URL queries too.
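
Until someone sets up that JMeter demo, a very rough stand-in for the same idea (timing responses from a single endpoint) fits in a few lines of Node. A sketch, assuming a Node version with global fetch (v18+) and a placeholder URL; this is a sanity check, not a JMeter replacement:

```js
// Sketch: time repeated requests to one endpoint (the URL is a placeholder).
const { performance } = require('perf_hooks');

async function timeRequests(url, runs = 10) {
  const times = [];
  for (let i = 0; i < runs; i += 1) {
    const start = performance.now();
    const res = await fetch(url); // global fetch requires Node 18+
    await res.arrayBuffer();      // drain the body so timing includes transfer
    times.push(performance.now() - start);
  }
  const avg = times.reduce((a, b) => a + b, 0) / times.length;
  console.log(`${url}: avg ${avg.toFixed(1)} ms over ${runs} runs`);
}

timeRequests('https://example.usgs.gov/api/some-endpoint');
```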

abriggs-usgs removed their assignment Aug 13, 2020