
Pending tasks for the E2E activity #1122

Open
yzainee-zz opened this issue Sep 4, 2019 · 9 comments
@yzainee-zz

There are a few pending tasks for the E2E activity that was carried out a couple of sprints back.

  1. The existing test cases use shorter (incorrect) timeouts than what is promised in the SLO docs. We probably need to revisit those test cases and change the numbers to acceptable durations.

  2. Bring the task of creating the fetcher function (an API to fetch the details of the previous day's stack report and create a manifest file out of it) to completion.

  3. Add more manifest files for all 3 ecosystems in a common location/repo.

deepak1725 commented Sep 12, 2019

Point 1 is done: #1126.
Point 2 is on hold.
Point 3 is on hold.

@deepak1725

Point 2: Fetcher function.
The idea is to test Stack Analyses on a different manifest file each time.
Technical architectural flow:
Venus report -> fetch the most used stacks -> parse a stack to generate a new manifest file -> run Stack Analyses on the newly generated manifest file.

Problem with this approach: the static stacks in E2E will always be the most used stacks in the Venus reports, and testing already-tested stacks doesn't make sense.
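A rough sketch of that flow in Python (the report shape, field names, and function names here are all hypothetical for illustration — the thread doesn't show the actual Venus report schema or the stack-analyses API):

```python
def most_used_stacks(venus_report, top_n=5):
    """Pick the most frequently used stacks from a (hypothetical) Venus
    report, assumed here to be a list of {"stack": [...], "count": int}."""
    ranked = sorted(venus_report, key=lambda entry: entry["count"], reverse=True)
    return [entry["stack"] for entry in ranked[:top_n]]

def stack_to_manifest(stack):
    """Turn a stack (assumed: a list of {name, version} deps) into an
    npm-style manifest dict that could be written out as package.json."""
    return {
        "name": "e2e-generated-manifest",
        "version": "1.0.0",
        "dependencies": {dep["name"]: dep["version"] for dep in stack},
    }
```

The approach problem described above shows up directly in this sketch: `most_used_stacks` is deterministic over the same report, so the E2E suite would keep regenerating manifests for the same already-tested stacks.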

deepak1725 commented Sep 15, 2019

Point 3: Adding more manifest files.
The idea is to run Stack Analyses on a random manifest file each time.
Technical architectural flow:
Fetch manifest files from one of the most popular projects in an ecosystem (e.g. Che for Node) -> select a random subset of dependencies from them and generate a new manifest file -> run Stack Analyses on it.

Problem with this approach: running Stack Analyses on randomly generated manifest files introduces a high chance of E2E failure, and then all merges to production would be blocked for reasons that can't be fixed.
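For illustration, the "random subset of dependencies" step could look like this sketch (the function name, seeding, and the manifest shape are assumptions, not the actual implementation):

```python
import random

def random_manifest(source_manifest, k=10, seed=None):
    """Generate a new npm-style manifest from a random subset of the
    dependencies in an existing one (e.g. Che's package.json).

    Passing a seed makes the selection reproducible, which would at least
    let a failing E2E run be replayed with the same manifest."""
    deps = list(source_manifest["dependencies"].items())
    rng = random.Random(seed)
    picked = rng.sample(deps, min(k, len(deps)))
    return {
        "name": "e2e-random-manifest",
        "version": "1.0.0",
        "dependencies": dict(picked),
    }
```

Without a fixed seed, every run tests a different dependency combination — which is exactly what makes the failures described above hard to reproduce and debug.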

tisnik commented Sep 16, 2019

Actually, the approach problem for point 3 applies to point 2 as well, since the Venus report itself won't generate constant data all the time.

yzainee-zz commented Sep 16, 2019

@deepak1725 Your interpretation of point 3 is incorrect. Point 3 states that we add more manifest files.
Currently, when running stack analysis for, say, npm, there is one manifest with 10 deps, one with 20 deps, and so on. The idea is to have multiple manifest files with 10 deps or 20 deps, so that when a test case runs, it picks a manifest at random. There is no step of creating manifests out of manifests.

Also, we do NOT refer to repos to create manifest files. The files are pre-created.
cc: @tisnik
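The random pick over pre-created files could be as simple as this sketch (the directory layout `<dir>/<N>-deps/*.json` and the helper name are hypothetical, chosen just to show the idea):

```python
import glob
import random

def pick_manifest(manifest_dir, deps_count):
    """Pick one of several pre-created manifest files at random.

    Assumes a layout like manifests/npm/10-deps/*.json, where each
    directory holds several alternative manifests of the same size."""
    candidates = glob.glob(f"{manifest_dir}/{deps_count}-deps/*.json")
    if not candidates:
        raise FileNotFoundError(f"no pre-created manifests for {deps_count} deps")
    return random.choice(candidates)
```

Since every candidate file is pre-created and presumably known to pass, this variant avoids the "randomly generated manifest" failure mode from deepak1725's reading of point 3.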

tisnik commented Sep 16, 2019

The problem is in the random selection of stacks to test. Either we trust the platform, and then we should check all stacks; or we don't trust it, and then a high number of semi-random failures happens. Why is random selection needed for something as precise as an E2E test? Lack of machine time?

@yzainee-zz

@tisnik As I said earlier, I don't disagree with your point. Your point is valid. We can discuss this with a wider audience and close it. My comment above was to clarify @deepak1725's comment, as what he had mentioned was not completely correct.

tisnik commented Sep 16, 2019

@yzainee Yup, I agree with you as well.

@deepak1725

@yzainee Feel free to modify it.

@tisnik tisnik removed their assignment Oct 8, 2019