
Round environment #48

Merged: 21 commits into scioip34:develop, Nov 14, 2022

Conversation

vagechirkov
Collaborator

example

@vagechirkov
Collaborator Author

I'm wondering how to make better videos 😅

@vagechirkov
Collaborator Author

I think the problem with the workflow is because of this:
[image]

I'm not sure how we can fix this... 🤔

@mezdahun

@mezdahun
Contributor

mezdahun commented Nov 8, 2022

@vagechirkov that arena looks awesome! 🤩

Regarding the Docker CLI automation, you could try adding the missing credentials to your GitHub account (in case you have DockerHub credentials). If that doesn't work, from now on you could make indirect PRs, first to feature branches in the main repo, instead of direct PRs from a fork feature branch to develop in the original repo. This automation is only executed on pushes and PRs to origin/develop and tests the installable dependencies in the Docker environment. Currently, if you change e.g. setup.py or the Dockerfile in a way that breaks the Docker image, there is no way to catch it before merging to develop, as the test must be bypassed manually.

@vagechirkov
Collaborator Author

vagechirkov commented Nov 8, 2022

@vagechirkov that arena looks awesome! 🤩

Regarding the Docker CLI automation, you could try adding the missing credentials to your GitHub account (in case you have DockerHub credentials). If that doesn't work, from now on you could make indirect PRs, first to feature branches in the main repo, instead of direct PRs from a fork feature branch to develop in the original repo. This automation is only executed on pushes and PRs to origin/develop and tests the installable dependencies in the Docker environment. Currently, if you change e.g. setup.py or the Dockerfile in a way that breaks the Docker image, there is no way to catch it before merging to develop, as the test must be bypassed manually.

Hmm, I see. I have the credentials in my fork, and the workflow works fine there. For example, I enabled the workflow for the round-environment branch and the Docker Image CI passes (https://github.com/vagechirkov/ABM/tree/round-environment), but it does not run in the PR from the fork (probably because of security restrictions on fork secrets).

[image]

@vagechirkov
Collaborator Author

We can run the Docker CI on all branches, or on all branches matching a specific pattern (e.g. "develop/**"), but not on the pull request itself. I don't think it's possible to pass secrets from the fork to run a workflow for a PR. @mezdahun, what do you think? In principle, we could first push into another branch and then merge into the develop branch, but I'm not sure that's an optimal solution 🤔.

@mezdahun
Contributor

mezdahun commented Nov 8, 2022

I see! In that case, the preferred workflow would be not to PR directly to develop in the original repo but to a feature branch first. In that PR, the general tests are run. Then I PR the feature branch to develop in the original repo and check the Docker CLI. Would that work for you?

@mezdahun
Contributor

mezdahun commented Nov 8, 2022

@vagechirkov What do you mean by it's not the optimal solution?

@mezdahun
Contributor

mezdahun commented Nov 8, 2022

@vagechirkov danger/danger-js#918 (comment) I guess something like this would be nicer, but it wouldn't solve the problem I mentioned before, namely that we would merge without checking whether the resulting Docker image is healthy.

@mezdahun
Contributor

mezdahun commented Nov 8, 2022

Another option is to do direct PRs from your fork to develop, but then I would check the result of the Docker CLI test on your feature branch before bypassing the check on the PR.

@vagechirkov
Collaborator Author

@vagechirkov What do you mean by it's not the optimal solution?

Well, in principle I think it's OK to go through a feature branch before merging commits into the main branch. It's just a bit redundant. 🙃

@vagechirkov
Collaborator Author

I'm wondering why it is necessary to run the Docker CI before merging changes into the repo. I think we can run the pytest tests in a separate workflow and use the Docker CI workflow only to push a new image to Docker Hub. 🤔

@mezdahun
Contributor

mezdahun commented Nov 8, 2022

The idea behind it, I guess, is that we want to keep our image on DockerHub always in sync with develop (as it will serve as the environment for HPC simulations). If something is merged that breaks the image, the two get out of sync, and you have to fix that on a different branch again (after the merge). This is why it is already checked on the PR side. What we could do here, given the credential issues with forks, is to separate the Docker build and the push into two different GitHub Actions files. The build would run on the PR side, letting us verify that the resulting image is okay; the push to DockerHub would only be activated upon merge. This should also be relatively easy, I think. Which one would be your preferred option?

  1. PR to feature branches instead of develop
  2. Check the Docker result on the remote branch, then bypass the check at the PR to develop
  3. Separate the Docker check into two workflows (build only on PR; build and push on merge)
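Option 3 could look roughly like the sketch below. These are not the repo's actual workflow files; the file names, image names, and secret names are assumptions for illustration. The key point is that the PR-side workflow only builds (no secrets needed, so it works from forks), while the push workflow runs on merges to develop with repo secrets available.

```yaml
# .github/workflows/docker-build.yml (hypothetical name)
# PR side: build only, no DockerHub credentials required.
name: Docker build (PR)
on:
  pull_request:
    branches: [develop]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: docker build -t abm:pr-check .

---
# .github/workflows/docker-push.yml (hypothetical name)
# Merge side: build and push using repository secrets.
name: Docker push (merge)
on:
  push:
    branches: [develop]
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
      - run: docker build -t "${{ secrets.DOCKERHUB_USER }}/abm:latest" .
      - run: docker push "${{ secrets.DOCKERHUB_USER }}/abm:latest"
```

Since `pull_request` workflows triggered from forks never receive repository secrets, this split is what lets the image check run on fork PRs at all.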

@mezdahun
Contributor

mezdahun commented Nov 8, 2022

Or you meant the same thing?

@vagechirkov
Collaborator Author

Or you meant the same thing?

Yes indeed, I like the third option! Let me try to build the image without login :)

@vagechirkov
Collaborator Author

Cool! I think this should work! It makes a lot of sense to only push the Docker image to Docker Hub after merging the PR.

@mezdahun
Contributor

mezdahun commented Nov 9, 2022

Awesome, I will do a review later and we can merge this!

Contributor

@mezdahun mezdahun left a comment

Awesome! A few things we should fix before merging:

  • fix placing new agents into the circular arena when the playground tool is used
  • update the position of the agents only once per time step
  • maybe comment a bit on why the check of the new angle is needed, i.e. how can the calculation yield a new angle that points outside of the circle?
  • remove magic numbers
  • BUG: if the resource patch is small and close to the border while the user increases the patch radius (detection range), the patch gets stuck at the wall.
  • Discuss: the resulting angles seem very sharp. We could think about a recursive turning method instead? Maybe not necessary if this approach works.

self.prove_orientation()

# relocate the agent back inside the circle
self.position[0] += (self.radius / 2) * np.cos(self.orientation)
Contributor

We are changing the position of the agents twice in a single time step: once in self.update() and once here.

Collaborator Author

What do you think about this solution? b990a13


# make sure that the new orientation points inside the circle and not too
# flat to the border
if np.abs(new_orientation - i_orientation) > np.pi / 4:
Contributor

I understand this fixes the behavior, but I am wondering why it is necessary.

Collaborator Author

I would guess it has something to do with the curvature of the boundary and the step size per update 😅
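To make the discussion above concrete, here is a minimal, self-contained sketch (hypothetical function and parameter names; this is not the repo's actual implementation) of the idea behind the fix: when an agent ends up outside a circular arena, point its new orientation straight toward the arena center and relocate it inside along that heading. A center-pointing heading can never be "too flat" to the border, which is the failure mode the `np.pi / 4` check in the PR guards against when the step size is large relative to the boundary's curvature.

```python
import numpy as np

def steer_back_into_circle(x, y, arena_center, arena_radius, step):
    """Hypothetical sketch: if (x, y) lies outside the circular arena,
    return a corrected position and an orientation pointing toward the
    arena center; otherwise leave the agent untouched (orientation None)."""
    cx, cy = arena_center
    if np.hypot(x - cx, y - cy) <= arena_radius:
        return x, y, None  # still inside, nothing to do
    # New heading: from the agent straight toward the arena center.
    new_orientation = np.arctan2(cy - y, cx - x)
    # Relocate the agent back inside along the new heading, so position
    # and orientation are corrected together in one update.
    x += step * np.cos(new_orientation)
    y += step * np.sin(new_orientation)
    return x, y, new_orientation
```

With a too-small step or a heading nearly tangent to the wall, an agent can re-exit the circle on the next update, which is presumably why the original code additionally validates the new angle.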

@@ -12,3 +12,24 @@ def test_random_walk():

assert dvel == 1.0
Contributor

For me the random walk test fails; can you please check?

Contributor

Ah, it is because of the env parameters; we might want to mock them so that a new local .env file won't influence the test results.

Collaborator Author

This is strange because all the tests in the workflow passed 🤔

Contributor

If you replace the .env file (i.e. after running the playground tool), the tests won't pass. Or do they pass for you even with a new .env file?

Collaborator Author

I never had any problems with tests before or after running the playground 😅🤔

@vagechirkov
Collaborator Author

@mezdahun Please let me know if there is anything else we need to fix before merging this PR. I'm not sure why tests are failing for you, but I think we're good as tests are passing in the GitHub workflow 🙂

@vagechirkov vagechirkov requested a review from mezdahun November 11, 2022 10:14
@mezdahun
Contributor

In the GitHub workflow the tests pass because it uses the same prototype .env file as always. Once you change your .env file locally and run pytest, they will fail (as soon as you change env parameters that your tests use via the function under test).

Usually this should be mocked, i.e. the env parameters are fixed for the given run of a test case. I guess this is out of scope here; it is enough if we make the tests work for the prototype .env file.

Should we look into the bug with increasing the detection range of a resource near the wall?

To reproduce: stop the simulation (with space) and move the resource patch near the wall. Increase the detection range so that the patch is now partially out of the arena. Resume the simulation (with space).

Expected behavior: the resource gets relocated and bounces back from the arena wall.

Actual behavior: the patch gets stuck and won't move anymore.
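One possible shape of a fix for this bug (a sketch under assumed names, not the repo's actual code): whenever the patch radius (detection range) changes, clamp the patch center so the whole patch fits back inside the circular arena instead of being left pinned against the wall.

```python
import numpy as np

def clamp_patch_into_arena(patch_center, patch_radius, arena_center, arena_radius):
    """Hypothetical helper: pull a resource patch back inside a circular
    arena after its radius (detection range) has grown past the border."""
    offset = np.asarray(patch_center, float) - np.asarray(arena_center, float)
    dist = float(np.linalg.norm(offset))
    max_dist = arena_radius - patch_radius
    if max_dist <= 0:
        # Patch no longer fits at all: park it at the arena center.
        return tuple(float(c) for c in arena_center)
    if dist > max_dist:
        # Move the center inward along the same radial direction.
        offset = offset / dist * max_dist
    new_center = np.asarray(arena_center, float) + offset
    return float(new_center[0]), float(new_center[1])
```

Calling something like this once after each radius change would also restore normal bounce behavior, since the patch is no longer stuck in an out-of-bounds state when the simulation resumes.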

@vagechirkov
Collaborator Author

In the GitHub workflow the tests pass because it uses the same prototype .env file as always. Once you change your .env file locally and run pytest, they will fail (as soon as you change env parameters that your tests use via the function under test).

Usually this should be mocked, i.e. the env parameters are fixed for the given run of a test case. I guess this is out of scope here; it is enough if we make the tests work for the prototype .env file.

Should we look into the bug with increasing the detection range of a resource near the wall?

To reproduce: stop the simulation (with space) and move the resource patch near the wall. Increase the detection range so that the patch is now partially out of the arena. Resume the simulation (with space).

Expected behavior: the resource gets relocated and bounces back from the arena wall.

Actual behavior: the patch gets stuck and won't move anymore.

Could you fix this .env file problem? Unfortunately, I cannot reproduce it in my setup. I will fix the relocation of the resource inside the arena 🙂

@mezdahun
Contributor

In the GitHub workflow the tests pass because it uses the same prototype .env file as always. Once you change your .env file locally and run pytest, they will fail (as soon as you change env parameters that your tests use via the function under test).
Usually this should be mocked, i.e. the env parameters are fixed for the given run of a test case. I guess this is out of scope here; it is enough if we make the tests work for the prototype .env file.
Should we look into the bug with increasing the detection range of a resource near the wall?
To reproduce: stop the simulation (with space) and move the resource patch near the wall. Increase the detection range so that the patch is now partially out of the arena. Resume the simulation (with space).
Expected behavior: the resource gets relocated and bounces back from the arena wall.
Actual behavior: the patch gets stuck and won't move anymore.

Could you fix this .env file problem? Unfortunately, I cannot reproduce it in my setup. I will fix the relocation of the resource inside the arena 🙂

09f67da should fix this. I mocked the movementparams submodule's fixed variables that are used by the random walk function when no parameters are passed to it. In that case (when the function parameters are None), it fetches the movementparams submodule's corresponding attributes (velocity, min and max theta), which are read from the root .env file upon startup of the stack.
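The mocking approach described above can be sketched like this (the module and attribute names are illustrative stand-ins, not the repo's actual ones): pin the module-level parameters for the duration of the test so a changed local .env file cannot affect the result.

```python
import types
from unittest import mock

# Stand-in for a submodule whose attributes are populated from .env at startup
movement_params = types.SimpleNamespace(exp_vel_mean=2.0,
                                        exp_theta_min=-0.5,
                                        exp_theta_max=0.5)

def random_walk(velocity=None):
    """Toy version of the pattern in the thread: fall back to the
    module-level parameter when no argument is passed."""
    return velocity if velocity is not None else movement_params.exp_vel_mean

def test_random_walk_mocked():
    # Pin the parameter so the assertion holds regardless of the local .env
    with mock.patch.object(movement_params, "exp_vel_mean", 1.0):
        assert random_walk() == 1.0
```

`mock.patch.object` restores the original attribute when the context exits, so each test case sees exactly the parameters it declares.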

Contributor

@mezdahun mezdahun left a comment

🚀

@vagechirkov
Collaborator Author

LGTM, let's merge! However, I don't have a merge button 😂

@mezdahun
Contributor

Oh sorry I thought you had one

@mezdahun mezdahun merged commit 0c687c2 into scioip34:develop Nov 14, 2022
@vagechirkov vagechirkov deleted the round-environment branch November 14, 2022 10:03
@vagechirkov
Collaborator Author

Oh sorry I thought you had one

This is probably set in the protected branch settings 🤔
