
Motion planner improvements and instance memory setup #1

Open
wants to merge 288 commits into base: ros2-migration

Commits (288)
a1ae198
read action config
Feb 5, 2024
32cb2a9
revert pad
Feb 5, 2024
197bc0f
add config entries from real robot
Feb 5, 2024
655cf61
update simplify v1
cpaxton Feb 5, 2024
8c390d9
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
cpaxton Feb 5, 2024
b539c08
fix a few bugs
Feb 5, 2024
3d34a50
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
cpaxton Feb 5, 2024
d4c291a
update
cpaxton Feb 5, 2024
9601fef
padding must be an int; footprint should match actual hardware
cpaxton Feb 6, 2024
d5888d7
update gpt stuff
cpaxton Feb 6, 2024
1e04109
saving information for placing stuff
cpaxton Feb 6, 2024
9adc742
debug motion by adding constraints
Feb 7, 2024
85eb779
cleanup
Feb 7, 2024
022d3cf
visualize positions
cpaxton Feb 7, 2024
a2efcf6
update things to print out more svms
cpaxton Feb 7, 2024
5d00ca8
add motion planning debug code
cpaxton Feb 7, 2024
1e7aebb
code cleanup and modify margins around the edge
cpaxton Feb 9, 2024
e8b46ca
make sure frontier is a bit back from actual edges of frontier regions
cpaxton Feb 9, 2024
d117946
changing params
cpaxton Feb 9, 2024
db2b4d6
safety threshold fixed and fix other things
cpaxton Feb 9, 2024
20e0514
add retry on failure
cpaxton Feb 9, 2024
6bd8cc3
simplify basic version works - no verification
cpaxton Feb 15, 2024
c600ece
update
cpaxton Feb 15, 2024
26e7516
basic simplification in and supported
cpaxton Feb 15, 2024
6e258cd
update some parameters
cpaxton Feb 21, 2024
e7cfc51
changing some presets to explore more thoroughly
cpaxton Feb 21, 2024
b0691bf
instance embedding will take masked cropped image as input now + SVM …
Feb 28, 2024
a9298ea
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
Feb 28, 2024
d05fa53
add outputs for debugging
cpaxton Feb 29, 2024
e6a756e
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
cpaxton Feb 29, 2024
c38a47e
planning issues; passing planner out; fixing some things in code
cpaxton Mar 1, 2024
6079e1f
fix an issue with simplification which was causing us to skip some turns
cpaxton Mar 1, 2024
4597095
verify things, add shortcutting as an option, break out motion planning
cpaxton Mar 1, 2024
6cea495
update voxel map to fix some interp issues
cpaxton Mar 1, 2024
9a0ed2a
fixing some things
cpaxton Mar 1, 2024
24af2ce
update
cpaxton Mar 1, 2024
2fefeb6
fix that some variables mutate
cpaxton Mar 1, 2024
eeea3e6
remove tons of print statements
cpaxton Mar 1, 2024
aa937b3
motion planning config
cpaxton Mar 1, 2024
dd7dd48
update tolerances and simplify for 2d/se(2) motions
cpaxton Mar 1, 2024
6b8270a
fixing many issues with trajectory simplfication
cpaxton Mar 2, 2024
e6a4470
fix common issues with short trajectories
cpaxton Mar 2, 2024
a0e8647
code cleanup
cpaxton Mar 4, 2024
2b27172
distance incrementing fixed
cpaxton Mar 4, 2024
9561814
update simplify code again - make it a bit more robust to floating point
cpaxton Mar 4, 2024
adeb3b3
fix issues and be done
cpaxton Mar 4, 2024
dcf7936
add some debug information
cpaxton Mar 4, 2024
6654821
improve debug visualizations on real data
cpaxton Mar 4, 2024
5f2a56e
floor height was wrong
cpaxton Mar 4, 2024
50ef8c9
more little changes to improve performance
cpaxton Mar 6, 2024
c7908ea
remove debug code
cpaxton Mar 6, 2024
44cd033
debug real world
Mar 7, 2024
a0e42e0
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
Mar 7, 2024
6d41f4e
add owlv2 templated planning -- need testing on real robot
Mar 10, 2024
304542e
Merge branch 'main' into xiaohan/svm_vlm_sim
cpaxton Mar 13, 2024
5dbebe0
add sophuspy to environment yaml
cpaxton Mar 13, 2024
050c409
add transformers
cpaxton Mar 13, 2024
d83d36b
add some planning options, remove something which was shrinking the
cpaxton Mar 13, 2024
75a01d3
update transformers
cpaxton Mar 13, 2024
5ce5e41
some changes
cpaxton Mar 13, 2024
4dabdde
transformers version
cpaxton Mar 13, 2024
8f17df3
update space
cpaxton Mar 14, 2024
0030eda
conversion here
cpaxton Mar 14, 2024
a5d6364
add config params for instance filtering
Mar 14, 2024
8e8e49e
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
Mar 14, 2024
2e01f3d
fix failing motion planner tests
cpaxton Mar 14, 2024
11d25f4
no need for converstion script
cpaxton Mar 14, 2024
5295693
instance map tuning for real data and symbolic relationship extractio…
Mar 18, 2024
d8fe11a
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
Mar 18, 2024
9eb0713
add visualizations for collision geometry
cpaxton Mar 18, 2024
022e20a
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
cpaxton Mar 18, 2024
bf2fcc3
collisions might have been wrong before
cpaxton Mar 18, 2024
1b0de96
some changes
Mar 19, 2024
930e4fa
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
Mar 19, 2024
cd9af73
improved visualizations for everything
cpaxton Mar 19, 2024
02a61be
update agent to clean up some things
cpaxton Mar 19, 2024
b23254e
rotation steps, fixes and better visualizations
cpaxton Mar 19, 2024
94fadc7
update code
cpaxton Mar 20, 2024
67dc7eb
split into plan to bounds and plan to instance; add support for queries
cpaxton Mar 20, 2024
7c863c5
motion plan for reading things - now a bit more conservative, with
cpaxton Mar 20, 2024
79d0089
implement TAMP with svm, tested in sim
Mar 20, 2024
a7bb7d4
merge
Mar 20, 2024
c1cd4ca
TAMP on real robot, need testing
Mar 20, 2024
ca4d76d
fix config for real robot
cpaxton Mar 21, 2024
2f09fd7
add test vlm planning on offline data
Mar 24, 2024
19a343f
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
Mar 24, 2024
39020fb
write plan into file and move stuff from cortex to homerobot
Mar 27, 2024
f29ebc3
update gitmodules
hello-cpaxton Apr 12, 2024
1c510ac
removed spot sim2real
hello-cpaxton Apr 12, 2024
36c47ae
remove all spot code
hello-cpaxton Apr 15, 2024
6d8574f
Merge branch 'ros2-migration' of github.com:NYU-robot-learning/robot-…
hello-cpaxton Apr 19, 2024
fd0a10f
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
hello-cpaxton Apr 19, 2024
26cf570
fix bugs in sim and add ontopof predicates
Apr 19, 2024
171bf00
use cached plans for accelerating reachable instances computation
Apr 19, 2024
ff3072b
Merge branch 'ros2-migration' of github.com:NYU-robot-learning/robot-…
hello-cpaxton Apr 19, 2024
e77d5f4
cleanup
hello-cpaxton Apr 19, 2024
dc6b850
fix ros2 camera stuff
hello-cpaxton Apr 19, 2024
fd84262
Merge branch 'ros2-migration' of github.com:NYU-robot-learning/robot-…
hello-cpaxton Apr 19, 2024
2b291da
add default cfg
hello-cpaxton Apr 19, 2024
9a4a901
add example code
hello-cpaxton Apr 19, 2024
61608e8
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
hello-cpaxton Apr 22, 2024
520a782
Merge branch 'ros2-migration' of github.com:NYU-robot-learning/robot-…
hello-cpaxton Apr 22, 2024
2fe3393
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton Apr 22, 2024
49c9048
read svm file can now process pkls that don't include homerobot obs
Apr 23, 2024
419af7a
read svm file can now process pkls that don't include homerobot obs
Apr 23, 2024
e2f1c79
debug
Apr 23, 2024
d9eb312
Merge branch 'ros2-migration' of github.com:NYU-robot-learning/robot-…
hello-cpaxton Apr 23, 2024
868a1ec
update
hello-cpaxton Apr 23, 2024
336630a
eval on peiqi's benchmark
Apr 24, 2024
0cfa975
Merge branch 'xiaohan/svm_vlm_sim' of github.com:facebookresearch/hom…
Apr 24, 2024
30474a0
change to a valid start pose
Apr 24, 2024
fee9c3d
Merge branch 'ros2-migration' of github.com:NYU-robot-learning/robot-…
hello-cpaxton Apr 24, 2024
544a447
Merge branch 'xiaohan/svm_vlm_sim' of github.com:cpaxton/home-robot i…
hello-cpaxton Apr 24, 2024
0001970
voxel memory codes
peiqi-liu Apr 25, 2024
7ca2af7
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
Apr 25, 2024
151417a
fixing some issues with broken files
hello-cpaxton Apr 26, 2024
480249e
add installer
hello-cpaxton Apr 26, 2024
eec1816
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton Apr 26, 2024
aa46dc0
update
hello-cpaxton Apr 26, 2024
5d1eff4
add ok robot manip
peiqi-liu Apr 27, 2024
3928bf9
fix bugs
peiqi-liu Apr 27, 2024
aa5f53d
fix bugs
peiqi-liu Apr 27, 2024
6a357de
update visualization
peiqi-liu Apr 27, 2024
fa8f3fc
ros2 implementation
grail-stretch Apr 27, 2024
ce53032
fix urdf bug
peiqi-liu Apr 27, 2024
d771068
update stream when moving
peiqi-liu Apr 29, 2024
20a0620
add anygrasp
Apr 29, 2024
6a23546
make some changes for compatibility
hello-cpaxton Apr 29, 2024
43556e3
update
hello-cpaxton Apr 29, 2024
96b7054
dynamic voxel map
Apr 29, 2024
03f0764
add depth filter in removing points
peiqi-liu Apr 29, 2024
d6d1f2d
Merge branch 'peiqi/ros2-migration' of github.com:NYU-robot-learning/…
hello-cpaxton Apr 30, 2024
621a30d
fixes
hello-cpaxton Apr 30, 2024
f71c717
update things here
hello-cpaxton Apr 30, 2024
03b8011
error messages
hello-cpaxton Apr 30, 2024
202ff98
update strech urdf
hello-cpaxton Apr 30, 2024
1659312
make fixes to sol ver
hello-cpaxton Apr 30, 2024
051956a
updates
hello-cpaxton Apr 30, 2024
5d128fd
making fixes
hello-cpaxton Apr 30, 2024
d34d1f9
instacne mem setup
hello-cpaxton Apr 30, 2024
4c1988f
fixing error case
hello-cpaxton Apr 30, 2024
6e9071e
fix launch file
hello-cpaxton Apr 30, 2024
972b210
fi some issues
hello-cpaxton Apr 30, 2024
96bf976
update robot control; exploration works fine now
hello-cpaxton Apr 30, 2024
88896ea
no rotation
hello-cpaxton Apr 30, 2024
d28491b
fixes and things
hello-cpaxton Apr 30, 2024
483f38e
update ROS navigation config
hello-cpaxton Apr 30, 2024
e9f483e
update python server code
hello-cpaxton Apr 30, 2024
9594af9
add i guess
hello-cpaxton Apr 30, 2024
e5d5eac
update things
hello-cpaxton Apr 30, 2024
3d0a970
network setup works
hello-cpaxton Apr 30, 2024
18c81f2
move it
hello-cpaxton Apr 30, 2024
892a870
sending messages is very slow this way
hello-cpaxton Apr 30, 2024
3c965b3
updates to add jpeg compression
hello-cpaxton Apr 30, 2024
bad9bb1
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton Apr 30, 2024
e847637
updates
hello-cpaxton Apr 30, 2024
db9a2b8
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton Apr 30, 2024
2eecec8
add some info to mak esure it gets printed right
hello-cpaxton Apr 30, 2024
d900983
update receiver data
hello-cpaxton Apr 30, 2024
f2bff6f
compression
hello-cpaxton Apr 30, 2024
911973c
jpeg2000 compression
hello-cpaxton Apr 30, 2024
4253b6c
update receiver
hello-cpaxton Apr 30, 2024
17e768b
fixing server stuff
hello-cpaxton May 1, 2024
7067221
update sender and receiver, now works both ways
hello-cpaxton May 1, 2024
69a8bf2
sending commands around works well
hello-cpaxton May 1, 2024
d7e3bb1
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 1, 2024
93dee53
moving around
hello-cpaxton May 1, 2024
0ab6261
some updates to move robot around
hello-cpaxton May 1, 2024
6ae83b1
make fixes
hello-cpaxton May 1, 2024
c87e305
updat
hello-cpaxton May 1, 2024
0111cf8
update
hello-cpaxton May 1, 2024
85ee503
update
hello-cpaxton May 1, 2024
8a0688d
update all this stuff
hello-cpaxton May 1, 2024
f90f8eb
add config
hello-cpaxton May 1, 2024
95e0a9e
fix name
hello-cpaxton May 1, 2024
ce7475d
fix
hello-cpaxton May 1, 2024
76b3328
updat
hello-cpaxton May 1, 2024
65e8fbb
fixing some things
hello-cpaxton May 1, 2024
479bf82
cleanup and fix
hello-cpaxton May 1, 2024
b4e9b14
update network stuff for global planner
hello-cpaxton May 1, 2024
8fcbf92
update code
hello-cpaxton May 1, 2024
df911ad
step fixes
hello-cpaxton May 1, 2024
84fb879
some things
hello-cpaxton May 1, 2024
61c92f8
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 1, 2024
846ab5f
fixing and showing the action step
hello-cpaxton May 1, 2024
d949dc1
update zmq wrapper stuff
hello-cpaxton May 1, 2024
1547d6b
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 1, 2024
1f08b93
do some updates
hello-cpaxton May 1, 2024
e670a0c
update receiver to add executing trajectories
hello-cpaxton May 2, 2024
44ee51c
running exploration
hello-cpaxton May 2, 2024
c932759
separate send and recv threads
hello-cpaxton May 2, 2024
ae3c098
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 2, 2024
6ce2e75
update things
hello-cpaxton May 2, 2024
be44b11
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 2, 2024
4094a66
my robot cannot explore but things seem to be working ok otherwise
hello-cpaxton May 2, 2024
3e2e79b
exploration
hello-cpaxton May 2, 2024
4a7e25c
some robot things
hello-cpaxton May 2, 2024
eb8fef7
8x in place rotations
hello-cpaxton May 2, 2024
4987579
update stuff with the receiver
hello-cpaxton May 2, 2024
cc0d7fd
make sure to delete the receiver at end of program
hello-cpaxton May 2, 2024
e8803a6
fixing a couple things
hello-cpaxton May 2, 2024
d10bad1
update goal code
hello-cpaxton May 2, 2024
c1f7353
updates
hello-cpaxton May 2, 2024
9a8501b
update to wait for at goal
hello-cpaxton May 2, 2024
98f4653
simplify plans and show visuals
hello-cpaxton May 2, 2024
9609c1e
no in place rotation visuals
hello-cpaxton May 2, 2024
5a3a2c3
switch to async version of slam code
hello-cpaxton May 2, 2024
7f4f51c
remove blocking
hello-cpaxton May 2, 2024
bc8c133
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 2, 2024
7ecaab2
block things
hello-cpaxton May 2, 2024
199c396
moving the robot around
hello-cpaxton May 2, 2024
1d62984
decrease "not moving" limits
hello-cpaxton May 2, 2024
1ffd06a
make sure robot syncs up and we see the robot moving around properly
hello-cpaxton May 2, 2024
8093534
kill stub
hello-cpaxton May 2, 2024
aa8e76c
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 2, 2024
d6a519e
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 3, 2024
9512558
add odom node
hello-cpaxton May 3, 2024
4b337e9
add semantic sensor
hello-cpaxton May 3, 2024
ae2cf69
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 3, 2024
91d2239
update odom tf
hello-cpaxton May 3, 2024
26805d3
update scripts
hello-cpaxton May 3, 2024
da87b6e
upate robot agent to add a fix
hello-cpaxton May 3, 2024
6ae0431
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 3, 2024
57934d6
update demo stuff
hello-cpaxton May 3, 2024
c908999
updates
hello-cpaxton May 20, 2024
b20ed62
updates
hello-cpaxton May 20, 2024
1117f81
doing some code cleanup in the demo script
hello-cpaxton May 20, 2024
c4d33ca
update client
hello-cpaxton May 20, 2024
974528f
update
hello-cpaxton May 20, 2024
137a217
update
hello-cpaxton May 20, 2024
06c3da4
test arm motion
hello-cpaxton May 20, 2024
ec513ec
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
hello-cpaxton May 20, 2024
14bce4d
add some code to debug simple arm motions
hello-cpaxton May 20, 2024
0aed09e
vis code out from test script
hello-cpaxton May 20, 2024
ca05c71
make some changes to the arm control code
hello-cpaxton May 20, 2024
cf91ff7
make sure that we sleep long enough to get into the right space
hello-cpaxton May 20, 2024
e535281
cleanup
hello-cpaxton May 21, 2024
d9a5549
fix startup
hello-cpaxton May 21, 2024
2d17ae9
compatible with ROS setup
hello-cpaxton May 21, 2024
f711fea
update
hello-cpaxton May 21, 2024
c9795e0
socket tweaks?
hello-cpaxton May 21, 2024
bb2dda5
fixing some issues with connection
hello-cpaxton May 21, 2024
d7c567d
code cleanup in server
hello-cpaxton May 24, 2024
4f153d8
updates to config file
hello-cpaxton May 24, 2024
3344ca9
traversible bugs when exploring
hello-cpaxton May 24, 2024
2a0eff0
do not use habitat format
hello-cpaxton May 30, 2024
43fe64b
Merge branch 'cpaxton/ros2-migration' of github.com:cpaxton/home-robo…
hello-cpaxton May 30, 2024
65b7a00
sending gripper commands
hello-cpaxton Jun 3, 2024
bddc826
updates
Jun 27, 2024
d318093
Merge branch 'cpaxton/ros2-migration' of github.com:NYU-robot-learnin…
Jun 27, 2024
5 changes: 4 additions & 1 deletion .gitignore
@@ -86,4 +86,7 @@ projects/habitat_ovmm/configs/agent/generated/*
# Remove ros2 build, install and log
/build/*
/install/*
/log/*
/log/*

# VLM output
vlm_plan.txt
3 changes: 0 additions & 3 deletions .gitmodules
@@ -23,6 +23,3 @@
[submodule "src/home_robot/home_robot/perception/detection/grounded_sam/Grounded-Segment-Anything"]
path = src/home_robot/home_robot/perception/detection/grounded_sam/Grounded-Segment-Anything
url = https://github.com/IDEA-Research/Grounded-Segment-Anything
[submodule "src/third_party/spot-sim2real"]
path = src/third_party/spot-sim2real
url = [email protected]:Jdvakil/spot-sim2real.git
74 changes: 74 additions & 0 deletions install.sh
@@ -0,0 +1,74 @@
#!/usr/bin/env bash
# This script (c) 2024 Chris Paxton under the MIT license: https://opensource.org/licenses/MIT
# This script is designed to install the HomeRobot/StretchPy environment.
export PYTORCH_VERSION=2.1.2
export CUDA_VERSION=11.8
export PYTHON_VERSION=3.10
ENV_NAME=home-robot
CUDA_VERSION_NODOT="${CUDA_VERSION//./}"
export CUDA_HOME=/usr/local/cuda-$CUDA_VERSION
echo "=============================================="
echo " INSTALLING STRETCH AI TOOLS"
echo "=============================================="
echo "---------------------------------------------"
echo "Environment name: $ENV_NAME"
echo "PyTorch Version: $PYTORCH_VERSION"
echo "CUDA Version: $CUDA_VERSION"
echo "Python Version: $PYTHON_VERSION"
echo "CUDA Version No Dot: $CUDA_VERSION_NODOT"
echo "---------------------------------------------"
echo "Notes:"
echo " - This script will remove the existing environment if it exists."
echo " - This script will install the following packages:"
echo " - pytorch=$PYTORCH_VERSION"
echo " - pytorch-cuda=$CUDA_VERSION"
echo " - pyg"
echo " - torchvision"
echo " - python=$PYTHON_VERSION"
echo " - This script will install the following packages from source:"
echo " - pytorch3d"
echo " - torch_scatter"
echo " - torch_cluster"
echo " - Python version 3.12 is not supported by Open3d."
echo "---------------------------------------------"
echo "Currently:"
echo " - CUDA_HOME=$CUDA_HOME"
echo " - python=`which python`"
echo "---------------------------------------------"
read -p "Does all this look correct? (y/n) " yn
case $yn in
y ) echo "Starting installation...";;
n ) echo "Exiting...";
exit;;
* ) echo Invalid response!;
exit 1;;
esac
mamba env remove -n $ENV_NAME -y
mamba create -n $ENV_NAME -c pyg -c pytorch -c nvidia pytorch=$PYTORCH_VERSION pytorch-cuda=$CUDA_VERSION pyg torchvision python=$PYTHON_VERSION -y
source activate $ENV_NAME

# Now install pytorch3d a bit faster
mamba install -c fvcore -c iopath -c conda-forge fvcore iopath -y

pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"
pip install torch_cluster -f https://pytorch-geometric.com/whl/torch-${PYTORCH_VERSION}+${CUDA_VERSION_NODOT}.html
pip install torch_scatter -f https://pytorch-geometric.com/whl/torch-${PYTORCH_VERSION}+${CUDA_VERSION_NODOT}.html
pip install torch_geometric
pip install -e ./src[dev]
# TODO: should we remove this?
# Open3d is an optional dependency - not included in setup.py since not supported on 3.12
# pip install open3d scikit-fmm

echo "=============================================="
echo " INSTALLATION COMPLETE"
echo "Finished setting up the StretchPy environment."
echo "Environment name: $ENV_NAME"
echo "CUDA Version: $CUDA_VERSION"
echo "Python Version: $PYTHON_VERSION"
echo "CUDA Version No Dot: $CUDA_VERSION_NODOT"
echo "CUDA_HOME=$CUDA_HOME"
echo "python=`which python`"
echo "You can start using it with:"
echo ""
echo " source activate $ENV_NAME"
echo "=============================================="
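One detail of the installer worth sanity-checking is its version-string handling: the bash expansion `${CUDA_VERSION//./}` strips the dots from the CUDA version, and the result is spliced into the torch-geometric wheel index URL used for `torch_cluster` and `torch_scatter`. A minimal Python sketch of the same string manipulation (versions copied from the script; illustration only, not part of the PR):

```python
# Mirror of the installer's version-string handling (illustration only).
PYTORCH_VERSION = "2.1.2"
CUDA_VERSION = "11.8"

# Equivalent of the bash parameter expansion ${CUDA_VERSION//./}
cuda_version_nodot = CUDA_VERSION.replace(".", "")

# URL pattern the script passes to `pip install -f` for the geometric wheels
wheel_index = (
    "https://pytorch-geometric.com/whl/"
    f"torch-{PYTORCH_VERSION}+{cuda_version_nodot}.html"
)
print(cuda_version_nodot)  # 118
print(wheel_index)
```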
32 changes: 32 additions & 0 deletions projects/habitat_ovmm/configs/env/hssd_eval_robot_agent.yaml
@@ -0,0 +1,32 @@
NO_GPU: 0 # 1: ignore IDs above and run on CPU, 0: run on GPUs with IDs above
NUM_ENVIRONMENTS: 1 # number of environments (per agent process)
DUMP_LOCATION: datadump # path to dump models and log
EXP_NAME: eval_hssd # experiment name
VISUALIZE: 0 # 1: render observation and predicted semantic map, 0: no visualization
PRINT_IMAGES: 1 # 1: save visualization as images, 0: no image saving
GROUND_TRUTH_SEMANTICS: 1 # 1: use ground-truth semantics (for debugging / ablations)
seed: 0 # seed
SHOW_RL_OBS: False # whether to show the observations passed to RL policies, for debugging

ENVIRONMENT:
forward: 0.25 # forward motion (in meters)
turn_angle: 30.0 # agent turn angle (in degrees)
frame_height: 640 # first-person frame height (in pixels)
frame_width: 480 # first-person frame width (in pixels)
camera_height: 1.31 # camera sensor height (in metres)
hfov: 42.0 # horizontal field of view (in degrees)
min_depth: 0.0 # minimum depth for depth sensor (in metres)
max_depth: 10.0 # maximum depth for depth sensor (in metres)
num_receptacles: 21
category_map_file: projects/real_world_ovmm/configs/example_cat_map.json
use_detic_viz: False
evaluate_instance_tracking: False # whether to evaluate the built instance map against groundtruth instance ids
use_opencv_camera_pose: True # whether to convert camera pose to opencv convention, set False for OVMM challenge baseline and True for voxel code

EVAL_VECTORIZED:
simulator_gpu_ids: [1, 2, 3, 4, 5, 6, 7] # IDs of GPUs to use for vectorized environments
split: val # eval split
num_episodes_per_env: null # number of eval episodes per environment
record_videos: 0 # 1: record videos from printed images, 0: don't
record_planner_videos: 0 # 1: record planner videos (if record videos), 0: don't
metrics_save_freq: 5 # save metrics after every n episodes
2 changes: 1 addition & 1 deletion projects/habitat_ovmm/eval_robot_agent.py
@@ -14,13 +14,13 @@
get_habitat_config,
get_omega_config,
)
from utils.env_utils import create_ovmm_env_fn

from home_robot.agent.multitask import get_parameters
from home_robot.agent.multitask.robot_agent import RobotAgent
from home_robot.perception import create_semantic_sensor
from home_robot.utils.rpc import get_vlm_rpc_stub
from home_robot_sim.ovmm_sim_client import OvmmSimClient, SimGraspPlanner
from home_robot_sim.utils.env_utils import create_ovmm_env_fn

os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"
31 changes: 31 additions & 0 deletions projects/habitat_ovmm/test_owlv2.py
@@ -0,0 +1,31 @@
import requests
import torch
from PIL import Image
from transformers import Owlv2ForObjectDetection, Owlv2Processor

processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)

# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process_object_detection(
outputs=outputs, threshold=0.1, target_sizes=target_sizes
)

i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]

# Print detected objects and rescaled box coordinates
for box, score, label in zip(boxes, scores, labels):
box = [round(i, 2) for i in box.tolist()]
print(
f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}"
)
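For context on the last step of the script above: `post_process_object_detection` maps the model's normalized box predictions into pixel coordinates using the `(height, width)` target size. Roughly (glossing over OWLv2's internal square padding), the rescaling amounts to this pure-Python sketch, with made-up box values:

```python
def rescale_box(box, height, width):
    """Scale a normalized [x0, y0, x1, y1] box to pixel coordinates."""
    x0, y0, x1, y1 = box
    return [
        round(x0 * width, 2),   # x coordinates scale by image width
        round(y0 * height, 2),  # y coordinates scale by image height
        round(x1 * width, 2),
        round(y1 * height, 2),
    ]

# Hypothetical normalized detection on a 480 (height) x 640 (width) image
print(rescale_box([0.1, 0.2, 0.5, 0.6], height=480, width=640))
```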
155 changes: 155 additions & 0 deletions projects/habitat_ovmm/test_real_traj.ipynb
@@ -0,0 +1,155 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"root_path = \"/home/xiaohan/accel-cortex/\"\n",
"\n",
"import pickle\n",
"\n",
"import numpy as np\n",
"\n",
"# with open(root_path + \"debug_svm.pkl\", \"rb\") as f:\n",
"# svm = pickle.load(f)\n",
"\n",
"\n",
"# observations = svm.observations\n",
"# with open(root_path + \"annotation.pkl\", \"rb\") as f:\n",
"# annotation = pickle.load(f)\n",
"with open(root_path + \"stretch_output_2024-03-13_15-23-13.pkl\", \"rb\") as f:\n",
" obs_history = pickle.load(f)\n",
"\n",
"# print(annotation[\"task\"])\n",
"# key_frames = []\n",
"# key_obs = []\n",
"# for idx, obs in enumerate(observations):\n",
"# perceived_ids = np.unique(obs.obs.task_observations[\"gt_instance_ids\"])\n",
"# for target_id in annotation[\"object_ids\"]:\n",
"# if (target_id + 1) in perceived_ids:\n",
"# print(\"target observation found\")\n",
"# key_frames.append(obs)\n",
"# key_obs.append(obs_history[idx])\n",
"# obs = key_frames[-1]\n",
"key_obs = obs_history[\"obs\"]\n",
"obs = key_obs[-1]\n",
"\n",
"import time\n",
"from pathlib import Path\n",
"\n",
"import imageio\n",
"import yaml\n",
"from PIL import Image\n",
"\n",
"from home_robot.agent.multitask import get_parameters\n",
"from home_robot.mapping.voxel import (\n",
" SparseVoxelMap,\n",
" SparseVoxelMapNavigationSpace,\n",
" plan_to_frontier,\n",
")\n",
"from home_robot.perception import create_semantic_sensor\n",
"from home_robot.perception.encoders import get_encoder\n",
"\n",
"# image_array = np.array(obs.obs.rgb, dtype=np.uint8)\n",
"# print(image_array.shape)\n",
"# # image_array = image_array[..., ::-1]\n",
"# image = Image.fromarray(image_array)\n",
"\n",
"\n",
"parameters = yaml.safe_load(\n",
" Path(\"/home/xiaohan/home-robot/src/home_robot_sim/configs/gpt4v.yaml\").read_text()\n",
")\n",
"config, semantic_sensor = create_semantic_sensor()\n",
"semantic_sensor\n",
"\n",
"# parameters = get_parameters(cfg.agent_parameters)\n",
"encoder = get_encoder(parameters[\"encoder\"], parameters[\"encoder_args\"])\n",
"\n",
"voxel_map = SparseVoxelMap(\n",
" resolution=parameters[\"voxel_size\"],\n",
" local_radius=parameters[\"local_radius\"],\n",
" obs_min_height=parameters[\"obs_min_height\"],\n",
" obs_max_height=parameters[\"obs_max_height\"],\n",
" min_depth=parameters[\"min_depth\"],\n",
" max_depth=parameters[\"max_depth\"],\n",
" pad_obstacles=parameters[\"pad_obstacles\"],\n",
" add_local_radius_points=parameters.get(\"add_local_radius_points\", True),\n",
" remove_visited_from_obstacles=parameters.get(\n",
" \"remove_visited_from_obstacles\", False\n",
" ),\n",
" obs_min_density=parameters[\"obs_min_density\"],\n",
" smooth_kernel_size=parameters[\"smooth_kernel_size\"],\n",
" encoder=encoder,\n",
" use_median_filter=parameters.get(\"use_median_filter\", False),\n",
" median_filter_size=parameters.get(\"median_filter_size\", 5),\n",
" median_filter_max_error=parameters.get(\"median_filter_max_error\", 0.01),\n",
" use_derivative_filter=parameters.get(\"use_derivative_filter\", False),\n",
" derivative_filter_threshold=parameters.get(\"derivative_filter_threshold\", 0.5),\n",
" instance_memory_kwargs={\n",
" \"min_pixels_for_instance_view\": parameters.get(\n",
" \"min_pixels_for_instance_view\", 100\n",
" ),\n",
" \"min_instance_thickness\": parameters.get(\"min_instance_thickness\", 0.05),\n",
" \"min_instance_vol\": parameters.get(\"min_instance_vol\", 1e-6),\n",
" \"max_instance_vol\": parameters.get(\"max_instance_vol\", 10.0),\n",
" \"min_instance_height\": parameters.get(\"min_instance_height\", 0.1),\n",
" \"max_instance_height\": parameters.get(\"max_instance_height\", 1.8),\n",
" \"open_vocab_cat_map_file\": parameters.get(\"open_vocab_cat_map_file\", None), \n",
" },\n",
")\n",
"\n",
"voxel_map.reset()\n",
"# key_obs = [key_obs[5]]\n",
"for idx, obs in enumerate(key_obs):\n",
"\n",
" image_array = np.array(obs.rgb, dtype=np.uint8)\n",
" # print(image_array.shape)\n",
" # # image_array = image_array[..., ::-1]\n",
" image = Image.fromarray(image_array)\n",
" image.show()\n",
"\n",
" obs = semantic_sensor.predict(obs)\n",
" voxel_map.add_obs(obs)\n",
"\n",
"voxel_map.show(\n",
" instances=True,\n",
" height=1000,\n",
" boxes_plot_together=False,\n",
" backend=\"pytorch3d\",\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"voxel_map.get_instances()[3]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "home-robot",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
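The notebook's `SparseVoxelMap` construction leans on one idiom worth noting: required parameters are indexed directly (`parameters["voxel_size"]`) so a missing key fails loudly, while optional ones go through `parameters.get(key, default)` and fall back silently. A tiny stand-in (keys and defaults copied from the cell above; the dict values are hypothetical):

```python
# Hypothetical parameters dict with only some keys set, as loaded from YAML
parameters = {"voxel_size": 0.05, "use_median_filter": True}

kwargs = {
    "resolution": parameters["voxel_size"],  # required: KeyError if absent
    "use_median_filter": parameters.get("use_median_filter", False),
    "median_filter_size": parameters.get("median_filter_size", 5),  # default used
}
print(kwargs)
```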