Buildbot
- The build infrastructure is composed of a try bot and a build bot.
- The try bot covers linting, compiling, and essential test cases; its purpose is to give reviewers information beyond the code itself.
- The build bot runs the full test suite; its purpose is to detect regressions.
- Code is hosted at https://github.com/otcshare/build-infrastructure; the current working branch is 1469_work.
- Try bot waterfall: http://wrt-buildbot.bj.intel.com/buildbot/waterfall
- Username/password: ask [email protected] for the credentials
- Detailed worker information
| Host name/IP | Role | GUI | How to start |
|------------------------------|-----------------|--------|-----------------------------------------------|
| wrtvms.bj.intel.com | VMWare server | vSphere | N/A |
| wrt-buildbot.bj.intel.com | Trybot master | VNC | cd ~/masters/master.tryserver.wrt && make restart |
| 10.240.192.170 | xwalk_linux | VNC | cd ~/build/ubuntu-try-1/build-infrastructure/slave && ./run_linux_try.sh ubuntu-try-1 |
| | content_linux | VNC | cd ~/build/ubuntu-try-11/build-infrastructure/slave && ./run_linux_try.sh ubuntu-try-11 |
| 10.240.192.172 | xwalk_linux | VNC | cd ~/build/ubuntu-try-2/build-infrastructure/slave && ./run_linux_try.sh ubuntu-try-2 |
| | content_linux | VNC | cd ~/build/ubuntu-try-12/build-infrastructure/slave && ./run_linux_try.sh ubuntu-try-12 |
| 10.240.192.171 | xwalk_win | RDP | In D:\build-infrastructure\slave, click run_win_try1.bat |
| 10.240.192.173 | xwalk_win | RDP | In D:\build-infrastructure\slave, click run_win_try2.bat |
| 10.240.192.181 | content_win | RDP | In D:\build-infrastructure\slave, click run_win_try11.bat |
| 10.240.192.183 | content_win | RDP | In D:\build-infrastructure\slave, click run_win_try12.bat |
| 10.240.192.174 | content_android | VNC | cd ~/build/android-try-12/build-infrastructure/slave && ./run_linux_try.sh android-try-12 |
- Build bot waterfall: http://wrt-build.sh.intel.com/buildbot/waterfall
- Username/password: ask [email protected] for the credentials
- Detailed worker information
| Host name | Role | GUI | How to start |
|-------------------------------------------|---------------------|--------|-----------------------------------------------|
| wrtvms.sh.intel.com | VMWare server | vSphere | N/A |
| wrt-build.sh.intel.com | Buildbot master | VNC | cd ~/masters/master.wrt && make restart |
| builder-ubuntu1.sh.intel.com | dev-wrt-linux | VNC | cd ~/build/wrt-ubuntu-builder/build-infrastructure/slave && ./run_dev_wrt_linux_build.sh |
| | dev-content-linux | | cd ~/build/content-ubuntu-builder/build-infrastructure/slave && ./run_dev_content_linux_build.sh |
| builder-ubuntu32-1.sh.intel.com | dev-wrt-linux32 | VNC | cd ~/build/wrt-ubuntu32-builder/build-infrastructure/slave && ./run_dev_wrt_linux32_build.sh |
| | dev-content-linux32 | | cd ~/build/content-ubuntu32-builder/build-infrastructure/slave && ./run_dev_content_linux_build.sh |
| builder-win1.sh.intel.com (10.239.97.235) | dev-wrt-win | RDP | In D:\build-infrastructure\slave, click run_dev_wrt_win_build.bat |
| builder-win2.sh.intel.com (10.239.97.236) | dev-content-win | RDP | In D:\build-infrastructure\slave, click run_dev_content_win_build.bat |
| builder-android1.sh.intel.com | dev-content-android | VNC | cd ~/build/content-android-builder/build-infrastructure/slave && ./run_dev_content_android_build.sh |
| | dev-wrt-android | VNC | cd ~/build/wrt-android-builder/build-infrastructure/slave && ./run_dev_wrt_android_build.sh |
| builder-aura1.sh.intel.com | dev-wrt-aura | VNC | cd ~/build/wrt-aura-builder/build-infrastructure/slave && ./run_dev_wrt_aura_build.sh |
| builder-tizen1.sh.intel.com | dev-wrt-tizen | VNC | cd ~/build/wrt-tizen-builder/build-infrastructure/slave && ./run_dev_wrt_tizen_build.sh |
- The OS disk should be kept separate from the slave disk.
- The OS disk contains the OS plus the tools required for building; it is best located at /vmfs/volumes/storage/ (a 1 TB RAID1 array).
- The slave disk contains the build directories (build-infrastructure, depot_tools); it is best located at /vmfs/volumes/ssd-raid0/ (SSDs have much better access times, which suits Chromium's huge code base).
- One VM -> one OS disk -> multiple slave disks (depending on how many slaves you want to deploy on one VM).
- Shut down the VM before copying any disk files.
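If you prefer to create a slave disk from the ESXi shell instead of the vSphere client, a minimal sketch (vmkfstools is the standard ESXi tool; the VM directory and disk file name here are only examples, and the new vmdk still has to be attached to the VM via Edit Settings -> Add -> Hard Disk as described below):

```sh
# Create a thin-provisioned 20 GB slave disk on the SSD datastore.
# The VM directory and disk name are examples; adjust to your own VM.
ssh [email protected]
mkdir -p /vmfs/volumes/ssd-raid0/<your_vm_name>
vmkfstools -c 20G -d thin /vmfs/volumes/ssd-raid0/<your_vm_name>/slave1.vmdk
```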
- Log in to wrtvms.bj.intel.com via the vSphere client (if it is not installed, get it from https://wrtvms.bj.intel.com/).
- Right-click the server name (wrtvms.bj.intel.com) -> New Virtual Machine.
- OS disk size: Linux: 10 GB, Windows: 40 GB.
- VM location: /vmfs/volumes/storage/<your_vm_name>
- Set up the build environment by following Build-Crosswalk.
- An easier way is to copy the OS disk of an existing VM. For example, to create a new VM named buildbot-slave-win100 with the Win8 SDK:
ssh [email protected]
cd /vmfs/volumes/storage/buildbot-slave-win100
cp /vmfs/volumes/storage/buildbot-slave-win2/Win7OS.vmdk .
cp /vmfs/volumes/storage/buildbot-slave-win2/Win7OS-flat.vmdk .
Then edit the .vmx file and correct the OS disk filename, or rename the copied vmdk (see the sketch below).
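If you edit the .vmx rather than renaming the vmdk, the relevant entry looks roughly like this (a sketch assuming the OS disk is attached as scsi0:0 and that the .vmx file is named after the VM; check your own .vmx for the actual controller/slot):

```sh
# In /vmfs/volumes/storage/buildbot-slave-win100/buildbot-slave-win100.vmx
# (filename is an assumption), point the OS disk entry at the copied vmdk:
scsi0:0.fileName = "Win7OS.vmdk"
```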
- Log in to wrtvms.bj.intel.com via the vSphere client (if it is not installed, get it from https://wrtvms.bj.intel.com/).
- Right-click the slave VM name -> Edit Settings -> Add -> Hard Disk, then restart the VM.
- Suggested slave disk size: Linux: 20 GB, Windows: 45 GB.
- Mount the newly added disk, e.g. the second disk (which is the first slave):
- For Linux:
sudo mkfs.ext4 /dev/sdb
Edit /etc/fstab, add the following line, and reboot (or mount it without rebooting, as shown after this list):
/dev/sdb /home/mrbuild/work/beta ext4 defaults,user_xattr 0 2
- For Windows
* Open Computer Management (Menu -> right-click Computer -> Manage)
* Storage -> Disk Management
* Format the newly added disk as NTFS
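To check the new Linux slave disk without rebooting (an optional sketch; the commands are standard util-linux/coreutils and the mount point is the one from the fstab line above):

```sh
sudo mount -a                     # mount everything listed in /etc/fstab
df -h /home/mrbuild/work/beta     # confirm the new disk is mounted at the slave directory
```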
- Prepare build-infrastructure
git clone [email protected]:otcshare/build-infrastructure
Create build-infrastructure/site_config/.bot_password (see the sketch below).
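A minimal sketch of creating the password file, assuming the actual bot password is obtained from the build administrators (the value below is only a placeholder):

```sh
cd build-infrastructure/site_config
# Replace the placeholder with the real bot password from the buildbot admins.
echo 'BOT_PASSWORD_PLACEHOLDER' > .bot_password
chmod 600 .bot_password   # keep the credential readable only by the build user
```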
- Start the slave as shown in the tables above.
The build master hosts an HTTP/HTTPS proxy (polipo) for the slaves, because the slaves cannot pass the lab firewall directly.
- Edit /etc/polipo/config (install polipo first if it is not present):
proxyAddress = "0.0.0.0" # IPv4 only
allowedClients = 127.0.0.1, 10.240.192.0/24
socksParentProxy = "proxy.jf.intel.com:1080"
socksProxyType = socks5
dnsUseGethostbyname = yes
sudo /etc/init.d/polipo restart
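Slaves can then point at the master as their HTTP proxy; a sketch assuming polipo listens on its default port 8123 and runs on the try master (substitute the real master hostname if it differs):

```sh
# Assumes polipo on the master listens on its default port 8123.
export http_proxy=http://wrt-buildbot.bj.intel.com:8123
export https_proxy=http://wrt-buildbot.bj.intel.com:8123
curl -I https://chromium.googlesource.com/   # quick connectivity check through the proxy
```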
The build master also hosts a SOCKS proxy (srelay) for the slaves.
- Download srelay to $HOME/srelay/.
- Edit $HOME/srelay/srelay.conf:
0.0.0.0 any 10.7.211.16 1080
sudo $HOME/srelay/srelay-0.4.8b5/srelay -a n -f -c $HOME/srelay/srelay.conf
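A slave can check the SOCKS relay like this (a sketch assuming srelay listens on the default SOCKS port 1080 and runs on the build master; substitute the actual hostname and port):

```sh
# Assumes srelay listens on the default SOCKS port 1080 on the build master.
curl --socks5-hostname wrt-build.sh.intel.com:1080 -I https://chromium.googlesource.com/
```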