- Emilio Perez Juarez (EPJ), Gary Yendell (GY), Joshua Einstein Curtis (JEC), Oliver Copping (OC), Tom Cobb (TC), Erico Nogueira Rolim (ENR), James Souter (JS), Qun Zhang (QZ), Jakub Wlodek (JW), Tomasz Brys (TB)
- ENR: plans to use it in the future; has mapped out ideas, but no actual work done yet; very interested in getting it working
- ACTION: Raise an issue
JW: something similar to Sumo – YAML files, with Ansible handling the downloading and compiling of required modules, replacing the configure folder. An Ansible role generates the startup scripts. Binaries can be reused if multiple IOCs on the same server use them. The binary can be built either on the host or on a build server. Version controlled. (See the sketch after the links below.)
TC: very similar to Diamond – we have a generic IOC and create the binary once, with scripts to install modules (asyn, areaDetector, etc.); planning to move to Ansible; TC to compare notes with JW about this. Phoebus .bob files and ophyd devices are configured from the same YAML files.
JW: the Ansible role builds the dependency tree in place. Most straightforward to completely remove the configure directory.
TC: Diamond build is in containers
ENR: Less coupled. We use containers and makeBaseApp.pl, with a project called epics-in-docker (mostly via podman) that packages known-to-work versions of EPICS modules; we try to keep using known working versions. People have templates or a compose file with everything they need.
EPJ: How do you specify dbd files etc?
ENR: done manually, some things handled in src/Makefile
EPJ: EPNix may be interesting to explore
- Sumo
- NSLS-II Ansible roles
- ibek in containers
- https://github.com/cnpem/epics-in-docker
- https://github.com/epics-extensions/EPNix
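
For reference, a toy sketch of the YAML-driven idea JW and TC described – a version-pinned module list replacing a hand-maintained configure/RELEASE. The modules.yml layout, module names, and paths here are invented for illustration and are not taken from Sumo, the NSLS-II roles, or ibek:

```python
#!/usr/bin/env python3
"""Toy illustration: turn a YAML module list into a configure/RELEASE fragment.

The real tools discussed (Sumo, NSLS-II Ansible roles, ibek) are far more
capable; this only shows the shape of the idea.
"""
import yaml  # PyYAML

# Hypothetical input, e.g. modules.yml:
#   support: /epics/support
#   modules:
#     - name: ASYN
#       version: R4-44-2
#     - name: AREA_DETECTOR
#       version: R3-12-1

def release_fragment(spec: dict) -> str:
    lines = [f"SUPPORT={spec['support']}"]
    for mod in spec["modules"]:
        # Each module resolves to SUPPORT/<name>/<version>, so binaries built
        # against the same pinned versions can be shared between IOCs on one server.
        lines.append(f"{mod['name']}=$(SUPPORT)/{mod['name'].lower()}/{mod['version']}")
    return "\n".join(lines)

if __name__ == "__main__":
    with open("modules.yml") as f:
        print(release_fragment(yaml.safe_load(f)))
```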
-
areaDetector:
- Issues:
- Improve documentation building
  - ENR: we find Markdown easier, but it is semantically limited, e.g. for complicated tables; RST has that advantage
  - JEC: Is it worth changing if it currently works?
  - TC: we like MyST; we use Markdown for the vast majority and RST when required. MyST is a plugin for Sphinx (see the conf.py sketch after this list).
- Adding Index waveform and calibration PV
  - PVs for user-defined pixel size, separate for X and Y, etc., for the benefit of Phoebus (see the axis sketch after this list)
  - JEC: seems more like a calculated profile PV, or EGU as a new variable. We don't currently have a profile feature outside of the plugins
  - JW: think we already get this for free with the display widget in CSS/Phoebus
  - ENR: profiles can be diagonal when using slits (sp?)
  - JW: if they are just read-only PVs, it is reasonable to add them; maybe just add pixel size and have the EGU field be the units. All the drivers would have to be aware of this change; it may have to be passed in via the constructor or an IOC shell command. Most vendor SDKs don't have a pixel_size command – it would have to be checked in the manual and added to the IOC/driver manually
  - EPJ: it could be autosaved. Have to consider how this works with ROIs
  - JW: Phoebus could handle most of the processing of these PVs, rather than the IOC
  - GY: ADDriver doesn't have pixel size PVs already, but Eiger does
  - JEC & JW: "pixel size" could be misleading (it may be read as the number of bytes per pixel); maybe pixel_spacing?
- change "NDAttributes.xsd" to support .xml containing macro characters, $, (, )
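
Since MyST came up: it is enabled as a Sphinx extension in conf.py, so Markdown and RST sources can coexist in one docs tree. A minimal sketch (the project name here is a placeholder):

```python
# docs/conf.py -- minimal Sphinx configuration with MyST enabled
project = "areaDetector"  # placeholder

extensions = [
    "myst_parser",  # lets Sphinx parse Markdown (MyST) alongside RST
]

# Both .rst and .md sources are picked up in the same build
source_suffix = {
    ".rst": "restructuredtext",
    ".md": "markdown",
}
```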
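To make the index-waveform/calibration idea concrete: a client (or a Phoebus rule) could derive a physical axis from a user-set per-axis spacing, which is all the proposed read-only PVs would need to support. A hedged sketch – the names and values are invented, not from any driver:

```python
import numpy as np

def physical_axis(n_pixels: int, pixel_spacing: float, offset: float = 0.0) -> np.ndarray:
    """Map pixel indices 0..n-1 to physical positions (units given by an EGU field).

    pixel_spacing is per-axis (separate values for X and Y); an ROI just
    shifts the offset and shortens n_pixels.
    """
    return offset + pixel_spacing * np.arange(n_pixels)

# e.g. a 1024-pixel profile with 0.075 mm spacing, centred on zero
x_mm = physical_axis(1024, 0.075, offset=-38.4)
```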
-
ADCore:
- Issues:
- PRs:
- adding video compression (h264)
  - EPJ: may not yet be threadsafe; two instances of the same plugin could cause a failure. Wait until more work is done. Last touched 9 months ago
  - ENR: I have some reservations about the implementation. Concern: how does compressing a video stream work when clients connect and disconnect at different points, e.g. before and after starting?
  - TC: What happens if we don't get the keyframe?
  - EPJ: Clear the context and start from the beginning at every keyframe
  - TC: When keyframes are compressed, is the delta from that keyframe still useful, and where do we get the header info?
  - ENR: See the video compression PR in ADSupport
  - EPJ: Collect a block of frames and compress them? That would increase latency
  - JEC: ffmpeg is required to be installed
  - EPJ: Further review needed
  - ENR: PVA was designed with codec support, but specifically codecs inside a frame. How can we fit video here, and does it make sense?
  - TC: There are already ways to serve an H264 stream; it can be served over HTTP rather than PVA (see the sketch after this list)
  - JW: I prefer that; lz4 and blosc already exist if we want to serve it over EPICS. It is lossy already, so an HTTP stream should be fine
  - TC: Is there a suitable codec to handle variable frame rate and possibly variable image size (less of an issue)?
  - JW: Nvidia may have proprietary codecs
  - OC: There is AV1, but it requires specific hardware
  - ENR: compressing AV1 on a CPU is too slow
  - JEC: we can do H264 with custom timecode handling
  - TC: We don't want the client to buffer/run at a set frame rate, just update when possible
  - JW: H265 is an option
  - JW: Was there a PR to update the ffmpeg server plugin?
  - EPJ: ffmpeg offers a newer API, but there will be another newer one
  - ENR: WebRTC uses VP8; H264 is not supported on Android any more. VP9 may be possible
  - JEC: SMPTE 2110 uses RTP and PTP
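
A minimal sketch of TC's HTTP alternative: pipe raw frames into an ffmpeg subprocess that encodes H264 with low-latency settings and serves an MPEG-TS stream over HTTP. The frame source, size, and URL are made up; the flags are from the standard ffmpeg CLI:

```python
import subprocess
import numpy as np

WIDTH, HEIGHT, FPS = 640, 480, 10

# ffmpeg reads raw 8-bit grayscale frames on stdin, encodes H264, and serves
# MPEG-TS over HTTP (-listen 1 makes ffmpeg itself act as the HTTP server).
ffmpeg = subprocess.Popen(
    [
        "ffmpeg",
        "-f", "rawvideo", "-pixel_format", "gray",
        "-video_size", f"{WIDTH}x{HEIGHT}", "-framerate", str(FPS),
        "-i", "pipe:0",
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-f", "mpegts",
        "-listen", "1", "http://0.0.0.0:8080/stream",
    ],
    stdin=subprocess.PIPE,
)

for _ in range(1000):
    # Stand-in for detector frames
    frame = (np.random.rand(HEIGHT, WIDTH) * 255).astype(np.uint8)
    ffmpeg.stdin.write(frame.tobytes())

ffmpeg.stdin.close()
ffmpeg.wait()
```

Note that a client joining mid-stream can only begin decoding at the next keyframe, which is the connect/disconnect issue ENR and TC raised above.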
-
ADEiger:
-
ADAravis:
- Issues:
- No license?
- Shutter control needs to be added
  - TC: Does he (Mark Rivers) mean a camera shutter or a PV shutter?
  - JW: Usually the PV shutter just works
  - EPJ: more details needed
-
ADGenICam:
-
ffmpegServer:
-
ADViewers:
-
ADPilatus:
-
ADPcoWin:
-
ADVimba:
-
specsAnalyser:
-

EPJ will be leaving Diamond soon – would anyone like to run these meetings? Please email. EPJ has made the areaDetector/collaboration repository with historical notes, including a ./get_issues_and_prs.py script which, given the number of days since the last meeting, automatically generates the issue and PR list from the GitHub API. A rough sketch of the approach is below.
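
The actual script lives in the areaDetector/collaboration repository; this is only an illustration of the approach, not the real get_issues_and_prs.py, using GitHub's search API to list everything updated since the last meeting:

```python
#!/usr/bin/env python3
"""Sketch: list areaDetector issues/PRs updated in the last N days."""
import sys
from datetime import datetime, timedelta, timezone

import requests

days = int(sys.argv[1]) if len(sys.argv) > 1 else 30
since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%d")

# GitHub's search API returns both issues and PRs from one endpoint.
resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": f"org:areaDetector updated:>={since}", "per_page": 100},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["items"]:
    # Search results for PRs carry a "pull_request" key; plain issues do not.
    kind = "PR" if "pull_request" in item else "Issue"
    print(f"{kind}: {item['title']} ({item['html_url']})")
```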