
Docker container crashing on launch #269

Open
yannick-mayeur opened this issue Jul 5, 2018 · 14 comments

@yannick-mayeur

When I run the Docker container built with the Dockerfile in ./examples/docker/production, I get the following error:

src/settings.js -> dist/settings.js
src/types.js -> dist/types.js
module.js:472
    throw err;
    ^

Error: Cannot find module '../../third-party/settings.json'
    at Function.Module._resolveFilename (module.js:470:15)
    at Function.Module._load (module.js:418:25)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/src/localCloud/spark-server/node_modules/spark-protocol/dist/lib/FirmwareManager.js:31:18)
    at Module._compile (module.js:571:32)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/src/localCloud/spark-server/node_modules/spark-protocol/dist/server/DeviceServer.js:71:24)
    at Module._compile (module.js:571:32)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/src/localCloud/spark-server/node_modules/spark-protocol/dist/index.js:24:21)
    at Module._compile (module.js:571:32)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/src/localCloud/spark-server/dist/defaultBindings.js:19:22)
    at Module._compile (module.js:571:32)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start:prod: `npm run build && node ./dist/main.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] start:prod script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2018-07-05T13_07_47_354Z-debug.log

@jlkalberer

I can't really help here as I haven't used Docker that much. This feature was added by another contributor.

@yannick-mayeur

@haeferer do you know where this issue is coming from?

@haeferer

haeferer commented Jul 6, 2018

I think it's the same problem as #261 or #265: some files are missing (the firmware binaries) because npm run postinstall is not executed during the image build.

Try adding this line to your Dockerfile (I think you used a different checkout ID, so these files are needed now?):

RUN apk add --no-cache git; \
    cd /usr/src/localCloud; \
    git clone https://github.com/Brewskey/spark-server.git; \
    cd  /usr/src/localCloud/spark-server; \
    git checkout c182732cad6075354846f6034076fe987d599994; \
    rm -rf .git; \
    npm install; \
    apk del git; \
    npm run prebuild; \
    npm run build; \
    npm run postinstall   # <- try adding this line to download the binaries before running

@yannick-mayeur

No, I am using c182732 as the checkout ID.
The postinstall line gives me npm ERR! missing script: postinstall

@haeferer

haeferer commented Jul 6, 2018

Correct, in this version the command was "update-firmware": "node ./node_modules/spark-protocol/dist/scripts/update-firmware-binaries",

so please try running that script instead (see the sketch below).

I can't explain why the image crashes, because this is the same checkout I used.
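
For reference, a minimal sketch of the adjusted Dockerfile step, assuming the "update-firmware" script name above replaces postinstall at this checkout:

    RUN cd /usr/src/localCloud/spark-server; \
        npm run update-firmware   # downloads the firmware binaries at this checkout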

@yannick-mayeur

yannick-mayeur commented Jul 6, 2018

I now get the following error:

/usr/src/localCloud/spark-server/node_modules/spark-protocol/dist/scripts/update-firmware-binaries.js:123
  throw new Error('You need to set up a .env file with auth credentials');

But I have already set up a .env file with the credentials.

I don't understand why either.
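
One possible explanation (an assumption, not confirmed in this thread): the .env file exists on the host but is never copied into the image, so the script cannot see it at build time. A minimal sketch of a Dockerfile step that would make it available, assuming the script loads .env from the spark-server root:

    # Hypothetical: copy the host's .env into the image before running the script
    COPY .env /usr/src/localCloud/spark-server/.env
    RUN cd /usr/src/localCloud/spark-server; \
        npm run update-firmware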

@haeferer

haeferer commented Jul 6, 2018

??? That sounds strange.
Is there no other error visible (e.g. that the checkout of c182732 failed)?

I will try to set up a test, but this could take some time (the project is currently on hold).

@yannick-mayeur

Oh yeah, when scrolling up I see two other errors, though nothing related to the checkout:

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2018-07-06T11_47_37_235Z-debug.log

and

gyp ERR! configure error 
gyp ERR! stack Error: Can't find Python executable "python", you can set the PYTHON env variable.
gyp ERR! stack     at PythonFinder.failNoPython (/usr/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:482:19)
gyp ERR! stack     at PythonFinder.<anonymous> (/usr/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:396:16)
gyp ERR! stack     at F (/usr/lib/node_modules/npm/node_modules/which/which.js:68:16)
gyp ERR! stack     at E (/usr/lib/node_modules/npm/node_modules/which/which.js:80:29)
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/which/which.js:89:16
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/which/node_modules/isexe/index.js:42:5
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/which/node_modules/isexe/mode.js:8:5
gyp ERR! stack     at FSReqWrap.oncomplete (fs.js:114:15)
gyp ERR! System Linux 4.15.0-24-generic
gyp ERR! command "/usr/bin/node" "/usr/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /usr/src/localCloud/spark-server/node_modules/dtrace-provider
gyp ERR! node -v v7.10.1
gyp ERR! node-gyp -v v3.6.0
gyp ERR! not ok 

@haeferer

haeferer commented Jul 6, 2018

This sounds strange (maybe the Alpine image, pinned at version 7, has changed).

If you look at my last commit da7d9ad, I changed the Dockerfile from Alpine Node 6 to Alpine Node 7, and all of the gyp and Python dependencies could be removed (at that time).

Now the same dependencies are missing again?

Are you using the correct Dockerfile (without any changes)? Are you using Docker on Windows or Linux?
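
If the build dependencies really did disappear from the base image, the usual Alpine workaround (a sketch, not something from the committed Dockerfile) is to install the toolchain just for the native-module build and remove it afterwards:

    # install the toolchain node-gyp needs, build, then drop it to keep the image small
    RUN apk add --no-cache --virtual .build-deps python make g++; \
        cd /usr/src/localCloud/spark-server; \
        npm install; \
        apk del .build-deps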

@yannick-mayeur

I am using the correct Dockerfile with only the update-firmware line added.

I am using Docker on Linux.

@yannick-mayeur

I managed to overcome the initial error by creating a settings.json file in node_modules/spark-protocol/third-party with just [] in it; that is all the file contains on my machine (a one-line version of the workaround follows).
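
For anyone else hitting this, a minimal sketch of that workaround as a Dockerfile step (the path assumes the layout used in this thread):

    RUN echo '[]' > /usr/src/localCloud/spark-server/node_modules/spark-protocol/third-party/settings.json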

I have a new error, though. I get the following error in my Docker container, but not when running the server "normally":

[2018-07-10T10:51:14.918Z] ERROR: Handshake.js/252 on 1c001e71ca9a: Handshake failed
    Error: handshake data decryption failed. You probably have incorrect server key for device
        at Handshake._callee4$ (/usr/src/localCloud/spark-server/node_modules/spark-protocol/dist/lib/Handshake.js:298:21)
        at tryCatch (/usr/src/localCloud/spark-server/node_modules/regenerator-runtime/runtime.js:65:40)
        at Generator.invoke [as _invoke] (/usr/src/localCloud/spark-server/node_modules/regenerator-runtime/runtime.js:299:22)
        at Generator.prototype.(anonymous function) [as next] (/usr/src/localCloud/spark-server/node_modules/regenerator-runtime/runtime.js:117:21)
        at step (/usr/src/localCloud/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:17:30)
        at /usr/src/localCloud/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:35:14
        at new Promise (<anonymous>)
        at new F (/usr/src/localCloud/spark-server/node_modules/core-js/library/modules/_export.js:35:28)
        at Handshake.<anonymous> (/usr/src/localCloud/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:14:12)
        at Handshake._readDeviceHandshakeData (/usr/src/localCloud/spark-server/node_modules/spark-protocol/dist/lib/Handshake.js:339:20)
        at Handshake._callee2$ (/usr/src/localCloud/spark-server/node_modules/spark-protocol/dist/lib/Handshake.js:180:26)
        at tryCatch (/usr/src/localCloud/spark-server/node_modules/regenerator-runtime/runtime.js:65:40)
        at Generator.invoke [as _invoke] (/usr/src/localCloud/spark-server/node_modules/regenerator-runtime/runtime.js:299:22)
        at Generator.prototype.(anonymous function) [as next] (/usr/src/localCloud/spark-server/node_modules/regenerator-runtime/runtime.js:117:21)
        at step (/usr/src/localCloud/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:17:30)
        at /usr/src/localCloud/spark-server/node_modules/babel-runtime/helpers/asyncToGenerator.js:28:13
        at process._tickCallback (internal/process/next_tick.js:68:7)
    --
    logInfo: {
      "cache_key": "_28",
      "deviceID": null,
      "ip": "::ffff:141.51.115.146"
    }
[2018-07-10T10:51:14.919Z] ERROR: Device.js/252 on 1c001e71ca9a: Device disconnected (disconnectCounter=1, message={})
    logInfo: {
      "cache_key": "_28",
      "deviceID": ""
    }

@jlkalberer

@yannick-mayeur - that really sounds like you didn't set the correct server key before running particle keys doctor ***
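
For context, a sketch of the usual Particle CLI sequence (SERVER_IP and YOUR_DEVICE_ID are placeholders; the key file name assumes the spark-server default):

    # point the device at the local cloud's public server key
    particle keys server default_key.pub.pem SERVER_IP
    # then regenerate and upload the device's own keys
    particle keys doctor YOUR_DEVICE_ID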

@yannick-mayeur

I don't know why it behaves like that, but when I restart the Docker container it works. I was able to reproduce the bug with another Docker image.

@jlkalberer

@yannick-mayeur - I'm guessing we've fixed this with recent changes to the way we pull system firmware.

I'm not sure if the Docker image needs to be updated.
