
{on_load_function_failed,prometheus_process_collector} error #13

Open

michaelstalker opened this issue Oct 13, 2018 · 16 comments

@michaelstalker commented Oct 13, 2018

I'm building a Distillery release in a Docker container running Alpine Linux, and am getting an {on_load_function_failed,prometheus_process_collector} error.

Here's some environment information:

  • The bug happens in prometheus_process_collector version 1.4.0. Version 1.3.1 runs without any problems.
  • I've seen this happen with both Elixir 1.6.5 on OTP 20 and Elixir 1.7.3 on OTP 21.
  • The bug happens on Alpine Linux in a Docker container.
  • I'm trying to run the app in a Distillery release. I'm using Distillery 2.0.10.
  • I installed the g++ package in the Docker image so prometheus_process_collector would compile.
  • prometheus_process_collector is a dependency of one application in an umbrella project.

The Docker container crashes when I try running docker run <image ID>. The entrypoint for the Docker container is the Distillery binary, and it's set to run in the foreground. I'm building my dependencies with mix do deps.get, compile.
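The entrypoint wiring looks roughly like this (a sketch; /opt/app and the release name my_app are simplified placeholders for the real layout):

# tail of the release image's Dockerfile
ENTRYPOINT ["/opt/app/bin/my_app"]
CMD ["foreground"]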

The app runs fine when I run it outside of a Docker container in a macOS 10.13.6 environment.

Here's the crash report:

2018-10-13 19:50:41.388715 crash_report        #{label=>{proc_lib,crash},report=>[[{initial_call,{supervisor,kernel,['Argument__1']}},{pid,<0.1197.0>},{registered_name,[]},{error_info,{exit,{on_load_function_failed,prometheus_process_collector},[{init,run_on_load_handlers,0,[]},{kernel,init,1,[{file,"kernel.erl"},{line,212}]},{supervisor,init,1,[{file,"supervisor.erl"},{line,295}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}},{ancestors,[kernel_sup,<0.1171.0>]},{message_queue_len,0},{messages,[]},{links,[<0.1173.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,376},{stack_size,27},{reductions,273}],[]]}
2018-10-13 19:50:41.388764 supervisor_report   #{label=>{supervisor,start_error},report=>[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,{on_load_function_failed,prometheus_process_collector}},{offender,[{pid,undefined},{id,kernel_safe_sup},{mfargs,{supervisor,start_link,[{local,kernel_safe_sup},kernel,safe]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
2018-10-13 19:50:42.392873 crash_report        #{label=>{proc_lib,crash},report=>[[{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.1170.0>},{registered_name,[]},{error_info,{exit,{{shutdown,{failed_to_start_child,kernel_safe_sup,{on_load_function_failed,prometheus_process_collector}}},{kernel,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,138}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}},{ancestors,[<0.1169.0>]},{message_queue_len,1},{messages,[{'EXIT',<0.1171.0>,normal}]},{links,[<0.1169.0>,<0.1167.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,27},{reductions,193}],[]]}
2018-10-13 19:50:42.393766 std_info            #{label=>{application_controller,exit},report=>[{application,kernel},{exited,{{shutdown,{failed_to_start_child,kernel_safe_sup,{on_load_function_failed,prometheus_process_collector}}},{kernel,start,[normal,[]]}}},{type,permanent}]}
{"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{{shutdown,{failed_to_start_child,kernel_safe_sup,{on_load_function_failed,prometheus_process_collector}}},{kernel,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,kernel_safe_sup,{on_load_function_failed,prometheus_process_collector}}},{kernel,sta

I poked around in erl_crash.dump for a bit, but didn't know exactly what to look for. What can I do to help troubleshoot this?

@entone
Copy link

entone commented Nov 1, 2018

I also have this happening with the 1.4 release. I have several other apps on 1.3.1 which work fine. Very similar setup to what @michaelstalker is using.

The app release is built on the elixir:1.6.5-alpine Docker image.

@deadtrickster (Owner)

Maybe it happens because I don't use Alpine? Can someone post a minimal Dockerfile to reproduce?

@entone commented Nov 1, 2018

Here is the relevant part of my build:

# Dockerfile
FROM elixir:1.6.5-alpine as build

# install build dependencies
RUN apk add --update git build-base

# prepare build dir
RUN mkdir /app
WORKDIR /app

# install hex + rebar
RUN mix local.hex --force && \
    mix local.rebar --force

# set build ENV
ENV MIX_ENV=prod

# install mix dependencies
COPY . .

RUN mix do deps.get, deps.compile

# build release
RUN mix release --no-tar --verbose

@jdewar commented Jan 8, 2019

Also ran into this. I have a feeling that #9, #10, #14, and #13 might be related. We noticed a problem when metrics weren't being collected (#14), we had a startup problem whose errors looked like #9 and this one, and finally the suggestion in #10 seemed to fix the startup issue.

@4xposed commented Jan 9, 2019

I am having the same issue using prometheus_process_collector 1.4.0 with Elixir 1.7.4 and OTP 21.2.

Machine where I make the build:

$ uname -a
Linux ship 4.18.11-gentoo #1 SMP Sat Sep 29 19:11:49 -00 2018 x86_64 Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz GenuineIntel GNU/Linux

Machine where I run the build:

$ uname -a
Linux frontend-1 4.14.7-gentoo #1 SMP Tue May 15 19:11:09 -00 2018 x86_64 Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz GenuineIntel GNU/Linux

@deadtrickster (Owner)

I have this working on various Linuxes, macOS, and FreeBSD, so I feel a bit helpless now.
One thing I can suggest: go to c_src on the build machine and run make memory-test.
Grab the binary and try to run it on the deployment machine. Maybe we'll get more revealing errors.
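Roughly (a sketch; the deps/ path assumes the library was fetched as a Mix dependency, and the binary location matches the make output shown later in this thread):

cd deps/prometheus_process_collector/c_src
make memory-test
# the test binary lands in ../_build/memory_test; copy it to the
# deployment machine and run it there:
../_build/memory_test 2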

@4xposed commented Jan 9, 2019

It seems to be an issue with the machine running the release; the release ran without issues on other servers.

Sorry for the trouble, and thank you for your quick reply!

@4xposed commented Jan 10, 2019

Funnily enough, changing my dependencies to:

{:prometheus, "~> 4.0", override: true},
{:prometheus_process_collector, "~> 1.3.1"}

fixed the issue on the one server where the release would crash.

@frepond commented May 6, 2019

Hi, the same happens here. I'm not able to upgrade to 1.4.x running on Docker with Alpine Linux. It works with 1.3.1 without problems, though.

@Ebtoulson commented May 29, 2019

Not sure if this is going to help anyone, but these are the results of the memory test.

Dockerfile used:

FROM elixir:1.8.2-alpine AS base

ENV DOCKER_APP_ROOT=/prometheus_process_collector MIX_ENV=prod

WORKDIR $DOCKER_APP_ROOT

RUN apk add --no-cache \
  ca-certificates \
  g++ \
  gcc \
  git \
  jq \
  make \
  musl-dev \
  valgrind

COPY . $DOCKER_APP_ROOT/

ENTRYPOINT ["/prometheus_process_collector/bin/checks.sh"]

Output of docker run:

Loading files...
Loading src/prometheus_process_collector.erl
Applying rules...
# src/prometheus_process_collector.erl [OK]
Loading files...
Applying rules...
Loading files...
Loading rebar.config
Applying rules...
# rebar.config [OK]
Loading files...
Loading elvis.config
Applying rules...
# elvis.config [OK]
===> Package rebar3_archive_plugin-0.0.2 not found. Fetching registry updates and trying again...
===> Updating package registry...
===> Writing registry to /root/.cache/rebar3/hex/default/registry
===> Generating package index...
===> [appsignal:1.6.2], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.6-beta.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.0], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.7.0-alpha.4], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.0-beta.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.3], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.7.0-alpha.3], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [prometheus_httpd:2.1.10], Bad dependency version for prometheus: ~> 3.5 or ~> 4.2.
===> [appsignal:1.7.0-alpha.2], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.7.0-alpha.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.5], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.6-beta.2], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.6], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.7], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.4], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.0-alpha.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> Writing index to /root/.cache/rebar3/hex/default/packages.idx
===> Fetching rebar3_archive_plugin ({pkg,<<"rebar3_archive_plugin">>,
                                      <<"0.0.2">>})
===> Downloaded package, caching at /root/.cache/rebar3/hex/default/packages/rebar3_archive_plugin-0.0.2.tar
===> Fetching rebar3_elvis_plugin ({git,
                                       "https://github.com/deadtrickster/rebar3_elvis_plugin.git",
                                       "master"})
===> WARNING: It is recommended to use {branch, Name}, {tag, Tag} or {ref, Ref}, otherwise updating the dep may not work as expected.
===> Fetching katana_code ({pkg,<<"katana_code">>,<<"0.1.0">>})
===> Downloaded package, caching at /root/.cache/rebar3/hex/default/packages/katana_code-0.1.0.tar
===> Fetching zipper ({pkg,<<"zipper">>,<<"1.0.1">>})
===> Downloaded package, caching at /root/.cache/rebar3/hex/default/packages/zipper-1.0.1.tar
===> Fetching aleppo ({pkg,<<"inaka_aleppo">>,<<"1.0.0">>})
===> Downloaded package, caching at /root/.cache/rebar3/hex/default/packages/inaka_aleppo-1.0.0.tar
===> Compiling aleppo
_build/default/plugins/aleppo/src/aleppo.erl:6: Warning: record ale_context has field(s) without type information
_build/default/plugins/aleppo/src/aleppo.erl:12: Warning: missing specification for function process_file/1
_build/default/plugins/aleppo/src/aleppo.erl:15: Warning: missing specification for function process_file/2
_build/default/plugins/aleppo/src/aleppo.erl:25: Warning: missing specification for function process_tokens/1
_build/default/plugins/aleppo/src/aleppo.erl:32: Warning: missing specification for function process_tokens/2
_build/default/plugins/aleppo/src/aleppo.erl:307: Warning: missing specification for function scan_file/1

===> Compiling zipper
===> Compiling katana_code
===> Compiling rebar3_elvis_plugin
===> Compiling rebar3_archive_plugin
===> Linking _build/default/plugins/rebar3_archive_plugin to _build/test/plugins/rebar3_archive_plugin
===> Linking _build/default/plugins/rebar3_elvis_plugin to _build/test/plugins/rebar3_elvis_plugin
===> Linking _build/default/plugins/katana_code to _build/test/plugins/katana_code
===> Linking _build/default/plugins/zipper to _build/test/plugins/zipper
===> Linking _build/default/plugins/aleppo to _build/test/plugins/aleppo
===> Verifying dependencies...
===> Fetching prometheus ({pkg,<<"prometheus">>,<<"4.2.2">>})
===> Downloaded package, caching at /root/.cache/rebar3/hex/default/packages/prometheus-4.2.2.tar
===> Linking _build/default/lib/prometheus to _build/test/lib/prometheus
===> Compiling prometheus
===> Compiling prometheus_process_collector
make: Entering directory '/prometheus_process_collector/c_src'
g++ -O3 -finline-functions -fPIC -I /usr/local/lib/erlang/erts-10.3.5.1/include/ -I /usr/local/lib/erlang/lib/erl_interface-3.11.3/include -std=c++11 -Wall  -c -o prometheus_process_collector_nif.o prometheus_process_collector_nif.cc
g++ -O3 -finline-functions -fPIC -I /usr/local/lib/erlang/erts-10.3.5.1/include/ -I /usr/local/lib/erlang/lib/erl_interface-3.11.3/include -std=c++11 -Wall  -c -o prometheus_process_info_linux.o prometheus_process_info_linux.cc
cc prometheus_process_collector_nif.o prometheus_process_info_linux.o -shared -L /usr/local/lib/erlang/lib/erl_interface-3.11.3/lib -lerl_interface -lei -lstdc++ -o /prometheus_process_collector/c_src/../priv/prometheus_process_collector.so
make: Leaving directory '/prometheus_process_collector/c_src'
===> Performing EUnit tests...
.................

Top 10 slowest tests (0.000 seconds, 0.0% of total time):
  prometheus_process_collector_tests:test_process_collector/1:15
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:23
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:19
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:27
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:25
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:24
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:21
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:17
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:29
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:20
    0.000 seconds

Finished in 1.801 seconds
17 tests, 0 failures
g++ -D__STANDALONE_TEST__ -std=c++11 -Wall -L /usr/local/lib/erlang/lib/erl_interface-3.11.3/lib -lerl_interface -lei -lstdc++ prometheus_process_info_mt.cc prometheus_process_info_linux.cc -o /prometheus_process_collector/c_src/../_build/memory_test
valgrind --leak-check=full --error-exitcode=1 /prometheus_process_collector/c_src/../_build/memory_test 2
==329== Memcheck, a memory error detector
==329== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==329== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==329== Command: /prometheus_process_collector/c_src/../_build/memory_test 2
==329==
1
1
==329==
==329== HEAP SUMMARY:
==329==     in use at exit: 73,859 bytes in 4 blocks
==329==   total heap usage: 46 allocs, 42 frees, 101,311 bytes allocated
==329==
==329== LEAK SUMMARY:
==329==    definitely lost: 0 bytes in 0 blocks
==329==    indirectly lost: 0 bytes in 0 blocks
==329==      possibly lost: 0 bytes in 0 blocks
==329==    still reachable: 73,859 bytes in 4 blocks
==329==         suppressed: 0 bytes in 0 blocks
==329== Reachable blocks (those to which a pointer was found) are not shown.
==329== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==329==
==329== For counts of detected and suppressed errors, rerun with: -v
==329== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

@Stratus3D

Used @Ebtoulson's Dockerfile and got the same output:

Loading files...
Loading src/prometheus_process_collector.erl
Applying rules...
# src/prometheus_process_collector.erl [OK]
Loading files...
Applying rules...
Loading files...
Loading rebar.config
Applying rules...
# rebar.config [OK]
Loading files...
Loading elvis.config
Applying rules...
# elvis.config [OK]
===> Package rebar3_archive_plugin-0.0.2 not found. Fetching registry updates and trying again...
===> Updating package registry...
===> Writing registry to /root/.cache/rebar3/hex/default/registry
===> Generating package index...
===> [appsignal:1.6.2], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.6-beta.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.0], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.7.0-alpha.4], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.0-beta.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.3], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.7.0-alpha.3], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [prometheus_httpd:2.1.10], Bad dependency version for prometheus: ~> 3.5 or ~> 4.2.
===> [appsignal:1.7.0-alpha.2], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.7.0-alpha.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.5], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.6-beta.2], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.6], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.7], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.4], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> [appsignal:1.6.0-alpha.1], Bad dependency version for httpoison: ~> 0.11 or ~> 1.0.
===> Writing index to /root/.cache/rebar3/hex/default/packages.idx
===> Verifying dependencies...
===> Compiling prometheus_process_collector
make: Entering directory '/prometheus_process_collector/c_src'
g++ -O3 -finline-functions -fPIC -I /usr/local/lib/erlang/erts-10.3.5.1/include/ -I /usr/local/lib/erlang/lib/erl_interface-3.11.3/include -std=c++11 -Wall  -c -o prometheus_process_collector_nif.o prometheus_process_collector_nif.cc
g++ -O3 -finline-functions -fPIC -I /usr/local/lib/erlang/erts-10.3.5.1/include/ -I /usr/local/lib/erlang/lib/erl_interface-3.11.3/include -std=c++11 -Wall  -c -o prometheus_process_info_linux.o prometheus_process_info_linux.cc
cc prometheus_process_collector_nif.o prometheus_process_info_linux.o -shared -L /usr/local/lib/erlang/lib/erl_interface-3.11.3/lib -lerl_interface -lei -lstdc++ -o /prometheus_process_collector/c_src/../priv/prometheus_process_collector.so
make: Leaving directory '/prometheus_process_collector/c_src'
===> Failed to restore /prometheus_process_collector/_build/test/lib/prometheus_process_collector/.rebar3/erlcinfo file. Discarding it.

===> Performing EUnit tests...
.................

Top 10 slowest tests (0.000 seconds, 0.0% of total time):
  prometheus_process_collector_tests:test_process_collector/1:15
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:23
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:19
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:27
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:25
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:24
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:21
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:17
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:29
    0.000 seconds
  prometheus_process_collector_tests:test_process_collector/1:20
    0.000 seconds

Finished in 0.747 seconds
17 tests, 0 failures
g++ -D__STANDALONE_TEST__ -std=c++11 -Wall -L /usr/local/lib/erlang/lib/erl_interface-3.11.3/lib -lerl_interface -lei -lstdc++ prometheus_process_info_mt.cc prometheus_process_info_linux.cc -o /prometheus_process_collector/c_src/../_build/memory_test
valgrind --leak-check=full --error-exitcode=1 /prometheus_process_collector/c_src/../_build/memory_test 2
==272== Memcheck, a memory error detector
==272== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==272== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==272== Command: /prometheus_process_collector/c_src/../_build/memory_test 2
==272== 
1
1
==272== 
==272== HEAP SUMMARY:
==272==     in use at exit: 73,859 bytes in 4 blocks
==272==   total heap usage: 46 allocs, 42 frees, 101,925 bytes allocated
==272== 
==272== LEAK SUMMARY:
==272==    definitely lost: 0 bytes in 0 blocks
==272==    indirectly lost: 0 bytes in 0 blocks
==272==      possibly lost: 0 bytes in 0 blocks
==272==    still reachable: 73,859 bytes in 4 blocks
==272==         suppressed: 0 bytes in 0 blocks
==272== Reachable blocks (those to which a pointer was found) are not shown.
==272== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==272== 
==272== For counts of detected and suppressed errors, rerun with: -v
==272== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

@Stratus3D

From the description of the elixir:<version>-alpine images:

This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use musl libc instead of glibc and friends, so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice.

Maybe this is the problem? It's odd that it compiles without any errors. I would think that if there were an issue with musl libc, I would at least see a warning.
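One way to check would be to run ldd against the compiled NIF inside the container; any "not found" lines would point at shared libraries missing from the image (a sketch; the find is a placeholder for wherever the .so lands in your release):

docker run --rm --entrypoint /bin/sh <image ID> -c \
  'find / -name prometheus_process_collector.so -exec ldd {} +'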

@Stratus3D

I realized the Dockerfile I was using ran the command mkdir /lib64 && ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2 after installing the musl-dev package. I'm not sure why that was needed for the project; it seems like it might have been trying to fool something into thinking glibc was present. Either way, it seems a bit hacky to me.

I removed that code and added the glibc Alpine package, and now everything works fine. Does anyone have any experience using glibc and musl together? As best I can tell everything is working fine now, but I'm not sure if there is a better way of doing this.
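For context, "the glibc Alpine package" usually means the sgerrand community package, installed roughly like this (a sketch; the version pin is arbitrary, and this may differ from what was actually used here):

# install the sgerrand glibc compatibility package on Alpine
RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.30-r0/glibc-2.30-r0.apk && \
    apk add glibc-2.30-r0.apk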

@ghost commented Jul 5, 2020

Are there any updates on this issue? The issue is still present with Elixir v1.10.3, OTP 22 [erts-10.7.2.1], and the alpine:latest Docker image. I'm not comfortable adding the glibc package when musl is already present, unless there have been no issues so far (@Stratus3D).

@Stratus3D

@CharlotteDunois if I remember correctly, I ended up removing this package from my application, as I wasn't actively using the metrics it provided.

@ghost commented Jul 7, 2020

I've tested around and got it working with Alpine.

You need to make sure that libgcc and libstdc++ are present in the final image. If these aren't present, the Erlang VM will crash with the mentioned error.
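A sketch of what that means for a multi-stage build like the ones above (base image tag, release name my_app, and paths are placeholders; ncurses-libs is typically needed by ERTS as well):

# runtime stage
FROM alpine:3.12
RUN apk add --no-cache libstdc++ libgcc ncurses-libs
WORKDIR /app
COPY --from=build /app/_build/prod/rel/my_app ./
CMD ["bin/my_app", "foreground"]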
