From 65a63dee938fcb34cfa918bceae3baff4f033a0c Mon Sep 17 00:00:00 2001 From: Daniel Stenberg Date: Fri, 23 Aug 2024 00:16:20 +0200 Subject: [PATCH] misc: address badwords complaints --- build/autotools.md | 4 +-- cmdline/copyas.md | 5 +++- cmdline/exitcode.md | 41 +++++++++++++++--------------- cmdline/urls/ftptype.md | 4 +-- cmdline/urls/globbing.md | 4 +-- helpers/sharing.md | 7 +++-- http/post/multipart.md | 2 +- http/put.md | 2 +- http/redirects.md | 8 +++--- http/response.md | 10 ++++---- install/container.md | 2 +- install/linux.md | 2 +- libcurl/globalinit.md | 12 ++++----- project/comm.md | 10 ++++---- protocols/http.md | 2 +- transfers/conn/keepalive.md | 4 +-- transfers/drive/multi-socket.md | 2 +- transfers/drive/multi.md | 4 +-- usingcurl/connections/keepalive.md | 8 +++--- usingcurl/connections/name.md | 4 +-- 20 files changed, 69 insertions(+), 68 deletions(-) diff --git a/build/autotools.md b/build/autotools.md index ae6787163e..2396cc7018 100644 --- a/build/autotools.md +++ b/build/autotools.md @@ -60,8 +60,8 @@ One of the differences between linking with a static library compared to linking with a shared one is in how shared libraries handle their own dependencies while static ones do not. In order to link with library `xyz` as a shared library, it is basically a matter of adding `-lxyz` to the linker -command line no matter which other libraries `xyz` itself was built to -use. But, if that `xyz` is instead a static library we also need to specify +command line no matter which other libraries `xyz` itself was built to use. +However, if that `xyz` is instead a static library we also need to specify each dependency of `xyz` on the linker command line. 
curl's configure cannot keep up with or know all possible dependencies for all the libraries it can be made to build with, so users wanting to build with static libs mostly need to diff --git a/cmdline/copyas.md b/cmdline/copyas.md index b8d63b2006..953bea89dd 100644 --- a/cmdline/copyas.md +++ b/cmdline/copyas.md @@ -27,7 +27,10 @@ Chromium)._ ## From Safari -In Safari, the "development" menu is not visible until you go into **preferences->Advanced** and enable it. But once you have done that, you can select **Show web inspector** in that development menu and get to see a new console pop up that is similar to the development tools of Firefox and Chrome. +In Safari, the "development" menu is not visible until you go into +**preferences->Advanced** and enable it. Once you have done that, you can +select **Show web inspector** in that development menu and get to see a new +console pop up that is similar to the development tools of Firefox and Chrome. Select the network tab, reload the webpage and then you can right click the particular resources that you want to fetch with curl, as if you did it with diff --git a/cmdline/exitcode.md b/cmdline/exitcode.md index 95a76c74b0..35b2ece9af 100644 --- a/cmdline/exitcode.md +++ b/cmdline/exitcode.md @@ -41,18 +41,17 @@ A basic Unix shell script could look like something like this: not enabled or was explicitly disabled at build-time. To make curl able to do this, you probably need another build of libcurl. - 5. Couldn't resolve proxy. The address of the given proxy host could not be + 5. Could not resolve proxy. The address of the given proxy host could not be resolved. Either the given proxy name is just wrong, or the DNS server is - misbehaving and does not know about this name when it should or perhaps - even the system you run curl on is misconfigured so that it does not - find/use the correct DNS server. 
+ misbehaving and does not know about this name when it should or perhaps even + the system you run curl on is misconfigured so that it does not find/use the + correct DNS server. - 6. Couldn't resolve host. The given remote host's address was not - resolved. The address of the given server could not be resolved. Either - the given hostname is just wrong, or the DNS server is misbehaving and - does not know about this name when it should or perhaps even the system you - run curl on is misconfigured so that it does not find/use the correct DNS - server. + 6. Could not resolve host. The given remote host's address was not resolved. + The address of the given server could not be resolved. Either the given + hostname is just wrong, or the DNS server is misbehaving and does not know + about this name when it should or perhaps even the system you run curl on is + misconfigured so that it does not find/use the correct DNS server. 7. Failed to connect to host. curl managed to get an IP address to the machine and it tried to set up a TCP connection to the host but @@ -103,15 +102,15 @@ A basic Unix shell script could look like something like this: passive mode. You might be able to work-around this problem by using PORT instead, with the `--ftp-port` option. - 15. FTP cannot get host. Couldn't use the host IP address we got in the + 15. FTP cannot get host. Could not use the host IP address we got in the 227-line. This is most likely an internal error. 16. HTTP/2 error. A problem was detected in the HTTP2 framing layer. This is somewhat generic and can be one out of several problems, see the error message for details. - 17. FTP could not set binary. Couldn't change transfer method to binary. This - server is broken. curl needs to set the transfer to the correct mode + 17. FTP could not set binary. Could not change transfer method to binary. + This server is broken. curl needs to set the transfer to the correct mode before it is started as otherwise the transfer cannot work. 
18. Partial file. Only a part of the file was transferred. When the transfer @@ -199,9 +198,9 @@ A basic Unix shell script could look like something like this: asking to resume a transfer that then ends up not possible to do, this error can get returned. For FILE, FTP or SFTP. - 37. Couldn't read the given file when using the FILE:// scheme. Failed to - open the file. The file could be non-existing or is it a permission - problem perhaps? + 37. Could not read the given file when using the FILE:// scheme. Failed to + open the file. The file could be non-existing or perhaps it is a permission + problem? 38. LDAP cannot bind. LDAP "bind" operation failed, which is a necessary step in the LDAP operation and thus this means the LDAP query could not be @@ -287,12 +286,12 @@ A basic Unix shell script could look like something like this: 57. **Not used** 58. Problem with the local certificate. The client certificate had a problem - so it could not be used. Permissions? The wrong pass phrase? + so it could not be used. Permissions? The wrong passphrase? - 59. Couldn't use the specified SSL cipher. The cipher names need to be - specified exactly and they are also unfortunately specific to the - particular TLS backend curl has been built to use. For the current list - of support ciphers and how to write them, see the online docs at + 59. Could not use the specified SSL cipher. The cipher names need to be + specified exactly and they are also unfortunately specific to the particular + TLS backend curl has been built to use. For the current list of supported + ciphers and how to write them, see the online docs at [https://curl.se/docs/ssl-ciphers.html](https://curl.se/docs/ssl-ciphers.html). 60. Peer certificate cannot be authenticated with known CA certificates. 
This diff --git a/cmdline/urls/ftptype.md b/cmdline/urls/ftptype.md index 9834660bd5..dc3bd61c7c 100644 --- a/cmdline/urls/ftptype.md +++ b/cmdline/urls/ftptype.md @@ -13,8 +13,8 @@ ASCII could then be made with: curl "ftp://example.com/foo;type=A" -And while curl defaults to binary transfers for FTP, the URL format allows you -to also specify the binary type with type=I: +curl defaults to binary transfers for FTP, but the URL format allows you to +specify the binary type with `type=I`: curl "ftp://example.com/foo;type=I" diff --git a/cmdline/urls/globbing.md b/cmdline/urls/globbing.md index cf23802a6d..d475bf86ea 100644 --- a/cmdline/urls/globbing.md +++ b/cmdline/urls/globbing.md @@ -64,8 +64,8 @@ Or download all the images of a chess board, indexed by two coordinates ranged curl -O "http://example.com/chess-[0-7]x[0-7].jpg" -And you can, of course, mix ranges and series. Get a week's worth of logs for -both the web server and the mail server: +You can, of course, mix ranges and series. Get a week's worth of logs for both +the web server and the mail server: curl -O "http://example.com/{web,mail}-log[0-6].txt" diff --git a/helpers/sharing.md b/helpers/sharing.md index 80ba375e23..2c846218c5 100644 --- a/helpers/sharing.md +++ b/helpers/sharing.md @@ -62,10 +62,9 @@ run its own thread and transfer data, but you still want the different transfers to share data. Then you need to set the mutex callbacks. If you do not use threading and you *know* you access the shared object in a -serial one-at-a-time manner you do not need to set any locks. But if there is -ever more than one transfer that access share object at a time, it needs to -get mutex callbacks setup to prevent data destruction and possibly even -crashes. +serial one-at-a-time manner you do not need to set any locks. If there is ever +more than one transfer that accesses the share object at a time, it needs +mutex callbacks set up to prevent data destruction and possibly even crashes. 
Since libcurl itself does not know how to lock things or even what threading model you are using, you must make sure to do mutex locks that only allows one diff --git a/http/post/multipart.md b/http/post/multipart.md index 1304ff5f12..28f9dd1247 100644 --- a/http/post/multipart.md +++ b/http/post/multipart.md @@ -63,7 +63,7 @@ submitted. The particular boundary you see in this example has the random part `d74496d66958873e` but you, of course, get something different when you run curl (or when you submit such a form with a browser). -So after that initial set of headers follows the request body +After that initial set of headers follows the request body --------------------------d74496d66958873e Content-Disposition: form-data; name="person" diff --git a/http/put.md b/http/put.md index ae891d001c..bb41917683 100644 --- a/http/put.md +++ b/http/put.md @@ -11,7 +11,7 @@ identifies the resource and you point out the local file to put there: curl -T localfile http://example.com/new/resource/file -`-T` implies a PUT and tell curl which file to send off. But the similarities +`-T` implies a PUT and tells curl which file to send off. The similarities between POST and PUT also allows you to send a PUT with a string by using the regular curl POST mechanism using `-d` but asking for it to use a PUT instead: diff --git a/http/redirects.md b/http/redirects.md index 9848209a58..5825337c17 100644 --- a/http/redirects.md +++ b/http/redirects.md @@ -107,10 +107,10 @@ a particular site, but since an HTTP redirect might move away to a different host curl limits what it sends away to other hosts than the original within the same transfer. -So if you want the credentials to also get sent to the following hostnames -even though they are not the same as the original—presumably because you trust -them and know that there is no harm in doing that—you can tell curl that it is -fine to do so by using the `--location-trusted` option. 
+If you want the credentials to also get sent to the following hostnames even +though they are not the same as the original—presumably because you trust them +and know that there is no harm in doing that—you can tell curl that it is fine +to do so by using the `--location-trusted` option. # Non-HTTP redirects diff --git a/http/response.md b/http/response.md index 63053de151..72575fdd99 100644 --- a/http/response.md +++ b/http/response.md @@ -92,11 +92,11 @@ in fact any other compression algorithm that curl understands) by using A less common feature used with transfer encoding is compression. -Compression in itself is common. Over time the dominant and web compatible -way to do compression for HTTP has become to use `Content-Encoding` as -described in the section above. But HTTP was originally intended and specified -to allow transparent compression as a transfer encoding, and curl supports -this feature. +Compression in itself is common. Over time the dominant and web compatible way +to do compression for HTTP has become to use `Content-Encoding` as described +in the section above. HTTP was originally intended and specified to allow +transparent compression as a transfer encoding, and curl supports this +feature. The client then simply asks the server to do compression transfer encoding and if acceptable, it responds with a header indicating that it does and curl then diff --git a/install/container.md b/install/container.md index 084ebe19e3..50e7de8913 100644 --- a/install/container.md +++ b/install/container.md @@ -42,7 +42,7 @@ Invoke curl with `podman`: alias -s curl='podman run -it --rm docker.io/curlimages/curl' -And simply invoke `curl www.example.com` to make a request +Simply invoke `curl www.example.com` to make a request ## Running curl in kubernetes diff --git a/install/linux.md b/install/linux.md index 1d25fbc068..4ad1c2fa8a 100644 --- a/install/linux.md +++ b/install/linux.md @@ -100,7 +100,7 @@ instead of `zypper`. 
To install the curl command-line utility: transactional-update pkg install curl -And to install the libcurl development package: +To install the libcurl development package: transactional-update pkg install libcurl-devel diff --git a/libcurl/globalinit.md b/libcurl/globalinit.md index fe8acd70db..f67925640d 100644 --- a/libcurl/globalinit.md +++ b/libcurl/globalinit.md @@ -11,10 +11,10 @@ global state so you should only call it once, and once your program is completely done using libcurl you can call `curl_global_cleanup()` to free and clean up the associated global resources the init call allocated. -libcurl is built to handle the situation where you skip the `curl_global_init()` call, but -it does so by calling it itself instead (if you did not do it before any actual -file transfer starts) and it then uses its own defaults. But beware that it is -still not thread safe even then, so it might cause some "interesting" side -effects for you. It is much better to call curl_global_init() yourself in a -controlled manner. +libcurl is built to handle the situation where you skip the +`curl_global_init()` call, but it does so by calling it itself instead (if you +did not do it before any actual file transfer starts) and it then uses its own +defaults. Beware that it is still not thread safe even then, so it might cause +some "interesting" side effects for you. It is much better to call +curl_global_init() yourself in a controlled manner. diff --git a/project/comm.md b/project/comm.md index fa3dd1ba89..81b852e9d5 100644 --- a/project/comm.md +++ b/project/comm.md @@ -19,11 +19,11 @@ debugging or whatever. In this day, mailing lists may be considered the old style of communication — no fancy web forums or similar. Using a mailing list is therefore becoming an art that is not practiced everywhere and may be a bit strange and unusual to -you. But fear not. It is just about sending emails to an address that then -sends that email out to all the subscribers. 
Our mailing lists have at most a -few thousand subscribers. If you are mailing for the first time, it might be -good to read a few old mails first to get to learn the culture and what's -considered good practice. +you. It is just about sending emails to an address that then sends that email +out to all the subscribers. Our mailing lists have at most a few thousand +subscribers. If you are mailing for the first time, it might be good to read a +few old mails first to get to learn the culture and what's considered good +practice. The mailing lists and the bug tracker have changed hosting providers a few times and there are reasons to suspect it might happen again in the future. It diff --git a/protocols/http.md b/protocols/http.md index dc068c5072..5a1b8e5b47 100644 --- a/protocols/http.md +++ b/protocols/http.md @@ -43,7 +43,7 @@ A server always responds to an HTTP request unless something is wrong. ## The URL converted to a request -So when an HTTP client is given a URL to operate on, that URL is then used, +When an HTTP client is given a URL to operate on, that URL is then used, picked apart and those parts are used in various places in the outgoing request to the server. Let's take an example URL: diff --git a/transfers/conn/keepalive.md b/transfers/conn/keepalive.md index cc298e8a35..1396db1e5a 100644 --- a/transfers/conn/keepalive.md +++ b/transfers/conn/keepalive.md @@ -2,8 +2,8 @@ Once a TCP connection has been established, that connection is defined to be valid until one side closes it. Once the connection has entered the connected -state, it will remain connected indefinitely. But, in reality, the connection -will not last indefinitely. Many firewalls or NAT systems close connections if +state, it will remain connected indefinitely. In reality, the connection will +not last indefinitely. Many firewalls or NAT systems close connections if there has been no activity in some time period. 
The Keep Alive signal can be used to refrain intermediate hosts from closing idle connection due to inactivity. diff --git a/transfers/drive/multi-socket.md b/transfers/drive/multi-socket.md index 9745ec6570..759efff5da 100644 --- a/transfers/drive/multi-socket.md +++ b/transfers/drive/multi-socket.md @@ -89,7 +89,7 @@ registered: ### timer_callback -The application is in control and waits for socket activity. But even without +The application is in control and waits for socket activity. Even without socket activity there are things libcurl needs to do. Timeout things, calling the progress callback, starting over a retry or failing a transfer that takes too long, etc. To make that work, the application must also make sure to diff --git a/transfers/drive/multi.md b/transfers/drive/multi.md index 6367bf6f18..584de439b1 100644 --- a/transfers/drive/multi.md +++ b/transfers/drive/multi.md @@ -90,8 +90,8 @@ codes*): Both these loops let you use one or more file descriptors of your own on which to wait, like if you read from your own sockets or a pipe or similar. -And again, you can add and remove easy handles to the multi handle at any -point during the looping. Removing a handle mid-transfer aborts that transfer. +Again: you can add and remove easy handles to the multi handle at any point +during the looping. Removing a handle mid-transfer aborts that transfer. ## When is a single transfer done? diff --git a/usingcurl/connections/keepalive.md b/usingcurl/connections/keepalive.md index 1a52fd6896..9d83a6594b 100644 --- a/usingcurl/connections/keepalive.md +++ b/usingcurl/connections/keepalive.md @@ -18,10 +18,10 @@ frames" back and forth when it would otherwise be totally idle. It helps idle connections to detect breakage even when no traffic is moving over it, and helps intermediate systems not consider the connection dead. -curl uses TCP keepalive by default for the reasons mentioned here. 
But there -might be times when you want to *disable* keepalive or you may want to change -the interval between the TCP "pings" (curl defaults to 60 seconds). You can -switch off keepalive with: +curl uses TCP keepalive by default for the reasons mentioned here. There might +be times when you want to *disable* keepalive or you may want to change the +interval between the TCP "pings" (curl defaults to 60 seconds). You can switch +off keepalive with: curl --no-keepalive https://example.com/ diff --git a/usingcurl/connections/name.md b/usingcurl/connections/name.md index 1acb067a85..f27409ec68 100644 --- a/usingcurl/connections/name.md +++ b/usingcurl/connections/name.md @@ -80,7 +80,7 @@ want to send a test request to one specific server out of the load balanced set (`load1.example.com` for example) you can instruct curl to do that. You *can* still use `--resolve` to accomplish this if you know the specific IP -address of load1. But without having to first resolve and fix the IP address +address of load1. Without having to first resolve and fix the IP address separately, you can tell curl: curl --connect-to www.example.com:80:load1.example.com:80 \ @@ -110,6 +110,6 @@ end of the DNS communication to a specific IP address and with use for its DNS requests. These `--dns-*` options are advanced and are only meant for people who know -what they are doing and understand what these options do. But they offer +what they are doing and understand what these options do. They offer customizable DNS name resolution operations.