If anything goes wrong with the deploy script, such as failing tests, no harm will be done because the script exits upon the first error encountered.
How do you clean up? Once the deploy script is fixed, how do you know what’s been done and what needs redoing?
Have you considered ansible/puppet/chef/salt — tools dedicated to deployment and cleanup, with idempotency that lets you fix and repeat a deployment across multiple operating systems and versions?
Cleanup can be as simple as deleting the latest deployment directory, if the script gets that far. The article is about using built-in Linux tools for 'easy' application deployments. One can also use dedicated tools, as you suggested, to further automate the deployment process.
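To illustrate the pattern being discussed, here is a minimal sketch (paths and directory layout are hypothetical, not taken from the article): each run deploys into its own timestamped release directory and only swaps a `current` symlink at the very end, so a failed run leaves nothing to undo beyond deleting the new release directory.

```shell
#!/bin/sh
# Sketch of a fail-fast, easy-to-clean-up deploy (hypothetical layout).
set -eu                                  # exit on the first error or unset variable

APP_ROOT="${APP_ROOT:-/tmp/myapp}"       # assumed deployment root
RELEASE="$APP_ROOT/releases/$(date +%Y%m%d%H%M%S)"

mkdir -p "$RELEASE"
# ... copy files, run tests, etc.; any failure exits here, before the swap ...

# Cut over only after everything above succeeded;
# -n replaces an existing symlink instead of descending into it.
ln -sfn "$RELEASE" "$APP_ROOT/current"

# Cleanup after a failed run is simply: rm -rf "$RELEASE"
```

With this layout, "redoing" a broken deploy is just deleting the half-finished release directory and rerunning; the `current` symlink still points at the last good release the whole time.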
MetaGer is a metasearch engine focused on protecting users’ privacy. Based in Germany and run as a cooperation between the German NGO ‘SUMA-EV - Association for Free Access to Knowledge’ and the University of Hannover, the system is built on 24 small-scale web crawlers under MetaGer’s own control. In September 2013, MetaGer launched MetaGer.net, an English-language version of their search engine.
Did you know that you can hide yourself behind our proxy server just by opening a result anonymously? Use “OPEN ANONYMOUSLY”; this also applies to any links you follow from there.
I feel this sort of endeavour is just a poorly researched attempt at reinventing the wheel. Packaging formats such as Debian's .deb format consist basically of the directory tree to be deployed, packed into an archive along with a couple of metadata files. It's not rocket science. In contrast, these tricks sound like overcomplicated hacks.
Author here. In case it’s not clear, this article isn't about installing Linux packages; it's about deploying multiple versions of software to development and production environments.
Same, it's super simple with Docker, and you don't even need to fiddle with ports or anything. I should probably try running it on my work PC, now that I think of it. Anyway, DuckDuckGo has been good to me all these years.
An employer is unlikely to waste time on deep candidate analysis. If they see you as a public code contributor, it's an upside: it signals activity and experience, and it provides conversation starters and discussion points for any interviews. If they look at your code, it won't be deep. I doubt they would go through the effort of correlating a public coder profile (e.g. on GitHub) with a Lemmy profile and then reading through the posts.
Once they're at the point where that would be a reasonable investment, they already know you personally and don't care about online content anymore.
Maybe some big companies use online analysis tools though.
Anyway, I know what I'm worth as a developer and an employee. I don't think I post the kind of divisive or sensitive stuff that is, or reasonably could be, relevant to my employment and work. If an employer sees it that way, then I'm fine with it not being a match.
I actually think the public nature could and should be upsides. Related to work or not.
I used to have to put !g (redirect to Google) on like half my searches to get the results I wanted. These days, I actually generally prefer DDG's results over Google's.
You know the state and progress of a program from the line you are on. A goto breaks that.
You can index the progress of a program by its static line numbers plus its dynamic state: loop indexes and the function call stack. A goto breaks that. Tracking something like a count of statements executed since the start of the program is infeasible for human understanding.
I also spun up my own YaCy instance. It was pretty terrible. It could be good, but you would need a pretty beefy machine with a lot of storage, plus a lot of time for it to index, before it comes anywhere close to good.