Nice, nice baby
Most Linux users know of the nice command, but few actually use it. Nice is one of those commands that sounds really useful, yet you can never think of a reason to use it. Occasionally, though, it can be invaluable. Nice runs a process with an adjusted priority, giving it a greater or smaller share of the processor. Usually this is handled by the Linux scheduler, which guarantees that processes with a higher priority (like those that involve user input) get their share of the resources. This should ensure that even when your system is at 100% CPU, you can still move windows and click the mouse.
The scheduler doesn't always work smoothly, however, and certain tasks can take over your computer. It could be a wayward find command triggered by a distro's housekeeping scripts, or a batch of video files being encoded, that makes your machine grind to a halt.
You'd typically hunt these processes down with the top command before killing them. Nice presents another, more subtle and more useful option: reducing the offending task's priority so that your system remains usable while the task still gets done. Running a command with a lowered priority is as easy as entering
nice -n 10 updatedb
This runs updatedb with a niceness of 10, lowering its priority (the higher the nice value, the lower the priority). If you run top you can see the nice value under the column labelled 'NI'.
If you wish to reduce the priority of a program that's already running, you need to use the renice command with the process ID:
# renice 10 -p 17082
17082: old priority 0, new priority 10
This sets the process's niceness to 10 and, depending on the nice values of the other processes, lessens the share of CPU time it receives relative to the other tasks.
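If you don't know the process ID, pgrep will look it up by name, and renice accepts the result directly. A quick sketch - 'ffmpeg' here is just a stand-in for whatever is hogging your CPU:
$ renice 10 -p $(pgrep ffmpeg)
Bear in mind that ordinary users can only increase a nice value; lowering it again (raising the process's priority) requires root.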
SSH by proxy
Cryptographic tunnels are a useful way to establish a secure connection between your local PC and a remote machine or server. If you use VNC, the remote desktop client, you've probably already burrowed your way through a tunnel; a sensible way to create one is with SSH, the same tool commonly employed for remote logins.
One of the best uses of SSH tunneling is to access Webmin, the web-based remote configuration tool. You can change almost anything on your system using Webmin, so it's unwise to leave it open to the internet. But if you close it off, you lose the ability to configure your machine remotely. You can get around this limitation by using SSH to tunnel a local port to the port Webmin listens on, like so:
ssh -L 8090:localhost:10000 remotehost
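If you'd rather not keep a shell open just for the tunnel, OpenSSH's -f and -N options push it into the background without running a remote command:
ssh -f -N -L 8090:localhost:10000 remotehost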
Just point your web browser at https://localhost:8090 to connect to your remote Webmin server. You could also forward a proxy service using SSH. If you were at a location where you couldn't access Google or eBay, for example, you could create a tunnel to the proxy server and browse from there. Most distributions include a proxy server, such as Squid. This needs to be installed and running on the remote machine first. Squid uses port 3128, so the command to tunnel Squid would look something like this:
ssh -L 8090:localhost:3128 remotehost
It's then just a matter of configuring your browser to use localhost:8090 as its proxy server, and all subsequent web requests will pass through the SSH tunnel. Using a proxy server in this way also lets you connect to other machines on the proxy's local network, such as 192.168.1.1 - which includes services like a router's web configuration pages.
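Before fiddling with browser settings, you can test the tunnel from the shell; curl's -x option routes a request through a given proxy (assuming curl is installed):
$ curl -x http://localhost:8090 http://192.168.1.1/
If the router's configuration page comes back, the tunnel and the proxy are both doing their jobs.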
No one can hear you screen
Virtual terminals are like children: having one, two or even three brings joy to your life, but more than that puts a strain on your resources. When working remotely, some people miss being able to open multiple terminals, so they simply open several SSH connections to the same machine. Not only is this a waste of bandwidth, it's also a sign you're a newbie - which you're not, right? Veterans know there's a much better way to get multiple terminals, and it comes in the form of the GNU screen program. To get started, open up a terminal, type screen, then hit Enter. Your terminal will be replaced with an empty prompt and you may think nothing has changed, but it has - as you'll see.
Type any command you like, eg uptime, and hit Enter. Now press Ctrl+a then c, and you should see another blank terminal. Don't worry, your old terminal is still there, and still active; this one is new. Type another command, eg ls.
Now, press Ctrl+a then 0 (zero) - you should see your original terminal again. As you can see, Ctrl+a is the combination that signals a command is coming: Ctrl+a then c creates a new terminal, and Ctrl+a then a number switches to that terminal. You can use Ctrl+a then Ctrl+a to switch to the previously selected window, Ctrl+a then Ctrl+n to switch to the next window, or Ctrl+a then Ctrl+p to switch to the previous one. To close a window, just type exit.
When you close your last window, you exit screen too, and it prints 'screen is terminating' to remind you. Alternatively - and this is the coolest thing about screen - you can press Ctrl+a then d to detach your screen session. Then, from another computer later on, log in and use screen -r to pick up where you left off, with all the programs and output intact just as you left them - magic!
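If you run more than one session, naming them keeps things manageable. These are standard screen options: -S names a session, -ls lists the sessions on a machine, and -r reattaches a named one:
$ screen -S compile
$ screen -ls
$ screen -r compile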
Better than a browser
If you often need to retrieve pages from the net and find that using a browser is like using a sledgehammer to crack an egg, then wget is for you. Its info page soberly describes it as a utility for the non-interactive download of files from the web, but what that really means is that sometimes it works better than a browser. You can use wget in a script to download web pages or files, and it's perfect for synchronising local web archives. You don't have to use it in a script, either - it works just as well when executed directly from the shell (http://wget.sunsite.dk).
The most straightforward use for wget is to simply download a file referenced by a URL:
$ wget http://localhost/somefile.tar.gz
This should present you with a text-based download bar. Unfortunately, wget doesn't support wildcards over HTTP, so you couldn't use *.gz to download multiple files (you could if the site used FTP instead - there's an example at the end of this section). wget is most often used to mirror a whole website. Here's an example for downloading a site:
$ wget --mirror -p --html-extension --convert-links http://localhost
Wget traverses the site and downloads the content into the current directory. The --mirror argument enables options suitable for mirroring a website - in particular, recursion for traversing the whole website tree - while -p also fetches the images and stylesheets each page needs. --html-extension helps with sites that generate HTML from CGI scripts or serve ASP files, which need renaming after they're downloaded: if wget recognises the content as HTML, it just adds the .html extension.
After the transfer has finished, --convert-links goes through the local files to rewrite any remote references so the site can be viewed offline.
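And as promised, here's the FTP wildcard trick from earlier - the host and path are placeholders, and the quotes stop your shell from trying to expand the * itself:
$ wget 'ftp://remotehost/pub/*.gz'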