...making Linux just a little more fun!
Ben Okopnik [ben at okopnik.com]
[cc'd to the Answer Gang]
Hi, Long -
On Thu, Mar 10, 2011 at 10:54:18AM -0800, Long Chow wrote:
> Hello Ben Okopnik,
>
> Yesterday I bumped into a su (substitute user) permission error similar
> to your Apr. 2000 article, "Cannot execute /bin/bash: Permission denied".
> I was attempting to run an expect script in non-root user mode on Fedora 8:
>
>   su netter -c "expect try.exp"
>
> and it failed:
>
>   couldn't read file "try.exp": permission denied
>
> No problem if I run:
>   su root -c "expect try.exp"
>   expect try.exp
>
> I pored over permission related avenues for the whole day and failed.
> It was around midnight when I googled upon your article that my hope was
> rekindled.
>
> So the first thing coming into work today...
> Using your approach (especially strace), I found the execution bit for
> others for /root was not set. After setting it, my non-root mode command
> string started to work!
That's actually not a good solution; the correct permissions for /root are 0700. Setting it to 0701, as you have, allows other users to enter that directory - a really bad idea!
ben@Jotunheim:~$ ls -ld /root
drwx------ 11 root root 4096 2011-03-10 21:14 /root
ben@Jotunheim:~$ head -n 1 /root/.bashrc
head: cannot open `/root/.bashrc' for reading: Permission denied
OK, this is what's supposed to happen. But here's what happens when I change the permissions as you specified:
ben@Jotunheim:~$ sudo chmod 0701 /root
[sudo] password for ben:
ben@Jotunheim:~$ head -n 1 /root/.bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
Whoops...
I suspect that the right solution for you would be to put 'try.exp' somewhere other than /root; then you won't have to do anything with those permissions (other than hopefully set them back as quickly as possible.)
Ben
--
OKOPNIK CONSULTING
Custom Computing Solutions For Your Business
Expert-led Training | Dynamic, vital websites | Custom programming
443-250-7895    http://okopnik.com    http://twitter.com/okopnik
[ Thread continues here (2 messages/3.66kB) ]
Ben Okopnik [ben at linuxgazette.net]
On Sat, Mar 19, 2011 at 02:00:47AM +0000, Jimmy O'Regan wrote:
> http://www.readwriteweb.com/archives/crowdsourcing_us_war_papers.php
>
> Source here: https://github.com/chnm/Scripto
Beautiful. Whether Open Source /qua/ Open Source takes over the world or not, its key methods - i.e., accomplishing major tasks by winning brainshare - already have.
--
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
Ben Okopnik [ben at okopnik.com]
Hi, Rui -
I assume that you wanted to send this question to The Answer Gang rather than asking me for a private consult (those tend to be expensive), so I've forwarded it there. Please direct any further emails about it there (tag@lists.linuxgazette.net), and I'll try to answer your questions.
On Tue, Feb 08, 2011 at 11:33:53AM +0000, Rui Fernandes wrote:
> Dear Ben Okopnik,
>
> I've read your article "Installing Perl Modules as a Non-Root User", and
> regarding including the "myperl" in @INC, it worked. But now I have a
> problem that maybe you can help with.
> I'm trying to install a local module in a webserver that isn't mine. I get
> no error with the following Makefile.PL
Do you mean when you run 'make', 'make test', 'make install', or all of them?
> CODE:
> #!/usr/local/bin/perl
> use 5.008007;
> use ExtUtils::MakeMaker;
> # See lib/ExtUtils/MakeMaker.pm for details of how to influence
> # the contents of the Makefile that is written.
> WriteMakefile(
>     NAME          => 'Kepler',
>     VERSION_FROM  => 'lib/Kepler.pm', # finds $VERSION
>     PREREQ_PM     => {}, # e.g., Module::Name => 1.1
>     ($] >= 5.005 ?  ## Add these new keywords supported since 5.005
>       (ABSTRACT_FROM => 'lib/Kepler.pm', # retrieve abstract from module
>        AUTHOR        => 'Rui Fernandes <rui.kepler@gmail.com>') : ()),
>     LIBS          => ['-L/home/username/usr/local/lib -lswe'], # e.g., '-lm'
>     # LIBS        => ['-lswe'], # e.g., '-lm'
>     DEFINE        => '', # e.g., '-DHAVE_SOMETHING'
>     INC           => '-I/home/username/usr/local/include', # e.g., '-I. -I/usr/include/other'
>     INSTALL_BASE  => '/home/username/myperl',
>     # DISTVNAME   => 'perl_kepler', #
>     # Un-comment this if you add C files to link with later:
>     # OBJECT      => '$(O_FILES)', # link all the C files too
> );
>
> END CODE
>
> But when I run the test script, the module isn't found, not even in the
> "myperl/lib" directory.
I'm having trouble parsing that last sentence. Do you mean the module is actually not in myperl/lib, or does your test script not find it?
I suspect that it's the latter. If that's the case, then what's happening is that your web server isn't seeing the correct path. This often happens because the actual path to your home directory is not necessarily the same as what you see when you log in via, say, SSH. For example, on one of my former webservers, the path reported by 'pwd' when I was in my home directory was '/home/ben' - but the real path was something like '/homepages/41/d322536930/'. As a result, using '/home/ben/myperl' as part of my 'use lib' statement was worthless: the web server didn't know anything about a path like that.
Perhaps the easiest way to find out what the server is seeing as your real path is to look at the server environment. Here's an easy way to do that with Perl:
[ ... ]
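A minimal environment-dumping CGI along those lines - a sketch, not necessarily the elided script - might look like this:

```perl
#!/usr/bin/perl
# Sketch only: dump every environment variable that the web server
# passes to a CGI program, one per line.
use strict;
use warnings;

print "Content-Type: text/plain\r\n\r\n";
print "$_=$ENV{$_}\n" for sort keys %ENV;
```

Drop it into your cgi-bin, make it executable, and look at variables like DOCUMENT_ROOT and SCRIPT_FILENAME for the server's idea of your real path.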
[ Thread continues here (1 message/4.22kB) ]
Ben Okopnik [ben at linuxgazette.net]
I've always wondered why "ls" doesn't just have this as an option. Got tired of wondering, so I went ahead and wrote it.
This script is intended to be a drop-in replacement for "ls" - in other words, just put it somewhere accessible and alias it to 'ls'. It takes all the same options that 'ls' does (no wonder; it simply passes the entire argument string to 'ls'), and works in the same way, unless the first option that you specify - and it must be specified by itself - is "-O" (capital "o", not a zero.) In that case, it does all the same stuff but reformats the output a little - just the filetype/permissions section. I haven't done a huge amount of testing on it, so it might be fragile in some unexpected places (reports would be appreciated). Seems OK, though, so I'm releasing it to the unsuspecting world. Enjoy.
#!/usr/bin/perl -w
# Created by Ben Okopnik on Sat Mar 26 19:00:46 EDT 2011
use strict;

if ($ARGV[0] ne '-O'){ exec '/bin/ls', @ARGV } else { shift; }

for (qx#/bin/ls @ARGV#){
    my ($t, $p, $r) = /^(.)([rwxsStT-]+)(\s+\d+\s+\w+.+)$/;
    print and next unless $p;
    my $out = 0;
    my %d = map {split//} qw/sx S- r4 w2 x1 -0/;
    $out += 01000 if $p =~ y/tT/x-/;
    $out += 02000 if $p =~ s/(s)(?=.{3}$)/$d{$1}/i;
    $out += 04000 if $p =~ s/(s)(?=.{6}$)/$d{$1}/i;
    $p =~ s/([rwx-])// and $out += $d{$1} * oct($_) for (100)x3, (10)x3, (1)x3;
    printf "[%s] %04o %s\n", $t, $out, $r;
}
--
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Thread continues here (7 messages/10.83kB) ]
Ben Okopnik [ben at okopnik.com]
Hi, Hans-Peter -
On Mon, Feb 07, 2011 at 06:52:06PM -0500, hanspetersorge@aim.com wrote:
> Hi Ben,
>
> I just read the thread - no idea how to append..
Just send your message to The Answer Gang (tag@lists.linuxgazette.net), and we'll add it to the next issue. I've cc'd this response there, so it'll get used that way.
> My 2-cents: USB is being polled.
You're probably right - and very likely, it's getting polled very rapidly, given the task. I'm not sure what could be done about that - some kernel setting, perhaps?
> strace -p .... might give you some clues too.
The question is, what would I attach it to? 'rsync' wouldn't make a whole lot of sense, since it's not involved in USB polling.
Ben
--
OKOPNIK CONSULTING
Custom Computing Solutions For Your Business
Expert-led Training | Dynamic, vital websites | Custom programming
443-250-7895    http://okopnik.com    http://twitter.com/okopnik
[ Thread continues here (5 messages/9.12kB) ]
Ben Okopnik [ben at linuxgazette.net]
On Fri, Apr 01, 2011 at 10:14:43AM -0700, Mike Orr wrote:
> I got two monthly reminders today about my LG mailing-list
> subscriptions. Both to my same email address, both with the same list
> URL (lists.linuxgazette.net), but with different passwords and
> different lists subscribed. One of the passwords works, the other one
> doesn't. So is there a phantom old server sending out reminders?
Yep. Except it's not coming from the LG address; it's the difference between the 'From ' and the 'From:' addresses (the latter can be faked.)
> Here's the headers for the message with the working password:[snip]
> Return-Path: <mailman-bounces@lists.linuxgazette.net>
                                ^^^^^^^^^^^^^^^^^^^^^^
> Here are the headers for the message with the non-working password:[snip]
> Return-Path: <mailman-bounces@linuxmafia.com>
                                ^^^^^^^^^^^^^^
--
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
Marcello Romani [marcello.romani at libero.it]
Hi,

I had a horror story similar to Ben's, about two years ago. I backed up a PC and reinstalled the OS with the backup USB disk still attached. The OS I was reinstalling was a version of Windows (2000 or XP, I don't remember right now). When the partition creation screen appeared, the list items looked a bit different from what I was expecting, but by the time I realized why, my fingers had already pressed the keys, deleting the existing partitions and creating a new NTFS one. Luckily, I stopped just before the "quick format" command...

Searching the 'net for data recovery software, I came across TestDisk, which is targeted at partition table recovery. I was lucky enough to have wiped out only that portion of the USB disk, so in less than an hour I was able to regain access to all of my data.

Since then I always "safely remove" USB disks from the machine before doing anything potentially dangerous, and check "fdisk -l" at least three times before deciding that the arguments to "dd" are written correctly...
Marcello Romani

TAG mailing list
TAG@lists.linuxgazette.net
http://lists.linuxgazette.net/listinfo.cgi/tag-linuxgazette.net
Ben Okopnik [ben at linuxgazette.net]
This was submitted yesterday, but I figure that it's interesting enough for anyone here who might be into high-end CAD (I dimly recall a conversation about it here...) that I'd give you folks an early preview. IIRC, Dassault Systemes produces the CAD system that was used to design the Airbus.
----- Forwarded message from Christina Feeney <cfeeney@shiftcomm.com> -----
Date: Wed, 9 Mar 2011 10:11:59 -0500
From: Christina Feeney <cfeeney@shiftcomm.com>
To: "'ben@linuxgazette.net'" <ben@linuxgazette.net>
Subject: 2D CAD DraftSight Now Available for Linux

Hi Ben,
The wait is over – free 2D CAD software DraftSight is officially available for Linux! The demand for this operating system has been overwhelming and DraftSight is thrilled to be able to offer it to everyone today. The news just crossed the wire early this morning and I wanted to make sure you had all of the details. The full release is embedded below. Please let me know if you have any questions or would like screenshots.
Thanks!
Christina
Dassault Systèmes’ DraftSight Now Available for Linux
Linux Users Can Now Create, Edit and View DWG Files with DraftSight
VÉLIZY-VILLACOUBLAY, France – March 9, 2011 – Dassault Systèmes (DS) (Euronext Paris: #13065, DSY.PA), a world leader in 3D and Product Lifecycle Management (PLM) solutions, today announced the availability of a beta release of DraftSight for Linux. DraftSight is a no-cost 2D CAD product for CAD professionals, students and educators that can be downloaded at DraftSight.com.
DraftSight for Linux allows users to create, edit and view DWG files. DraftSight generally takes a few minutes to download and runs on multiple operating systems, including Linux and Mac OS in beta, and Windows XP, Windows Vista and Windows 7 in general release.
“We’re very excited to finally announce to the DraftSight community the availability of Linux in beta for DraftSight,” said Aaron Kelly, senior director, DraftSight, Dassault Systèmes. “We’ve been working on the Linux version since the launch of DraftSight and have seen a significant rise in demand for this over the last few months. It’s been our objective since the start to respond to users by providing them with products that will meet their needs.”
DraftSight beta users have access to no-cost Community Support available within the DraftSight open, online SwYm community where they can access support and training resources, along with an environment to interact, ask questions and share their opinions. The DraftSight community is one of the first social networks designed by engineers for engineers, designers and architects.
For more information, please visit DraftSight.com. Also, check out DraftSight on Facebook and Twitter.
###
Christina Feeney | Senior Account Executive | SHIFT Communications |
phone: 617.779.1805 | mobile: 617.240.9181 | email: cfeeney@shiftcomm.com |
web: www.shiftcomm.com | blog: www.pr-squared.com |

----- End forwarded message -----
[ ... ]
[ Thread continues here (1 message/3.49kB) ]
Henry Grebler [henrygrebler at optusnet.com.au]
Hi Francis,
-->I see one possible typo in a not-yet-much-used one:
-->
-->"""
-->I have just changed this alias to
-->
-->    alias w '(pwd; /bin/pwd ) | uniq; df -h | tail -1'
-->
-->I often also want to know whether I'm on a local disk or not.
-->"""
-->
-->That probably wants to be "df -h ." (at least on recent-ish Debian and
-->RedHat systems).
Absolutely correct. Thanks for that.
-->And if you are likely to be on a system where the "Filesystem"
-->name is quite long (such as "/dev/mapper/VolGroup00-LogVol00"), then
-->using "tail -n +2" might be handy too -- but I can't comment on the
-->portability/compatibility of that option across systems.
Perfect!
Another example of "write in haste, repent at leisure." I should stick to presenting the tried and true - and forget about improvising.
Thank you for your fixes. I will adjust my files immediately.
Cheers,
Henry
Ben Okopnik [ben at linuxgazette.net]
Hello, all -
After a number of years of running LG, I've reached a stopping point: I can't continue producing it, for a variety of interconnected reasons. In short, the group of people currently involved in making LG happen has become so small that almost the entire thing has fallen on me - and the technical side of producing it is so complicated that my current level of business and other life involvement does not leave me with enough spare time to do it. As a result, there was no LG issue for this month - and despite giving it my best shot over the past two weeks, I have still not managed to release one.
That seems to me to be an extraordinarily clear signal that the time has come for me to step down. Whether that means that someone else gets to take over, or whether LG simply ends at this point, I don't know. I've tried to prevent the latter... but it appears that I have reached the end of what I can do in that regard. Whether it continues or not has to become someone else's concern. I am no longer able to carry the load.
I'm sure that someone with better management skills than mine could organize and structure a working team that would distribute the load; someone with more time could replace the totally outdated and rather arcane production system, which currently takes many hours of work to release an issue (by contrast, a CMS such as WordPress could easily be configured to replicate the LG site structure, and production - as well as adding proofreaders and editors - would become a trivial matter rather than the complicated and fragile SVN-based system currently in place.) I would be happy to advise whoever takes over, assuming that someone does.
The current state, however, is that LG is no longer being actively produced by me. My most sincere apologies to anyone that I have let down.
--
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Thread continues here (35 messages/74.71kB) ]
Ben Okopnik [ben at linuxgazette.net]
Hi, all -
I'm hoping that someone here has experience in setting up SVN. I'm running into several issues in setting it up for LG... frankly, it's mostly my own mental state more than anything else; 'buffer full' at the moment. Unsurprising, after a couple of weeks of full-time training course development, setting up LG in its new home, and learning a new language during all that. My brain is pretty cooked, and I desperately need a break. I'm just hoping that someone here has a shortcut; it shouldn't be that difficult, really.
What I'd like to do is set it up so that I don't have to create a system account for everyone who needs SVN access. We don't even need web access to the repo; we just need our editors and proofreaders to authenticate via whatever mechanism SVN uses, pull a copy of the repo, and check their work back in whenever they're done with it.
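For the record, one common way to get exactly this - sketched from memory, with illustrative paths and usernames - is svnserve's own password database, which requires no system accounts at all:

```shell
# Illustrative layout; adjust paths, host, and names to taste.
svnadmin create /var/svn/lg

# In /var/svn/lg/conf/svnserve.conf, uncomment/set:
#   [general]
#   anon-access = none
#   auth-access = write
#   password-db = passwd

# In /var/svn/lg/conf/passwd, list the editors and proofreaders:
#   [users]
#   proofreader1 = sekrit

# Run svnserve as a daemon, rooted at the repositories' parent dir:
svnserve -d -r /var/svn

# Each contributor then checks out over the svn:// protocol:
svn checkout svn://example.com/lg
```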
Any help - and the best of all possible worlds would be an offer like "I'd be happy to do it for you!" - would be very welcome.
--
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Thread continues here (18 messages/35.79kB) ]
Ben Okopnik [ben at okopnik.com]
On Tue, May 10, 2011 at 01:00:10AM +0100, Jimmy O'Regan wrote:
> I've got a bunch of photos from a Picasa installation, that have face
> regions marked in the .picasa.ini file that I've been trying to get
> into a more useful form.
>
> So far, I can convert to the format used in Microsoft's photo region
> schema (which is about as close as it gets to a standard):
>
> sub padrect($) {
             ^^^
You probably shouldn't do that - not unless you know exactly why you're doing it (hint: "I want this sub to take only a single scalar argument" is not a good reason) and what the side effects and pitfalls are. For the full treatment, see "perldoc perlsub". The take-home rule of thumb is "don't".
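A quick illustration of the classic pitfall - the sub and variable names here are made up for the demo, not taken from Jimmy's code:

```perl
use strict;
use warnings;

# With a ($) prototype, an array argument gets evaluated in scalar
# context - i.e., the sub receives the array's element count.
sub first_proto($) { return $_[0] }
sub first_plain    { return $_[0] }

my @rect = (10, 20, 110, 220);

print first_proto(@rect), "\n";   # 4  - the element count, not 10!
print first_plain(@rect), "\n";   # 10 - the first element, as expected
```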
> The method I'm using to pad out the string is horrible and ugly - is
> there a nicer way to do it?
Yep. Thomas' advice is right on the dot; in this kind of situation, 'pack' (and often 'vec') are your very good friends.
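For instance, a minimal sketch of 'pack'-based padding - the 8-byte width is arbitrary, purely for illustration:

```perl
use strict;
use warnings;

# "A8": ASCII string, space-padded to 8 bytes; "a8" pads with NULs.
# The width is made up here - use whatever the target format needs.
my $id     = "abc";
my $padded = pack "A8", $id;      # "abc" plus 5 trailing spaces
printf "[%s] length=%d\n", $padded, length $padded;

# unpack "A8" strips the trailing padding back off:
my $restored = unpack "A8", pack "a8", $id;
print "$restored\n";              # "abc"
```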
Ben
--
OKOPNIK CONSULTING
Custom Computing Solutions For Your Business
Expert-led Training | Dynamic, vital websites | Custom programming
443-250-7895    http://okopnik.com    http://twitter.com/okopnik
Ben Okopnik [ben at linuxgazette.net]
On Thu, Mar 03, 2011 at 10:05:43PM +0000, Jimmy O'Regan wrote:
> IBM's Watson, DeepBlueQA, Wins on Jeopardy!
>
> Um... wouldn't it have been worth noting that Watson ran (in part) on Linux?
<blonde moment>Why, is that important?</blonde moment>
That's what I get for trying to get an issue out the door and not having anyone backstop me. We really need a post-production reviewer besides just me.
> There's an article with some of the details here:
> http://www.stanford.edu/class/cs124/AIMagzine-DeepQA.pdf
The Linux Gazette is always happy to accept submissions, corrections, and contributions from qualified volunteers.
--
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
Ben Okopnik [ben at okopnik.com]
On Tue, Apr 26, 2011 at 02:36:47PM +1000, Amit Saha wrote:
>
> Disk /dev/sdc: 6469 MB, 6469189632 bytes
> 200 heads, 62 sectors/track, 1018 cylinders, total 12635136 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
                   ^^^^^^^^^^
> Disk /dev/sdc doesn't contain a valid partition table
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Presuming that it previously had one, along with a reasonable disk identifier, it looks like your card is no longer readable. Assuming that there's no "protect" switch on it - you did look for one, right? - I suspect that it has failed. That is, it can be read enough to determine its physical characteristics - I don't know how the two processes differ, I only know that they do - but you can't access its storage. In other words, it's junk.
> Anything else I can try? Any info appreciated.
Try a raw read.
# See if you can copy the first 512 bytes
sudo dd if=/dev/sdc of=/tmp/mbr bs=512 count=1
# If that succeeds, then back it up, quick!
sudo dd if=/dev/sdc of=/tmp/sdc_backup.raw bs=4096
Ben
--
OKOPNIK CONSULTING
Custom Computing Solutions For Your Business
Expert-led Training | Dynamic, vital websites | Custom programming
443-250-7895    http://okopnik.com    http://twitter.com/okopnik
[ Thread continues here (5 messages/8.44kB) ]
Ben Okopnik [ben at linuxgazette.net]
On Wed, Mar 09, 2011 at 04:38:18PM +0100, Predrag Ivanovic wrote:
> <courtesy of LWN>
>
> "One of the most deeply held beliefs in the culture of *nix (and everything
> that springs from it) is that the steep learning curve pays off. Yes, the
> tools seem cryptic and “hard-to-use”, with hardly any crutches for the
> beginner. But if you stick with it and keep learning you will be rewarded.
> When you grok the power of economical command lines, composability and
> extensibility, you’re glad you didn’t run back to the arms of the GUI on
> the first day. It was worth it.[...]"
>
> Yes it was, and still is.
>
> Full text at
> http://blog.vivekhaldar.com/post/3339907908/the-cognitive-style-of-unix
This is, or is closely related to, the thing that attracted me to Linux in the first place. In the Windows world, you're either a user - which is defined by a very narrow, small set of pointy-clicky skills and not much, if any, understanding of the mechanisms you use - or you're some sort of a "wizard", which gets defined in all sorts of arcane ways, mostly meaning "knows some stuff beyond what users know." All that "stuff", however, doesn't form any kind of a coherent whole: it's all chunks and bits and pieces, with no relation of anything to anything else.

The only choices, if you wanted more than the minimum, were 1) specialize - meaning something like learning a certain language or a given application - or 2) gather enough critical mass of random stuff until you formed a gravity well of your own and could pull out some sort of a related useful fact when a problem came along. All very haphazard, and somewhat akin to stumbling around in a dark dungeon until you found some treasure or (far more likely) ran into some kind of a monster.
YOU'RE TRAPPED IN A MAZE OF TWISTY PASSAGES, ALL ALIKE, AND YOU'RE LIKELY TO BE EATEN BY A GRUE.
(It is worth noting that I functioned in that world, professionally, for a number of years, all the way from a wet-behind-the-ears teenage computer repairman to working as CIO for an insurance company. This perspective is not formed by my desire to promote Linux; quite the opposite, if anything - I advocate Linux usage because I have this perspective, which was formed by long experience.)
[ ... ]
[ Thread continues here (1 message/4.86kB) ]
Ben Okopnik [ben at linuxgazette.net]
http://www.google.com/webhp?hl=xx-hacker
Ran across this purely by accident. Wild.
--
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
Ben Okopnik [ben at linuxgazette.net]
On Wed, Apr 20, 2011 at 06:48:52PM +1000, Amit Saha wrote:
>
> # from http://stackoverflow.com/questions/54139[...]cting-extension-from-filename-in-python/
> file_type = os.path.splitext(filename)[1]
I generally try to avoid importing modules if there's a built-in method that works just as well. In this case, you're importing "os" anyway, but this also works in scripts where you don't.
file_type = filename.split('.')[-1]
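Note, though, that the two behave differently at the edges - splitext() keeps the dot, and they disagree entirely on dotless names:

```python
import os

# Example names - not from the thread.
print(os.path.splitext("archive.tar.gz")[1])  # '.gz' (dot included)
print("archive.tar.gz".split('.')[-1])        # 'gz'  (no dot)

# A file with no extension at all:
print(os.path.splitext("README")[1])          # ''       (empty string)
print("README".split('.')[-1])                # 'README' (the whole name)
```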
> # Uses execlp so that the system PATH is used for finding the program file
> # location
> os.execlp(program,'',filename)
The main problem with this type of script is that you have to be a Python programmer to add a filetype/program pair. I'd suggest breaking out the dictionary as an easily-parseable text file, or adding a simple interface ("Unsupported filetype. What program would you like to use?") that updates the list.
> Is there any Linux command line tool which can easily do this?
Midnight Commander is quite good at this; in fact, I've used that functionality of 'mc' in some of my shell scripts for just this purpose. It uses a mix of regex matching, MIME-type matching, and literal extension recognition to work with various files - e.g., the ".gz" extension in "foo.1.gz" doesn't tell you anything about the type of file it is (a man page).
--
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Thread continues here (8 messages/18.25kB) ]
By Deividson Luiz Okopnik and Howard Dyckoff
Please submit your News Bytes items in plain text; other formats may be rejected without reading. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net. Deividson can also be reached via twitter.
At the March AnDevCon Conference, OpenLogic, Inc., a provider of open source governance solutions, announced the results of a scan and license compliance assessment of 635 leading mobile applications. Among other findings, the results show that 71% of Android, iPhone and iPad apps containing open source failed to comply with basic open source license requirements.
OpenLogic found several apps with extensive EULAs that claimed all of the software included was under their copyright and owned by them - when in fact some of the code in the app was open source.
Using its scanner, OSS Deep Discovery, OpenLogic scanned compiled binaries and source code, where available, for 635 mobile applications to identify open source under GPL, LGPL and Apache licenses. For the 66 applications scanned that contained Apache or GPL/LGPL licenses, 71% failed to comply with four key obligations that OpenLogic analyzed. These included:
GPL/LGPL license requirements to:
Apache license requirements to:
"Many mobile and tablet developers may not have a complete picture of the open source they are using and the requirements of the open source licenses. This has real-world implications. For example, the Free Software Foundation has stated that the GPL and iTunes license are not compatible, and Apple has already pulled several apps from the store that were determined to be under the GPL," said Kim Weins, senior vice president of products and marketing at OpenLogic. "Google has also received takedown requests for Android market apps that violated the GPL. App developers need to pay attention to open source license compliance to ensure their apps are not impacted by legal actions."
Out of the 635 apps scanned, OpenLogic identified 52 applications that use the Apache license and 16 that use the GPL/LGPL license.
OpenLogic found that among the applications that use the Apache or GPL/LGPL licenses, the compliance rate was only 29%. Android compliance was 27% and iPhone/iOS compliance was 32%. Overall compliance of Android applications using the GPL/LGPL was 0%.
Although the research did not specifically analyze conflicts between different licenses, OpenLogic noted that 13 of the applications that came from the Apple App Store used GPL/LGPL. The App Store has already removed other applications that included GPL/LGPL licenses. In addition, two of the applications on Android contained LGPLv2.1, a license that could have potential conflicts with Apache 2.0 - the major license of the Android operating system.

OpenLogic provides enterprises with a certified library of open source software that encompasses hundreds of the most popular open source packages via OpenLogic Exchange (OLEX), a free web site where companies can find, research, and download certified, enterprise-ready open source packages on demand. For more information, visit: http://olex.openlogic.com/.
IronKey, a leader in securing data and online access, released a survey of IT security professionals working at UK-based organisations including Lloyds Banking Group, HP, Fujitsu, Siemens, Worcester County Council and Cleveland Police. The study showed that 31 per cent suffered one or more organised attacks in the last 12 months, resulting in theft of data or money.
Besides suffering at least one cyber attack in the last 12 months, 45 per cent believed their organisation is a target of organised cyber-crime that could result in theft or sabotage.
"Unfortunately the results of our research don't really come as a shock, as the past 12 months have seen some of the biggest and most successful cyber-attacks our industry has ever witnessed," said Dave Jevans, founder of IronKey and the Anti-Phishing Working Group.
When asked about the most significant information security threat facing their organisation today, 54 per cent of respondents highlighted accidental data leakage by staff, contractors or vendors as the biggest threat. The past five years of highly publicised data breaches, and the power of the Information Commissioner's Office (ICO) to levy fines of up to £500,000, have gained the attention of organisations. In contrast, only 10 per cent fear external attacks on networks and systems, and only 13 per cent see Trojans that steal data or money, or that sabotage systems, as a significant threat to their organisation.
While 44 per cent of respondents believed an untrusted desktop or laptop is the most vulnerable location for an advanced persistent threat (APT) attack, it appears respondents prefer traditional methods, such as end-user education (44 per cent) or anti-virus (29 per cent), over technology that isolates users and data from threats (19 per cent), as the most effective tool to prevent APT attacks.
"Unfortunately, end user education and anti-virus were all in place at organisations that suffered painful losses as a result of APT attacks. Doing the same thing over and over won't make the problem go away - criminals are only more encouraged," commented Jevans. "As an industry, we need to shift away from trying to be all knowing and detecting threats we can't know about until they happen. Instead, we need to isolate users of sensitive data and transactions away from the problem."
As a result of cyber-crime, British business is estimated to be losing £20bn a year. Targeted attacks on the global energy industry as part of the Night Dragon attacks, the breach of infrastructure at RSA, compromise of digital certificate issuance at Comodo, and theft of millions of customer records from Epsilon all show that any organisation is a potential target.
IronKey also announced the availability of IronKey Trusted Access for Banking 2.7 which addresses the continuing needs of banks to isolate customers from the growing threat of crimeware and online account takeovers. The new update includes IronKey's keylogging protection that blocks the capture of user credentials, one-time passcodes (OTP), challenge questions, and other sensitive data criminals can otherwise easily steal.
Don't miss the 3-day conference program filled with cutting-edge systems research, including invited talks by Stefan Savage on "An Agenda for Empirical Cyber Crime Research," Finn Brunton on "Dead Media: What the Obsolete, Unsuccessful, Experimental, and Avant-Garde Can Teach Us About the Future of Media," and Mark Woodward on "Linux PC Robot." Gain insight into a variety of must-know topics in the paper presentations and poster session. Since USENIX ATC '11 is part of USENIX Federated Conferences Week, you'll also have increased opportunities to mingle with colleagues and leading experts across multiple disciplines.
Register by May 23, and save! Additional discounts are available.
http://www.usenix.org/atc11/lg
Back for 2011, WebApps '11 features cutting-edge research that advances the state of the art, not only on novel Web applications but also on infrastructure, tools, and techniques that support the development, analysis/testing, operation, or deployment of those applications. The diverse 2-day conference program will include invited talks by industry leaders including Finn Brunton, NYU, on "A Tour of the Ruins: 40 Years of Spam Online," a panel on "The Future of Client-side Web Apps," a variety of topics and new techniques in the paper presentations, and a poster session. Since WebApps '11 is part of USENIX Federated Conferences Week, you'll also have increased opportunities for interaction and synergy across multiple disciplines.
Register by May 23, and save! Additional discounts are available.
http://www.usenix.org/atc11/lg
ApacheCon 2011: Open Source Enterprise Solutions, Cloud Computing, Community Leadership
Apache technologies power more than 190 million Websites and countless mission-critical applications worldwide; over a dozen Apache projects form the foundation of today’s Cloud computing. There’s a reason that five of the top 10 Open Source software downloads are Apache, and understanding their breadth and capabilities has never been more important. ApacheCon is the official conference, trainings, and expo of The Apache Software Foundation, created to promote innovation and explore core issues in using and developing Open Source solutions "The Apache Way". Highly-relevant sessions demonstrate specific professional problems and real-world solutions focusing on “Apache and”: Enterprise Solutions, Cloud Computing, Emerging Technologies + Innovation, Community Leadership, Data Handling, Search + Analytics, Pervasive Computing, and Servers, Infrastructure + Tools. Join the global Apache community of users, developers, educators, administrators, evangelists, students, and enthusiasts 7-11 November 2011 in Vancouver, Canada. Register today http://apachecon.com/
The newest version of Ubuntu Linux was released at the end of April, featuring the new Unity user interface - a source of some controversy among beta testers - and added support for cloud deployments and DevOps automation.
Ubuntu 11.04 Server, while not a release with Long Term Support (LTS), includes upgrades for the core cloud solution, Ubuntu Enterprise Cloud. The release includes the latest stable version of technology from Eucalyptus for those looking to build private clouds. Also included is OpenStack's latest release, 'Cactus'.
For those working on public clouds, Ubuntu Server 11.04 will be available from Amazon Web Services (AWS). In addition, Canonical is announcing Ubuntu CloudGuest later in May, allowing individuals and businesses to test and develop on the cloud with support and systems management from Canonical. For the first time, potential users can test-drive Ubuntu online using only a web browser. Visitors to Ubuntu.com will be able to access a complete version of the latest product without having to download anything.
In part, the CloudGuest program replaces the free CD program that Canonical has discontinued.
With the Unity UI, Ubuntu has better support for touch screens and multi-touch gesture controls. There is also better support for Sandy Bridge and Radeon graphics chipsets and drivers. The Ubuntu Software Center, used to download free applications, has been integrated with the Unity program launcher and shows user reviews of potentially useful software.
Ubuntu 11.04 includes Cobbler and MCollective to help automate system administration tasks and orchestrate operations, with policy-based configurations and RPC calls.
See the Ubuntu Downloads page.
Here comes Slackware Linux 13.37, a new version of the world's oldest surviving Linux-based operating system.
Slackware 13.37 has been released with enhanced performance and stability after a year of rigorous testing. Slackware 13.37 uses the 2.6.37.6 Linux kernel and also ships with 2.6.38.4 kernels for those who want to be at the bleeding edge. Firefox 4.0 is the default web browser, the X Window System has been upgraded (and includes the open source nouveau driver for NVIDIA cards), and the desktop is KDE 4.5.5. Even the Slackware installer has been improved, with support for installing to btrfs, a one-package-per-line display mode option, and an easy-to-set-up PXE install server that runs off the Slackware DVD.
The Speakup driver, used to support speech synthesizers providing access to Linux for the visually impaired community, has now been merged into all of the provided kernels.
See the complete list of core packages in Slackware 13.37.
See the list of official mirror sites.
Two Linux Foundation Working Groups released major updates of their collaborative work in April.
The Yocto 1.0 Project Release became available and includes major improvements to its developer interface and build system, providing developers with even greater consistency in the software and tools they're using across multiple architectures for embedded Linux development. For more information, click here.

Carrier Grade Linux 5.0 covers several specification categories that include Availability, Clustering, Serviceability, Performance, Standards, Hardware, and Security. Also, a number of requirements have been dropped from the specification due to the mass adoption and ubiquity of CGL and its inclusion in the mainline Linux kernel, which allows these specifications to become more consistent fixtures across different distributions. For more information and to review the CGL 5.0 specification, please visit the Carrier Grade Linux page.
This next generation open application platform provides a broad choice of developer frameworks and cloud deployment options.
VMware delivered Cloud Foundry in April, an open Platform as a Service (PaaS) architected specifically for cloud computing environments and delivered as a service from both enterprise datacenters and public cloud service providers. Cloud Foundry enhances the ability of developers to deploy, run and scale their applications in cloud environments while allowing wide choice of public and private clouds, developer frameworks and application infrastructure services.
VMware is introducing a new VMware-operated developer cloud service, a new open source PaaS project, and the first ever "Micro Cloud" PaaS solution. VMware introduced Cloud Foundry at an event where developer community leaders highlighted the value of an open PaaS in advancing highly productive development frameworks for the cloud. Speakers included Dion Almaer and Ben Galbraith, co-founders of FunctionSource; Ryan Dahl, creator of Node.js, from Joyent; Ian McFarland, VP of Technology at Pivotal Labs; Roger Bodamer of 10Gen, steward of MongoDB; and Michael Crandell, CEO and co-founder of RightScale. Further industry support and blogs are available from 10Gen and RightScale.
Modern application development faces a growing set of challenges such as diverse application development frameworks, choices in new data, messaging, and web service application building blocks, heterogeneous cloud deployment options, and the customer imperative to deploy and migrate applications flexibly across enterprise private clouds and multiple cloud service providers.
PaaS offerings have emerged as the modern solution to the changing nature of applications, increasing developer efficiency, while promising to let developers focus exclusively on writing applications, rather than configuring and patching systems, maintaining middleware and physical machines and worrying about network topologies.
Early PaaS offerings, however, restricted developers to specific or non-standard development frameworks, a limited set of application services, or a single, vendor-operated cloud service. "For all of the developer interest in the potential benefits of PaaS solutions, actual adoption has been slowed by their employment of non-standard components and frameworks, which raise the threat of lock-in," said Stephen O'Grady, Principal Analyst at RedMonk. "With Cloud Foundry, VMware is providing developers a PaaS platform with the liberal licensing and versatility to accommodate the demand for choice in developer programming languages."
Cloud Foundry is a modern application platform built specifically to simplify the end-to-end development, deployment and operation of cloud era applications. Cloud Foundry orchestrates heterogeneous application services and applications built in multiple frameworks and automates deployment of applications and their underlying infrastructure across diverse cloud infrastructures.
Cloud Foundry supports popular, high productivity programming frameworks, including Spring for Java, Ruby on Rails, Sinatra for Ruby and Node.js, as well as support for other JVM-based frameworks including Grails. The open architecture will enable additional programming frameworks to be rapidly supported in the future. For application services, Cloud Foundry will initially support the MongoDB, MySQL and Redis databases with planned support for VMware vFabric services.
Cloud Foundry is not tied to any single cloud environment, nor does it require a VMware infrastructure to operate. Rather, Cloud Foundry supports deployment to any public or private cloud environment, including those built on VMware vSphere, those offered by VMware vCloud partners, and non-VMware public clouds, with demonstrated support for Amazon Web Services via cloud management provider RightScale.
Cloud Foundry will be offered in multiple delivery models.
In April, Intel released the Intel Atom chip formerly codenamed "Oak Trail," which will be available in devices starting in May. In addition, at the Intel Developer Forum in Beijing, the company gave a sneak peek of its next-generation, 32nm Intel Atom platform, currently codenamed "Cedar Trail." This solution will help to enable a new wave of fanless, cool and quiet netbooks, entry-level desktops and all-in-one designs.
The new Intel Atom processor Z670, part of the "Oak Trail" platform, delivers improved video playback, fast Internet browsing and longer battery life. The rich media experience available with "Oak Trail" includes support for 1080p video decode, as well as HDMI. The platform also supports Adobe Flash, enabling rich content and Flash-based gaming.
The platform also helps deliver smaller, thinner and more efficient devices by packing integrated graphics and the memory controller directly onto the processor die. The processor is 60 percent smaller than previous generations with a lower-power design for fanless devices as well as up to all-day battery life. Additional features include Intel Enhanced Deeper Sleep that saves more power during periods of inactivity. An integrated HD decode engine enables smooth 1080p HD video playback with less power used.
The Intel Atom processor Z670 allows applications to run on various operating systems, including Google Android, MeeGo and Windows. This flexibility aids hybrid designs that combine the best features of the netbook and tablet together.
Splunk has released version 4.2 of its software that collects, indexes and harnesses any machine data logs generated by an organization's IT systems and infrastructure - physical, virtual and in the cloud.
Splunk 4.2 builds on the innovation of previous releases, adding real-time alerting, a new Universal Forwarder, improved usability and performance, and centralized management capabilities for distributed Splunk deployments.
Machine data holds a wealth of information that can be used to obtain operational intelligence and provide valuable insights for IT and the business. Splunk is the engine for machine data that helps enterprises improve service levels, reduce operations costs, mitigate security risks, enable compliance and create new product and service offerings.
Splunk 4.2 new features include:
Real-time alerting. Provides immediate notification and response for events, patterns, incidents and attacks as they occur.
Universal Forwarder. New dedicated lightweight forwarder delivers secure, distributed, real-time data collection from thousands of endpoints with a significantly reduced footprint.
Easier and faster. New ways to visualize data, quick start guides for new users, integrated workflows for common tasks and up to 10 times faster search experience in large-scale distributed deployments.
Better management of Splunk. New centralized monitoring and license management facilitate the management of multiple Splunk instances from one location.
For more on the Splunk 4.2 release, download a free copy here.
Talkback: Discuss this article with The Answer Gang
Deividson was born in União da Vitória, PR, Brazil, on 14/04/1984. He became interested in computing when he was still a kid, and started to code when he was 12 years old. He is a graduate in Information Systems and is finishing his specialization in Networks and Web Development. He codes in several languages, including C/C++/C#, PHP, Visual Basic, Object Pascal and others.
Deividson works in Porto União's Town Hall as a Computer Technician, and specializes in Web and Desktop system development, and Database/Network Maintenance.
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
Howard maintains the Technology-Events blog at
blogspot.com from which he contributes the Events listing for Linux
Gazette. Visit the blog to preview some of the next month's NewsBytes
Events.
By Anonymous
Coming back again to Ubuggy in the text console, be sure you are relaxed and in tolerance mode.
First, if you do not know it by its legal name, Ubuggy goes mostly by the nickname of Ubuntu. I have no problems using the nickname.
Second, assume that we have switched from the Ubuntu desktop to pure text mode with Ctrl-Alt-F1 or Ctrl-Alt-F2 or whatever. We are not in a graphic terminal.
Third, adjustments to the keymap in text mode do not affect the desktop.
Fourth, you have superuser privileges or else you can stop here.
And what are we dealing with this time? The caps lock key. When pressing it intentionally or inadvertently in the text console, you notice that other keys are not responding or are responding the wrong way. The caps lock LED is not lighting up.
[ Just for the record: I'm using Ubuntu, and my text terminal keys all seem to work fine - although the caps lock LED does fail to come on. -- Ben ]
Google to the rescue: the problem is at least 6 years old. Has it been solved? You have one guess and if you fail, nobody can help you.
It turns out that Ubuntu has just inherited the problem, although you can assume they made it worse. The problem is:
http://old.nabble.com/Bug-514212%3A-console-setup%3A-on-UTF-8-console,-caps-lock-is-turned-into-a-shift-lock-td21848145.html
http://www.mail-archive.com/debian-boot@lists.debian.org/msg106545.html
So the traditional caps lock action - called Caps_Lock in the kernel keymap - only works for ASCII or more precisely for latin1.
The intended solution is to replace it with CtrlL_Lock which is nothing but a sticky Ctrl key to help produce Unicode capitals when set.
The way to hell is paved with Unicode intentions. For the moment, the patches do not work properly. Or maybe they do in a different distro but Ubuntu has further complications:
The culprit is console-tools, a project offering alternative loadkeys/dumpkeys that was launched in the late '90s and fell into non-maintenance shortly afterwards. Its short life was, however, long enough to lure Debian into a costly mistake they still have not recovered from. OK, but Ubuntu is Ubuntu and they are responsible for what they do even if they are based on Debian. Just thank console-tools if you change your text mode keymap and later on it is rolled back unbeknownst to you.
So what has to be done until isolated kernel patchers close in on a safe caps lock solution and Ubuntu converts to a safe text mode keymap?
I was asking myself that question and couldn't find an answer. But then it occurred to me that normally you and I do not type lengthy texts in all caps. For a short text of half a screen line, it is not difficult to hold down the Shift key and type the letters. For lengthy text, should the necessity really arise, type the lowercase text in an editor, turn it upper case with the editor, and paste it wherever it is supposed to go. In other words, who needs Caps Lock besides the Nigerians offering you millions?
And where does this key come from in the first place? Decades ago, in the world of telegraph and telex, there was no lower case. Typewriters could be shifted to upper case. When, circa 1970, dedicated text-processing machines were introduced, they had keyboards with lower case and were mimicking typewriters, so they got the caps lock key. But why the hell is it still there, firmly planted on our keyboards, forty years on?
Let's solve the problem without waiting for the perfect kernel patch: let's nuke Caps Lock.
In step 1, issue:
dumpkeys -1 > kdump.txt
In step 2, load kdump.txt into an editor and replace all instances of CtrlL_Lock with VoidSymbol. Save the modified file.
In step 3, issue:
loadkeys -s kdump.txt
Done. Of course, you may also assign to the key some other action of your choice. For instance, I came across a guy who assigned Tab to it because his normal Tab key was damaged. If you want to do that, do nuke CtrlL_Lock first as indicated above, then issue:
echo "keycode 58 = Tab" | loadkeys
or whatever you want to use.
This solution is ephemeral. At the next boot you will again get the CtrlL_Lock treatment. So save the keymap as you need it and load it automatically from your bash profile.
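The three steps above, plus the reload at boot, can be collapsed into one pipeline. The sketch below only demonstrates the sed substitution on a sample keymap line, so it is safe to run anywhere; the commented lines show the real, root-only usage (the \b word boundaries assume GNU sed):

```shell
# real usage on a text console (root required):
#   dumpkeys -1 | sed 's/\bCtrlL_Lock\b/VoidSymbol/g' > ~/.nocaps.map
#   loadkeys ~/.nocaps.map      # run this from your profile after each boot
# harmless demonstration of the substitution on one keymap line:
printf 'keycode 58 = CtrlL_Lock\n' | sed 's/\bCtrlL_Lock\b/VoidSymbol/g'
# prints: keycode 58 = VoidSymbol
```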
Hello, Dell? Please extend an invitation to Microsoft and others and remove the Caps Lock key from our PC keyboards.
Talkback: Discuss this article with The Answer Gang
A. N. Onymous has been writing for LG since the early days - generally by
sneaking in at night and leaving a variety of articles on the Editor's
desk. A man (woman?) of mystery, claiming no credit and hiding in
darkness... probably something to do with large amounts of treasure in an
ancient Mayan temple, and a beautiful dark-eyed woman with a snake tattoo
winding down from her left hip. Or maybe A.N. is just into privacy. In
any case, we're grateful for the contributions.
-- Editor, Linux Gazette
PostgreSQL is an enterprise-level database. It is open-source software that competes feature-for-feature with products from Oracle in several areas, so it is no surprise that more and more projects delegate important duties like data mining and data handling to PostgreSQL. What's more, the database's architectural design is extremely powerful and hews closely to the KISS principle, so PostgreSQL internal programming, as well as maintenance, is genuinely fun.
One of the projects I take care of is based on PostgreSQL and Liferay Portal, which in turn is based on Tomcat. It has a 3-tier architecture, in which Tomcat works as the web server - essentially a front end serving requests from around the world. Liferay's web pages can be (a) plain HTML, (b) JSP (Java Server Pages), or (c) programmed as servlets (portlets). The latter two scenarios require an IDE (Integrated Development Environment) with JSP, portlet, and JDBC bindings deployed. Either a JSP or a portlet contains code that fetches the actual SQL data from a database instance (for example, news_portal) and prepares a lovely HTML page showing today's weather forecast or currency rates.

However, you might want to generate the same page without the time-consuming effort of downloading and installing an IDE, programming a new servlet, and then deploying it. How? Simply execute the necessary SQL requests out back, i.e. within the operating system space where the Tomcat and PostgreSQL servers reside. You can program it in 10 minutes - in bash, Python, or any other scripting language. In my case, I generate an HTML page consisting of a thousand lines of text and push it back into Liferay's CMS database engine (news_lportal), so the HTML content of this page is displayed by Liferay itself. I also scheduled the regeneration via cron, so Liferay always shows up-to-date news, rates, etc.
There's a native client that comes with the PostgreSQL server, called psql. Although psql is a console-only application, it has essentially the same capabilities as its GUI counterpart, the GTK-based PgAdmin. If you don't have it installed on your system, run aptitude (on Debian):
# aptitude install postgresql-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done
Reading task descriptions... Done
The following NEW packages will be installed:
  postgresql-client postgresql-client-8.3{a} postgresql-client-common{a}
0 packages upgraded, 3 newly installed, 0 to remove and 6 not upgraded.
Need to get 2028kB of archives. After unpacking 5276kB will be used.
Do you want to continue? [Y/n/?]
Listing 1. Postgresql package consists of psql as well as other auxiliary utilities
It will install psql, as well as pg* utilities (pg_dump, pg_restore and others).
Surely, you can install a GUI application as well for performing complex tasks, like data analysis:
# aptitude install pgadmin3
Figure 1. PgAdmin - graphical application for handling SQL-queries
With the help of psql you can quite easily run any SQL-statement, like this:
psql -q -n -h 127.0.0.1 news_lportal -U root -c "select userid, emailaddress from user_"
where:
host to connect to - 127.0.0.1
desired database within the PostgreSQL pool - news_lportal
username granted to execute the SQL command - root
and the SQL command itself - select userid, emailaddress from user_
Alternatively, you can run psql with update operator, like this:
psql -q -n -h 127.0.0.1 news_lportal -U root -c "update journalarticle set content = '<H1>Hello, World!</H1>' where id_ = 24326"
Here, ID 24326 is my HTML document, previously created by the CMS engine on top of Liferay and stored inside the PostgreSQL database news_lportal.
In this way, you can refresh any information stored in the journalarticle table. The only thing you need to remember is the correct ID of your article.
However, in real life this update trick won't work as it should. I prepared an update script (import_table.sh) that was supposed to upload the contents of the table_news.html file into PostgreSQL.
#!/bin/sh
ct=`cat table_news.html`
psql -t -l -q -n -h 127.0.0.1 news_lportal -U root -c "update journalarticle set content = '$ct' where id_ = 24326;"
Listing 2. Very simple import script (import_table.sh), first version
Argh! It didn't work out - the PostgreSQL client refused to run the update command:
$ ./import_table.sh
./import_table.sh: line 4: /usr/bin/psql: Argument list too long
At first glance, the file table_news.html seems quite all right. But a closer look reveals the catch: the file is a bit too large - about 400 KB.
$ file table_news.html
table_news.html: UTF-8 Unicode text, with very long lines
$ cat table_news.html | wc
    617   2505 408460
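The "Argument list too long" error comes from the kernel's execve() limits, not from psql itself: the combined size of the argument list plus the environment is capped at ARG_MAX, and modern Linux kernels additionally cap the length of each single argument (commonly 128 KB, though this varies by configuration). A 400 KB SQL string in one -c argument blows past that. You can inspect the overall limit yourself:

```shell
# ARG_MAX: the kernel's limit on the combined size of argv + environment
getconf ARG_MAX
# on 2.6.x-era kernels this was often 131072 bytes (128 KB),
# comfortably smaller than our 408460-byte SQL command line
```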
Is there a mechanism to load a text file that is too big for the command line into the database? Yes! Luckily, PostgreSQL has import/export functions that ease file I/O. Let's declare our own procedure, get_text_document_portal(), that will load any text file into the database.
-- Function: get_text_document_portal(character varying)
-- DROP FUNCTION get_text_document_portal(character varying);

CREATE OR REPLACE FUNCTION get_text_document_portal(p_filename character varying)
  RETURNS text AS
$BODY$
  SELECT CAST(pg_read_file(E'liferay_import/' || $1, 0, 100000000) AS TEXT);
$BODY$
  LANGUAGE sql VOLATILE SECURITY DEFINER
  COST 100;
ALTER FUNCTION get_text_document_portal(character varying) OWNER TO postgres;
Listing 3. Our new procedure will call pg_read_file() function and read text file from disk
In order to load a text file into the database named news_lportal, I've written the script below (import_table_2.sh), which passes a filename - in this example, table.text - as the parameter to the get_text_document_portal() procedure and places its contents into the corresponding field of the journalarticle table.
#!/bin/sh
psql -q -n -h 127.0.0.1 news_lportal -U root -c "update journalarticle set content = get_text_document_portal('table.text') where id_ = 24326;"
Listing 4. Import script (import_table_2.sh) that triggers our new pgsql-procedure
All you need to do is change the source HTML file, named table.text, and run import_table_2.sh. Pay attention to the location where the imported file must be placed: the liferay_import subdirectory under the /var/lib/postgresql/8.3/main/ tree.
$ ls -l /var/lib/postgresql/8.3/main/
total 48
-rw-------  1 postgres postgres    4 Nov  9 10:20 PG_VERSION
drwx------ 10 postgres postgres 4096 Nov 10 11:16 base
drwx------  2 postgres postgres 4096 Mar  4 16:44 global
drwx------  2 postgres postgres 4096 Dec  3 18:27 liferay_import
drwx------  2 postgres postgres 4096 Nov  9 10:20 pg_clog
drwx------  4 postgres postgres 4096 Nov  9 10:20 pg_multixact
drwx------  2 postgres postgres 4096 Mar  1 13:29 pg_subtrans
drwx------  2 postgres postgres 4096 Nov  9 10:20 pg_tblspc
drwx------  2 postgres postgres 4096 Nov  9 10:20 pg_twophase
drwx------  3 postgres postgres 4096 Mar  4 12:43 pg_xlog
-rw-------  1 postgres postgres  133 Feb 11 22:09 postmaster.opts
-rw-------  1 postgres postgres   53 Feb 11 22:09 postmaster.pid
lrwxrwxrwx  1 root     root       31 Nov  9 10:20 root.crt -> /etc/postgresql-common/root.crt
lrwxrwxrwx  1 root     root       36 Nov  9 10:20 server.crt -> /etc/ssl/certs/ssl-cert-snakeoil.pem
lrwxrwxrwx  1 root     root       38 Nov  9 10:20 server.key -> /etc/ssl/private/ssl-cert-snakeoil.key
Listing 5. Owners' information for PostgreSQL disk storage pool
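For reference, the liferay_import subdirectory can be created once with 0700 permissions to match its neighbors. The sketch below uses a temporary stand-in directory so it runs without root; in production the target would be /var/lib/postgresql/8.3/main/liferay_import owned by postgres (the use of coreutils' install here is my choice, not from the article):

```shell
# stand-in for the real cluster data directory
PGDATA=$(mktemp -d)
# in production, as root:
#   install -d -o postgres -g postgres -m 700 /var/lib/postgresql/8.3/main/liferay_import
install -d -m 700 "$PGDATA/liferay_import"   # create with mode 0700 in one step
stat -c %a "$PGDATA/liferay_import"          # prints: 700
rm -r "$PGDATA"
```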
It is owned by postgres and can be written to by that user only - or by the root account. Of course, you could add an entry to root's crontab, but good practice is to split jobs between different accounts: assign database jobs to postgres alone, and entrust every other task to, for instance, the tomcat account. So how can the tomcat user write into a liferay_import directory protected by postgres-only access bits? By making a link - a symlink doesn't work, but a hard link will do!
# ln /var/lib/postgresql/8.3/main/liferay_import/table.text /home/tomcat/db/table.text
Listing 6. A hard link gets around the ownership restriction that defeats a symlink
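The trick works because a hard link is just a second directory entry for the same inode, so writes through either name land in the same file. A quick sketch in a temporary directory (no root needed):

```shell
# a hard link shares its inode with the original name
dir=$(mktemp -d)
echo "old news" > "$dir/table.text"
ln "$dir/table.text" "$dir/linked.text"            # hard link, not ln -s
stat -c %i "$dir/table.text" "$dir/linked.text"    # prints the same inode number twice
echo "fresh news" > "$dir/linked.text"             # write through the link...
cat "$dir/table.text"                              # prints: fresh news
rm -r "$dir"
```

Note that hard links only work within a single filesystem, so /home/tomcat/db and the PostgreSQL data directory must live on the same partition for this to work.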
#!/bin/sh
/home/tomcat/db/prepare_table_news.sh > /home/tomcat/db/table.text
/home/tomcat/db/import_table_2.sh
Listing 7. Script (mk_db.sh), that prepares arbitrary HTML-document and loads it into database
Hooray! Now I can place an entry in tomcat's crontab and have the news information updated every hour - all from the tomcat account. Really nice.
$ crontab -l
# m h dom mon dow command
0 * * * * /home/tomcat/db/mk_db.sh > /dev/null
Listing 8. One entry in tomcat's crontab that should be executed every hour in order to update news
There are different approaches to providing up-to-date information when you deal with Liferay Portal and portlet technology. The first requires a dedicated, preinstalled developer environment (NetBeans IDE with portlet bindings), while the other needs only basic shell-scripting knowledge and the ability to construct correct SQL queries. Of course, the better way is a good full-time developer with hands-on IDE experience and knowledge of the JSR 168 / JSR 286 portlet standards, who would program whatever web application you need, especially an HTML page with dynamically changing information. However, you can achieve the same results much more quickly by simply relying on the usual Linux console tools.
[1] http://www.postgresql.org/
[2] "Practical PostgreSQL" - Joshua D. Drake, John C. Worsley, O'Reilly Media
Talkback: Discuss this article with The Answer Gang
Anton jumped into the Linux world in 1997, when he first tried a tiny muLinux distribution run from a single floppy. Later on, Red Hat and Slackware became his favorites. Nowadays, Anton designs Linux-oriented applications and middleware, and prefers to work with hardware labeled "Powered by Linux".
By Silas Brown
Encyclopaedia Britannica is a commercial encyclopedia and is therefore non-free, but old versions of the CDs and DVDs can often be purchased cheaply on second-hand markets. Of particular interest are the 2004 editions, as the publisher has made available an unofficial Linux support script for them (the Windows installer does not yet work on WINE). If you follow the instructions, you can expect to see the basic encyclopedia articles and images, but you might have a more difficult time with the Webster dictionary entries and some of the multimedia.
The instructions tell you to download version 1.3.1 of the Java Runtime Environment, and the script refuses to run on any other version. Nowadays we're up to at least version 1.6, and it can be difficult to find the old 1.3.1 version. However, if you edit their script and comment out the version check (at the time of writing it's on line 112 of linux-launch2.0.pl), you should find that most things work on newer versions.
You also need to set the JAVA_HOME environment variable. If you're not sure, try setting it to /usr. I added $ENV{JAVA_HOME} = "/usr"; to the linux-launch2.0.pl file itself, and also deleted the "Enter location of the Britannica Software" question and replaced it with a hard-coded location so I can launch it more quickly.
Unfortunately the preferences dialog doesn't work, and manually editing the preferences file has limited success when it comes to adjusting the font size. It may seem you're stuck with the default small-sized fonts, and you have to copy and paste the text elsewhere if you want to read it larger. However, if all else fails, you can use VNC for magnification. To set this up:
sudo apt-get install x11vnc xtightvncviewer vnc4server
vncpasswd  # give yourself a password
cat > .vnc/xstartup <<EOF
#!/bin/bash
x11vnc -display :1 -rfbport 5902 -forever -scale 2:nb &
xsetroot -solid grey
exec $HOME/britannica/linux-launch2.0.pl
EOF
chmod +x .vnc/xstartup
and then to run it:
vncserver :1 -geometry 700x560 -depth 16  # about the minimum dimensions for EB
sleep 2  # allow x11vnc to start
xtightvncviewer :2 -passwd $HOME/.vnc/passwd -geometry 1275x720  # (adjust this for your desktop)
killall Xvnc4
xtightvncviewer tends to have better display-update logic than straight xvncviewer.
One disadvantage of this approach is that you can't copy text from articles and paste them into other applications that are outside of the VNC server. This appears to be a limitation of x11vnc. If you want to copy text then you'll have to run Encyclopaedia Britannica without magnification.
There has been much debate about how Britannica compares with Wikipedia, and I won't go into that too much. Suffice to say that you can't expect any encyclopedia to be absolutely perfect; after all they are all human creations, just like software, so you will need to keep your head about you when using any of them. Wikipedia tends to be more up-to-date and covers some areas that Britannica does not, but Britannica tends to give you more of that "finished product" feeling: you won't spend your time sifting through work in progress, or finding articles that look more like the fallout of a heated debate than something that belongs in the quiet back room of a traditional reference library. If you sometimes get tired of the noisier wiki atmosphere[clarification needed] then you might want to add Britannica to your reading at times.
Talkback: Discuss this article with The Answer Gang
Silas Brown is a legally blind computer scientist based in Cambridge UK. He has been using heavily-customised versions of Debian Linux since 1999.
This is a very full month - a Red Hat/MeeGo/Synergy month.
The first week is dominated by the combined Red Hat Summit and co-located JBoss World in Boston. The joint venue is billed as "...open source events that bring together engineers, business decision makers, developers and community enthusiasts to learn about the latest in open source cloud, virtualization, platform, middleware, and management technologies." Right. As does every major conference.
But this event has the Red Hat imprimatur and a very long history with the Linux Community, making it a Gold Standard Linux conference.
At the Red Hat Summit, you can learn about open source advancements through:
Technical and business seminars
Hands-on labs and demos
Customer case studies
Networking opportunities
Keynotes by Open Source leaders and Red Hat partners
Direct collaboration with Red Hat and JBoss engineers
It's this last item - talking with the engineers - that can justify a trip to Boston.
Among the keynote speakers, alongside executives from Cisco, HP, IBM, and Intel, will be Jeremy Gutsche, the founder of TrendHunter.com and an author of thousands of postings and articles on viral trends and innovation. He has been described as "a new breed of trend spotter" by The Guardian, "an eagle eye" by Global TV, an "Oracle" by the Globe and Mail, and "on the forefront of cool" by MTV.
Also, there is Red Hat's Cloud Evangelist and author of the Pervasive Datacenter blog on CNET, Gordon Haff. He has been speaking with enterprise architects and CIOs about their aspirations for cloud computing, how they're approaching it, and areas of potential concern. Gordon will be sharing these experiences in his session "Trends in Cloud Computing: What You Need to Know." In addition, he'll be moderating the panel discussion "Real World Perspectives Panel: Cloud" and he will also participate in the Expert Forum panel where he and other experts will answer questions about cloud computing.
As a preview, here are the 2010 presentations, organized by tracks:
http://www.redhat.com/promo/summit/2010/presentations/
Early Bird rates ($1295) have passed; registering the week of the conference costs $1595. That's high. Alumni rates for prior attendees are $400 less in either category, which is a fairer rate for a 3-day event. You can register for Red Hat Summit and JBoss World until May 2.
If you work with Google technologies, especially the Android OS, then Google I/O is the place to be. But only alumni from the last 2 years were able to register in advance, and only a lucky few reached the registration site within the first 30-40 minutes. I missed it too....
But there was a silver lining: the sell-out at Google I/O led to sold-out attendance at the recent AnDevCon in San Mateo and at the Android Builders Conference co-located with the recent Embedded Linux Conference. Those were good events and should be noted for next year. Try following some Google I/O alumni on Twitter if you want a jump on the 2012 registration.
Here is a link to content from Google IO 2010:
http://www.google.com/events/io/2010/sessions.html
Open Source has become almost common at mainline businesses. But OSBC brings long term leaders of Open Source businesses together to discuss new trends and developments. If you or your company is planning to jump into the Open Source world, this is an event to attend.
Admittedly, the pronouncements and trends discussed have been less stunning than in the first years of this conference. But as shown by the NewsBytes story on the non-compliance of mobile apps, a lot of developers don't do Open Source correctly. Expect this and larger issues, like the Oracle-Google conflict, to be addressed.
The OSBC agenda sports many panels over its 2 day run. These are often more interesting than the prepared presentations. Last year, a panel with Microsoft executives had them talking about improving their relationship with the Open Source community and pledging quick support for HTML 5.
Bob Sutor of IBM gave a keynote called "Asking the Hard Questions about Open Source Software" at OSBC 2010. It distilled the wisdom he and other IBMers have acquired over a decade of Linux and Open Source contributions. He concluded that open source software is still software - it needs to be measured and compared against conventional software in the enterprise. There should be no free pass due to ideology or community affiliation.
Here's a link to a PDF version of that talk, aimed at developers and enterprise IT staff: http://www.sutor.com/d/Sutor-OSBC-2010.pdf
One of the best talks I experienced was given by Matt Aslett, an analyst with The 451 Group: "The Evolution of Open Source Business Strategies". It focused on the relations between the developer community and supporting companies in varying business models. Aslett discussed the evolution of those models and showed a tendency both to open the underlying platforms further and to move to a mix of open and proprietary licensing. In the end, he argued that there was no single model for open source businesses and that the future remained, literally, "mixed". For more on his point of view, visit his blog at The 451 Group, http://blogs.the451group.com/opensource/author/maslett/ .
This year, OSBC moves to the Hilton near Union Square from the slightly grander Palace Hotel. This could be a sign that the event has expanded. Let's hope the quality remains the same.
This is the second MeeGo conference and is sponsored by Intel, Nokia, and the Linux Foundation.
Created by merging Moblin and Maemo, this distribution is designed for smartphones, netbooks, entry-level desktop computers, and in-vehicle entertainment systems. It's been anointed by Intel and the Linux Foundation, but Android seems to be getting more developer attention so far.
MeeGo netbooks and tablets will be coming out shortly, so this may be the moment to ride an up-and-coming MeeGo wave. Jim Zemlin of the Linux Foundation thinks so.
We didn't attend last year, but judging from the quality of the sessions at the Linux Foundation's Embedded Linux Conference this year, which included some MeeGo sessions, and at the Android Builders Conference in April, we think this is a good bet for Linux developers and for OEMs of portable and embedded devices. And the price is right: free.
Register at http://sf2011.meego.com/ .
This is the Xen conference, and Citrix, having bought XenSource, stands to Xen roughly as EMC does to VMware. It's a good event for the Xen faithful, but Citrix is increasingly commercializing its Xen offerings, and the spreading adoption of KVM for hosting guest OSes is putting Xen out of fashion. Still, this remains a solid technology and business conference with good sessions and decent grub. If you are using XenServer or other Xen technologies, or if you use the popular NetScaler appliances, this is the place to be.
Arguably, Citrix has had success positioning XenServer and its management tools as a safe niche between VMware and Microsoft's Hyper-V. The underlying hypervisor technology in both Xen and Hyper-V comes out of the same Cambridge virtualization research, and the Citrix tools integrate exceedingly well with Microsoft servers. So the slideware and market-tecture lean a bit toward Redmond, and many of the Expo businesses are Microsoft Partners.
More than 75 breakout sessions are planned for the 2011 Synergy. Here is the current list:
http://www.citrixsynergy.com/sanfrancisco/learning/breakoutsessions.html
There are also new self-paced labs and instructor-directed labs and many whiteboard sessions with leading Citrix engineers.
Those self-paced Learning Labs will be held more than half a mile away at the Hilton San Francisco Union Square. Just find an available workstation, select your topic from the menu and get set for a 90-minute lab experience. There will be 200 seats and computers are provided. And no reservations are required for these labs.
Sneak peek: SYN203 Managing VM networking across the datacenter with XenServer
http://www.citrix.com/tv/#videos/3781
This session will give a glimpse into distributed virtual switching (DVS) in Citrix XenServer and will describe best practices for using DVS to deliver more secure, measurable, and reliable network services for virtualized applications.
And here are additional video sessions from Synergy San Francisco 2010:
http://www.citrix.com/tv/#search/synergy+SF
Unfortunately, aside from those edited session videos, there is no archive of materials from previous Synergy conferences - not for previous attendees or the public. You had to be there to get the Synergy content.
The gala Synergy party on Thursday evening, May 26, will wrap up the conference with entertainment by Train, the Grammy award-winning group whose aptly titled album "Save Me San Francisco" has since gone gold.
This could be a lead-up to a sweet Memorial Day Weekend in the San Francisco Bay Area.
Summary list:
Red Hat Summit and JBoss World
May 3-6, 2011, Boston, MA
Google I/O 2011
May 10-11, Moscone West, San Francisco, CA
(Sorry - this conference filled up in about an hour - we missed it too!)
OSBC 2011 - Open Source Business Conference
May 16-17, Hilton Union Square, San Francisco, CA
http://www.osbc.com
MeeGo 2011
May 23-25, Hyatt Regency, San Francisco
http://sf2011.meego.com/
Citrix Synergy 2011
May 25-27, Convention Center, San Francisco, CA
http://www.citrixsynergy.com/sanfrancisco/
SemTech is the world's largest, most authoritative conference on semantic technology. This is the eighth year of the conference and the second year it is being held at a San Francisco hotel - in this case, the Hilton Union Square. About 1000 attendees are expected at SemTech 2011, and for the first time there will be a SemTech London at the end of September and a SemTech East in Washington at the end of November.
One new theme for this 8th year of SemTech is the Future of Media. In the session "The paradox of content liberation: More structure means more freedom", Rachel Lovinger will discuss how applying greater structure to content via semantic metadata allows it to be more widely re-used, referenced, and adapted for different purposes. Semantic metadata is what runs the Web now, extending and sharpening searches, enhancing e-commerce, and developing social context.
If you can fit it into your schedule, this one is worth attending.
The goal of the Ottawa Linux Symposium is to provide a vendor-neutral and open environment to bring together Linux developers, enthusiasts, and systems administrators, as well as to strengthen the personal connections within the Linux Community. This conference has been running for the last 13 years, so they must be doing several things right. This year the main keynote is presented by the irascible Jon "Maddog" Hall, a long-time advocate for the open source community.
OLS usually opens with an update on the current state of the Linux kernel and the key events and contributions of the past 12 months. These presentations were first done by Jonathan Corbet of LWN.net - who also runs a kernel panel at the annual Linux Collaboration Summit - but this year the OLS kernel session will be given by Jon Masters, an author and the producer of the Kernel Podcast.
Costs are fairly low, as the event is hosted by the University of Ottawa: the full Enthusiast/Small Business rate is only $500 and the full Corporate rate only $800 for 3 days. This is the first increase in the 13 years OLS has been running.
OLS has set up a travel fund to help people attend who would otherwise not be able to in the current economic climate. They are taking contributions via PayPal at the Community Travel Assistance Fund page: http://www.linuxsymposium.org/2011/travel_assistance.php
They also have an active following on Twitter and run contests like the May 16th-20th raffle to win admission at the heavily discounted student rate of $200.
Here's the link for OLS list of presentations and tutorials:
http://www.linuxsymposium.org/2011/speakers.php?types=talk,tutorial
The AWS Summit is actually held on 2 different days: June 10th in New York, and June 14th in both San Francisco and London. These regional full-day conferences will provide information on Amazon's Cloud offerings and feature a keynote address by Amazon.com CTO Werner Vogels, customer presentations, how-to sessions, and tracks designed for new users.
Among the reasons to attend:
-- Gain a deeper understanding of Amazon Web Services, including best practices for developing, architecting, and securing applications in the Cloud.
and
-- Learn how Solutions Providers from the AWS community have helped businesses launch applications in the Cloud, utilizing enterprise software, SaaS tools, and more.
These events are free if you register early, and there will be lunch.
This event focuses on the convergence of communications and emphasizes the impact of new technology and open source software on digital networks. It's a fairly young event but has been growing steadily, even through the Great Recession.
The conference has moved this year from its traditional early Spring time to the beginning of Summer, June 27-29. There was some risk earlier this year that it might be postponed but the main organizer, Lee Dryburgh, is fully committed to making it happen in the new time slot.
In 2008, eComm was billed as the communications industry rethink. "It tries to be like a TED conference for the telecom sector," said Dryburgh. It certainly remains the place to examine the varied strands of digital communications.
You help shape the event by contributing to the on-going and open topic discussion document - no registration required - at http://bit.ly/fT9L7y .
Here's a slide share link to earlier presentations at eComm:
http://www.slideshare.net/search/slideshow?type=presentations&q=ecomm
I have always found interesting speakers and attendees at eComm and the evenings are set up like mixers to encourage interaction. The food isn't bad either. So come if you can.
The event list:
Semantic Technology Conference
June 5-9, 2011, San Francisco, CA
http://www.semanticweb.com
Amazon Web Services Summit 2011
10 June, New York, USA
14 June, London, UK
14 June, San Francisco, USA
http://aws.amazon.com/about-aws/aws-summit-2011/
Ottawa Linux Symposium (OLS)
Jun 13 – 15 2011, Ottawa, Canada
http://www.linuxsymposium.org/2011/
eComm - Emerging Communications Conference
June 27-29, 2011, Airport Marriott, San Francisco, CA
http://america.ecomm.ec/2011/
Talkback: Discuss this article with The Answer Gang
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
Howard maintains the Technology-Events blog at
blogspot.com from which he contributes the Events listing for Linux
Gazette. Visit the blog to preview some of the next month's NewsBytes
Events.
Let's get the disclaimers out of the way quickly.
These are my impressions. I do not have a special pipeline to FreeBSD gurus.
I started using FreeBSD in January 2010, a bit more than a year ago. I was not a total stranger to FreeBSD, having done a little on it a few years earlier. But I have not used FreeBSD anywhere near as long as I've used Linux.
There are 3 main BSDs and a handful of minor variations. The main ones are FreeBSD, OpenBSD, NetBSD. I can't tell you how they differ. Possibly, they are the analogues of Red Hat, Debian and SUSE. Of the 3, FreeBSD has the lion's share of users.
NetBSD and FreeBSD began with the same initial code base around 1993; OpenBSD is a later fork from NetBSD. To varying degrees, code from FreeBSD's precursor is used in Microsoft Windows, Darwin (Mac OS X), and Solaris.
What does it feel like to use FreeBSD on a daily basis? The short answer is that it's a lot like Linux. Or any other Unix, for that matter.
Readers of Linux Gazette may be aware of my HAL (Henry Abstraction Layer); that I use HAL to hide the differences between different platforms. But, in fact, I needed to make very few changes to my HAL to adapt to FreeBSD.
Linux users might notice that some of the standard commands don't have exactly the same options. FreeBSD ls is not GNU ls. For instance, they have conflicting interpretations of the "-T" option. In FreeBSD, "-T" produces complete time information, but not as detailed as "--full-time".
However, FreeBSD includes GNU ls as gls (/usr/local/bin/gls).
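You can see the GNU-style timestamps directly. The sketch below uses a throwaway file of my own invention (demo.txt); on FreeBSD you would run the same listing through gls, since plain ls there only offers the shorter -T form:

```shell
# GNU ls: --full-time prints the complete timestamp (date, seconds,
# timezone) in a long listing. On FreeBSD, substitute "gls" for "ls".
touch demo.txt                # throwaway file, just something to list
ls -l --full-time demo.txt    # GNU ls; on FreeBSD: gls -l --full-time
rm demo.txt
```

The point is not the file, but that the same coreutils flags work wherever the GNU version of the command lives.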
Further, if you really want something Linux, FreeBSD includes Linux binary compatibility which "allows FreeBSD users to run about 90% of all Linux applications without modification" (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/linuxemu.html).
So on top of gls, you can also get /usr/compat/linux/bin/ls. (That's 3 versions of ls for the price of 1!)
What's the difference?
$ /usr/compat/linux/bin/ls --version
ls (GNU coreutils) 6.12
Copyright (C) 2008 Free Software Foundation, Inc.
...
$ gls --version
ls (GNU coreutils) 7.5
Copyright (C) 2009 Free Software Foundation, Inc.
...
That's probably more than you ever wanted to know about ls (or any other command).
As a last resort, if you have a Linux program, you can try to run it under FreeBSD. There's a good chance that it will just work. And it's not an emulation. It runs at pretty much native speed.
FreeBSD comes without bash. In fairness, FreeBSD's /bin/sh is vastly more than the original Bourne shell - but it's not bash.
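The difference shows up quickly in practice. FreeBSD's /bin/sh (much like dash on many Linux boxes) rejects bash-only constructs such as arrays; a quick probe, runnable on any system that has both shells:

```shell
# bash understands array syntax; a plain POSIX /bin/sh does not.
bash -c 'a=(one two three); echo "${a[1]}"'      # prints "two"
sh -c 'a=(one two three); echo "${a[1]}"' 2>/dev/null \
    || echo "sh: arrays not supported"
```

Scripts that need the first behaviour are exactly the ones that get the #!/bin/bash first line.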
Standard stuff that comes with FreeBSD can be found in /bin and /usr/bin. Alien software usually goes into /usr/local/bin. You can install bash and it will wind up in /usr/local/bin. Most of my scripts are Bourne shell; their first line consists of:
#! /bin/sh
However, I do have a few bash scripts; these tend to have a first line:
#! /bin/bash
I guess I could have found another solution, but it seemed simplest to make /bin/bash a symlink to /usr/local/bin/bash. Some people might raise their eyebrows at this, but my system, my rules.
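On the real system the link is one command as root: ln -s /usr/local/bin/bash /bin/bash. The sketch below rehearses it in a scratch directory (with a hypothetical stand-in file for the ports-installed bash) so you can try the mechanics without root:

```shell
# Rehearsal in a scratch tree; no root needed. On the real system:
#   ln -s /usr/local/bin/bash /bin/bash
mkdir -p scratch/usr/local/bin scratch/bin
printf '#!/bin/sh\n' > scratch/usr/local/bin/bash   # stand-in for the ports bash
ln -s ../usr/local/bin/bash scratch/bin/bash        # relative, so it resolves in scratch
ls -l scratch/bin/bash
```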
su is a bit different. FreeBSD has truss (similar to Solaris) rather than strace.
The sheer number of packages. FreeBSD has an astonishing number of packages. Looking at my system, I estimate over 22,000 available packages (I haven't installed them all!). This number is probably an overestimate, because some of these exist to support different languages (Arabic, Chinese, French, German, Hebrew, Hungarian, Japanese, Polish, Portuguese, Russian, Ukrainian, Vietnamese). But even without the languages, there were more than 21,800 packages.
I may not be interested in any of these languages since I am most comfortable in English. But I suspect that there are a lot of people in the world who would be quite grateful to be able to use their native tongue. This sort of thing only becomes an issue when one's preferred language is omitted.
There is heaps of documentation (are heaps?). And the documentation is pretty good.
I like the idea that there is a Master of the FreeBSD Universe. Consequently, no one can change a standard or the framework. But, individually, a user can still make any conceivable modification - because, ultimately, you are encouraged to build anything and everything for yourself.
The result is that you never get dynamic library conflicts. (I did once - it's a long story and an unusual scenario).
There is an overwhelming sense of quality, an expectation that things will just work. Whenever something didn't work for me, I assumed that it was simply a matter of my ignorance. Although this is usually a safe bet, it nonetheless speaks volumes about the quality of FreeBSD.
There are some exceptions at the margins.
Here is something I noticed on the Internet:
"... being a long time Linux person, I'm sad to say that FreeBSD ports/packages is really confusing to me."
I agree. I finally have a handle on packages. I'm still confused about ports. I'd like some tutorials or HowTos on how one is meant to manage ports. There is a plethora of similarly named tools. Which one(s) should I use? What's best practice?
At its simplest, the package system - or more specifically, pkg_add - is a lot like an inferior yum. It's like "yum install" without "yum list": it takes care of the installation, but it's an act of faith as to how much will be downloaded. Until recently, this was an ISP issue for me. It's still an issue for another reason: if I install openoffice blindly, I might end up filling a disk. "yum list" gives me some idea of what's involved *before* I commit.
There's another annoyance. Let's say you want to run Firefox. Of course you do. No problem. You can't go to the Firefox website and simply download the latest Firefox; the choices there are "Windows, Mac OS X, Linux", in many languages. Instead, you download it as a package. It's a big download, but, like yum, pkg_add takes care of dependencies.
Good, you're happy. Your Firefox works like a bought one, straight out of the box. Until you go to, say, YouTube. Or any site that needs extensions. Let me be a little more precise: until you need some sort of plugin that executes proprietary binary code. Who makes Flash? Adobe. Is the source freely available? No, sir. Clearly a solution is needed. The solution is rather convoluted:
(From http://www.freebsd.org/doc/handbook/desktop-browsers.html)
6.2.3 Firefox and Macromedia® Flash Plugin
Macromedia® Flash plugin is not available for FreeBSD. However, a software layer (wrapper) for running the Linux version of the plugin exists. This wrapper also supports Adobe® Acrobat® plugin, RealPlayer® plugin and more.
Now before you start laughing, let me say that I encountered a similar problem when I ran a 64-bit version of Firefox on CentOS at work (on a 64-bit machine). Firefox worked, but I couldn't get any plugins to work because there was no 64-bit Flash plugin. I don't understand why it couldn't run a 32-bit plugin, but there it is.
At least I have been able to get Flash working under FreeBSD. I can see video and hear audio. It was a complicated install, but now it's done.
Don't expect FreeBSD to be like Microsoft Windows or even Ubuntu. You are expected to know what you are doing. If you don't, you can ask for help. But you will not be mollycoddled the way Microsoft or Apple do it. Personally, this is how I prefer it. I like my platforms a little user-antagonistic.
By default, FreeBSD uses ufs which is not at all like ext2 or ext3. And it talks of slices rather than partitions. So to install FreeBSD, you use something (eg fdisk) to partition the disk. Whichever partition you allocate to FreeBSD, it will slice up that partition into several bits. So, for example, I allocated partition 3 of the second drive (/dev/ad1s3) to FreeBSD:
   Device Boot   Start     End    Blocks  Id  System
/dev/ad1s3   *   10608   25846   7680456  a5  FreeBSD
(I've omitted the other partitions).
Here's the relevant bits of a df:
Filesystem    Size  Used  Avail  Capacity  Mounted
/dev/ad1s3a   372M  268M    74M       78%  /
/dev/ad1s3e   310M  117M   168M       41%  /tmp
/dev/ad1s3f   4.9G  3.5G   1.0G       78%  /usr
/dev/ad1s3d   558M   51M   462M       10%  /var
FreeBSD uses the term "slice". It has taken the partition I gave it (/dev/ad1s3) and sliced it up (/dev/ad1s3a, /dev/ad1s3b, etc).
But the good news is that FreeBSD understands ext2. So, when I ran out of room in my FreeBSD partition, I simply mounted some Linux partitions. Further, FreeBSD not only understands NFS, I think it wrote the original book on it. So I am able to run both my Linux machine and my FreeBSD machine and mount drives from either on the other and just be on whichever machine without being too conscious of any differences.
I don't know the relative merits of ufs and ext2. It also looks like FreeBSD understands all of ext2/ext3/ext4. I wonder why I'm using ext2 and not ext3.
Good question. Difficult question.
I was trying to solve a problem. Maybe several problems.
My first problem is that I run an idiosyncratic environment. Why?
Imagine you have a car. I guess that's not a difficult exercise for most people. Let's accept, for the purposes of this exercise, that you are not a rev-head. You are not particularly interested in cars per se. You're a lot like my wife: you want to get into your car and drive to your destination. The only choice you want is the colour of the Duco.
So, after a while it's time to replace your car. But the latest model cars don't have the same controls as your previous car had. Steering wheel? No, we don't have one of them any more. We've got this new gadget. You sit on it; and when you want to steer, you sort of wriggle around. Sounds odd, but you'll soon get used to it. It's not actually better than a steering wheel, but it's the latest must-have. Everybody wants one. They were all getting bored with steering wheels. Brakes? Yes, we have a new way of stopping. We stick a gizmo in your ear and ...
That's what new releases of distros feel like to me. I went to "upgrade" from Fedora 5 to 10. I had had no problems going from 2 to 5. But 10 was another matter altogether. Just out of the blue, the names of disks changed. My Fedora 5 (which I'm still running) refers to /dev/hda. Fedora 10 decided to call it /dev/sda. Perhaps I'm naive, but I don't expect gratuitous changes like that from FreeBSD.
Every new rev of emacs introduces incompatible changes.
I don't turn on a computer so that I can learn new tricks. Not those sorts of new tricks. One of my computers has to be bread and butter, unchanging from day to day. I don't want to spend my time learning somebody's idea of a better editor. It's an editor! I just want the one I was using yesterday. Not better, not worse, not different. I DO NOT NOT NOT WANT ANY NEW FEATURES. If you want to do that, here's an idea: deliver emacs-classic and new-emacs. Put all your changes into new-emacs and leave emacs-classic as it is. Coke, anyone?
Every time they make it better, they make it worse. And that goes for Solitaire and aspell and ... the list goes on.
And these gratuitous changes come at a price. I used to run Slackware on a 100 MHz Pentium 1 with 16MB of memory. I was trying to install Fedora 10 on a spare machine and getting intermittent problems. After a while, they went away. It wasn't till much later that I twigged. I happened to notice in some documentation a declaration that Fedora would no longer run in 256MB; it needed a minimum of 512MB. While fiddling with my machine, I'd added some extra memory from another (dead) machine.
I decided I didn't like the notion of releases. Especially since Fedora is committed to producing a new release every year. That wouldn't be a disaster if there were any respect for backward compatibility. But there's not. There's the tacit assumption that the newest and latest will necessarily be the best. For everyone.
I asked around. Gentoo and Arch Linux were suggested as solutions to my problems. I was given to understand that they are more into the idea of allowing the system to update organically: if you need a later version of application X, just go get it. They had moved away from the idea of discrete releases. Sure, if you wanted to switch sects, you'd get the latest release, but you would not be penalised for running an old one. Maybe I misunderstood.
I tried Arch but found it difficult to get information. The idea is possibly ok, but I really felt isolated.
LG readers may be aware of my History files, records of what I do on the machines I administer. Here's a note from the day I installed FreeBSD:
The documentation is excellent, by the way.
And that probably says it all. I dare say I encountered the same problems with FreeBSD that I did with Arch. But, with FreeBSD I was able to get answers.
I hope to find the source of a really old rev of emacs and build it. And keep it forever. Because rolling your own is encouraged with FreeBSD.
That's not to say I've abandoned Linux altogether. Recently I had power-supply problems with this machine. After I had replaced the power supply, the machine wouldn't boot. I tried running live FreeBSD off a CD, but could not get an environment which would allow me to examine the disks.
I booted a live Fedora and soon had things under control.
I've got KVM/QEMU working under FreeBSD. My next step might be to take my Fedora 5 machine and virtualise it. I've already got a virtual XP (which I've not returned to after proving I could do it), an Arch, a later rev of FreeBSD. And the first Linux I ever ran, a Slackware from 1995.
FreeBSD is comfortable with ext2/ext3/ext4, but my Fedora 10 was not as comfortable with ufs. When everything is running fine, these things may be much of a muchness. When it comes to recovery, be careful what you wish for. I could get access to some of my ufs slices from Fedora 10 - well, actually, only one - but not the rest. It's not normally a consideration, but perhaps it ought to be.
A similar argument could, perhaps, be applied to other less common or more refined file systems: Reiser, xfs, zfs, LVM. In fact, some time back, I had similar problems with LVM.
No platform is perfect. The best that I can hope for is a platform which is a reasonably good fit, particularly in the style in which it operates. I had come to the conclusion that Fedora was getting too concerned with look and feel and had consigned functionality to the status of second-class citizen.
I think FreeBSD provides me with a comfortable environment for keyboard, video, mouse and audio. For anything else, there are virtual machines.
http://en.wikipedia.org/wiki/Bsd
http://en.wikipedia.org/wiki/FreeBSD
Talkback: Discuss this article with The Answer Gang
Henry has spent his days working with computers, mostly for computer manufacturers or software developers. His early computer experience includes relics such as punch cards, paper tape and mag tape. It is his darkest secret that he has been paid to do the sorts of things he would have paid money to be allowed to do. Just don't tell any of his employers.
He has used Linux as his personal home desktop since the family got its first PC in 1996. Back then, when the family shared the one PC, it was a dual-boot Windows/Slackware setup. Now that each member has his/her own computer, Henry somehow survives in a purely Linux world.
He lives in a suburb of Melbourne, Australia.
These images are scaled down to minimize horizontal scrolling.
Flash problems? All HelpDex cartoons are at Shane's web site,
www.shanecollinge.com.
Talkback: Discuss this article with The Answer Gang
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in his brightly-coloured underwear fighting criminals. During the
day... well, he just runs around in his brightly-coloured underwear. He
eats when he's hungry and sleeps when he's sleepy.
More XKCD cartoons can be found here.
Talkback: Discuss this article with The Answer Gang
I'm just this guy, you know? I'm a CNU graduate with a degree in physics. Before starting xkcd, I worked on robots at NASA's Langley Research Center in Virginia. As of June 2007 I live in Massachusetts. In my spare time I climb things, open strange doors, and go to goth clubs dressed as a frat guy so I can stand around and look terribly uncomfortable. At frat parties I do the same thing, but the other way around.
Ben Okopnik [ben at linuxgazette.net]
From a conversation I had yesterday with a man in his 80s that I met on the street; all this in a slow, unhurried Southern accent.
I used to work f' the Forestry Service, y'know. A man there - he must be dead b'now - tole me once: "Never hire a man t'do a job o'work for you who eats salt herrin' for breakfast, rolls his own cig'rettes, er wears a straw hat. Reason bein', he's either chasin' his hat down th' road, rollin' a cig'rette, er lookin' fer a drink o'water!"
Mark Twain would have felt right at home.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Thread continues here (3 messages/4.78kB) ]
Ben Okopnik [ben at linuxgazette.net]
On Wed, Mar 09, 2011 at 02:30:54PM -0800, Mike Orr wrote:
> Wow, a second one in the same day, this one with more details.
You must be famous or something. Can I have your autograph?
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
[ Thread continues here (2 messages/2.45kB) ]