Linux User & Developer Magazine Issue 160
£5,400 WORTH OF POSTGRESQL TRAINING www.linuxuser.co.uk
THE ESSENTIAL MAGAZINE FOR THE GNU GENERATION
REMOTE ACCESS
WIN
£750 OF RAS PI PRIZES
WAYS TO MASTER PI ZERO, PLUS A+, B+ AND 2B
PRO PYTHON TOOLS
GET PI ZERO ONLINE
SPEED UP YOUR LINUX PC
How to make your computer lightning fast
HOW TO USE GPIOS
SECRETS OF PULSEAUDIO
GRAPHICS IN MONOGAME
SPRINGS.IO CONTAINERS
Discover the power of the sound server
Make games for multiple platforms
Manage scalable cloud servers
“WE CAN CHANGE THE CONFIGURATION OF THE PI” element14 enables customers to redesign the Raspberry Pi board
ALSO IN Manage projects with Git version control Hack lights with the Energenie Pi-mote IR Systems programming: Files & information CamJam EduKit 3 & UPS PIco range reviewed
[email protected] 01202 586257 Production Editor Rebecca Richards Designer Sam Ribbits Photographer James Sheppard Senior Art Editor Andy Downes Editor in Chief Dan Hutchinson Publishing Director Aaron Asadi Head of Design Ross Andrews
This issue
Contributors Dan Aldred, Joe Bernard, Christian Cawley, Jo Cole, Liam Fraser, Gareth Halfacree, Tam Hanna, Richard Hillesley, Jon Masters, Swayam Prakasha, Richard Smedley, Nitish Tiwari, Alexander Tolstoy, Mihalis Tsoukalos
Advertising Digital or printed media packs are available on request. Head of Sales Hang Deretz 01202 586442 [email protected] Sales Executive Luke Biddiscombe 01202 586431 [email protected]
FileSilo.co.uk Assets and resource files for this magazine can now be found on this website. Support [email protected]
International Linux User & Developer is available for licensing. Head of International Licensing Cathy Blackman +44 (0) 1202 586401 [email protected]
Circulation Head of Circulation Darren Pearce 01202 586200
Production Production Director Jane Hawkins 01202 586200
Finance
Finance Director Marco Peroni
Founder
Group Managing Director Damian Butt
Want it sooner? Subscribe today! Look for issue 161 on 14 January
Printing & Distribution Printed by William Gibbons, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT Distributed in the UK, Eire & the Rest of the World by: Marketforce, 5 Churchill Place, Canary Wharf London, E14 5HU 0203 148 3300 www.marketforce.co.uk Distributed in Australia by: Network Services (a division of Bauer Media Group) Level 21 Civic Tower, 66-68 Goulburn Street Sydney, New South Wales 2000, Australia +61 2 8667 5288
Disclaimer The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Imagine Publishing Ltd. Nothing in this magazine may be reproduced in whole or part without the written permission of the publisher. All copyrights are recognised and used specifically for the purpose of criticism and review. Although the magazine has endeavoured to ensure all information is correct at time of print, prices and availability may change. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to Imagine Publishing via post, email, social network or any other means, you automatically grant Imagine Publishing an irrevocable, perpetual, royalty-free license to use the material across its entire portfolio, in print, online and digital, and to deliver the material to existing and future clients, including but not limited to international licensees for reproduction in international, licensed editions of Imagine products. Any material you submit is sent at your risk and, although every care is taken, neither Imagine Publishing nor its employees, agents or subcontractors shall be liable for the loss or damage.
» Make your computer faster
» Discover the Raspberry Pi Zero
» Get more from any Raspberry Pi model
» Win £5,400 worth of PostgreSQL training

Welcome to the latest issue of Linux User & Developer, the UK and America’s favourite Linux and open source magazine. It’s not even been four years since the release of the original Raspberry Pi Model B and the Raspberry Pi Foundation has already one-upped itself. Not content with releasing a computer for $35, it has launched a brand new model that costs just $5 – the Raspberry Pi Zero. It’s an amazing feat that will see far more educational institutions than ever – all over the world – finally able to afford entire classroom sets for their students, and it takes the Foundation’s educational mission into a whole new orbit.

It’s a lot smaller, and a little limited in terms of the available connections, but it’s actually more powerful than the original Model B was on its release. You can find out exactly what’s changed with the board on pages 58-59. And if you’ve already got one, you’ll be pleased to know that all 50 of our masterclass tips apply to your new Pi Zero – there’s also a guide to getting the new board online. Turn to page 60 to get cracking.

We also spent some time figuring out the best ways to make your Linux system much faster. From diagnostic tools through distro tweaks to lightweight FOSS, you can find our best optimisations starting over on page 20. Enjoy the issue!

Gavin Thomas, Editor
20 Speed Up Linux
Optimisation tips for the best performance

36 Make data engaging with Chart.js
Create HTML5 canvas-based visualisations

40 Render a 3D object in MonoGame
Take your C# and MonoDevelop skills to the next level by learning to program games

44 Launch scalable Linux containers on Springs.io
Run your website on a pay-as-you-go server

48 Master version control with Git
Learn to work with a project repository

52 Discover the power of PulseAudio
Fix Skype, remove noise, duck audio and more

Handle effective and real IDs for processes

57 Practical Raspberry Pi
Discover the Raspberry Pi Zero, learn new tricks for every Pi, embed Python into C code, control your lights with the Pi-Mote IR and create a Tempest-inspired space shooter in FUZE BASIC

60 50 Ways To Master Raspberry Pi
A wealth of useful guides and info for any Raspberry Pi user

90 CamJam EduKit 3
The Cambridge Raspberry Jam’s latest Ras Pi components kit takes on robotics

91 UPS PIco
This uninterruptible power supply promises to protect your Pi from outages

92 Free software
Richard Smedley recommends some excellent FOSS packages for you to try

96 Free downloads
Find out what we’ve uploaded to our digital content hub FileSilo for you this month
Join us online for more Linux news, opinion and reviews: www.linuxuser.co.uk
Easy to use – ready to go The next generation of virtual server offers unbeatable performance in terms of CPUs, RAM and SSD storage! Implement your cloud projects with the perfect combination of flexibility and powerful features.
Load balancing SSD storage Billing by the minute Intel® Xeon® Processor E5-2660 v2 and E5-2683 v3
1 month free! Then from £4.99 per month*
1 TRIAL: Try for 30 days
1 CLICK: Upgrade or downgrade
1 CALL: Speak to an expert
0333 336 5509 * 1&1 Cloud Server 1 month free trial, then from £4.99 per month. No minimum contract period. Prices exclude 20% VAT. Visit 1and1.co.uk for full offer details, terms and conditions. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. 1&1 Internet Limited, Discovery House, 154 Southgate Street, Gloucester, GL1 2EX.
1and1.co.uk
MINI-PC
Raspberry Pi Zero is world’s first $5 PC
See page 58 for more!
Main Raspberry Pi Zero distributors sold out almost immediately

The Raspberry Pi Foundation has released a new model in its family of Raspberry Pi single-board computers: the Raspberry Pi Zero. Retailing for just $5, the Raspberry Pi Zero sacrifices a few components for greater cost effectiveness, while performing to the same standard as the original Raspberry Pi. The BCM2835 processor has been used but has been overclocked to 900MHz. The same 40-pin GPIO header found in the Model B+ and newer has been used, although it is unpopulated and users will need to solder the pins themselves. The Raspberry Pi Zero outputs HDMI via a mini-HDMI to HDMI adaptor, and has two micro-USB ports, one for power and one for a peripheral. “When I was a kid, the high cost of computers was a real barrier to me learning about computers,” said Raspberry Pi founder Eben Upton of the launch. “What we’ve been trying to do with Raspberry Pi is make sure that cost is never going to be a barrier to anyone who’s interested in getting involved in coding.”
NEED TO KNOW
Ras Pi Zero essential info
1 Tiny
This is the smallest Raspberry Pi ever, at just 65 x 30 x 5 mm (about 2.6 x 1.2 x 0.2 inches) – it’s half the size of your debit card. That means it’s perfect for embedded projects, whether that’s attaching them to mini quadcopters or making wearable devices.

2 Powerful
Despite the fact it uses the older BCM2835, the Raspberry Pi Zero is still about 40% faster than the original Raspberry Pi Model B, according to Eben Upton. It runs the standard Raspbian distro too, which means it can be used as a regular desktop PC.

3 Cheap
The $5 price point means that this device is hugely affordable – making bulk orders for school IT departments even more feasible, and even kids can get it on a pocket money budget.

4 HAT-compatible
There are versions of popular HAT add-ons made to fit the Raspberry Pi Zero, but the board itself is compatible with the original HAT specification, so your existing HAT add-ons will work just fine.

5 Adaptable
As well as the GPIO pins, you can also solder on an RCA cable, so you can use an old TV as a display. You can expand on the single spare micro-USB port by using that for a powered USB hub, so you can then plug in your keyboard, mouse and Wi-Fi dongle.
HARDWARE
Below The maze-like circuit has to remain intact to function
TOP FIVE
New features in Linux 4.4
1 Pi mode-setting driver
The first fruit of Eric Anholt’s efforts to create an open source graphics driver for the popular Raspberry Pi, the driver bundled with Linux 4.4 offers kernel mode setting alone. 3D acceleration is expected to follow in a future kernel release – but don’t expect to see it as soon as 4.5.
Design Shift funds ‘hardened’ Linux PC
Boasts a maze-like circuit to protect it from attack

San Francisco-based Design Shift has taken to crowd-funding service Kickstarter to take orders for what it claims is the world’s first open source, physically secure computer. A compact, disc-shaped device based on Intel’s latest Skylake family of x86-64 processors, ORWL’s primary selling point is a focus on security. A wireless dongle is provided with every system, which contains a unique key paired with one on the system itself. When the key is out of range, the system locks down: the processor is put into sleep mode and all USB and HDMI ports are disabled until the key is returned. To protect against physical attacks on a locked machine, each ORWL is built using what the company describes as an “active mesh shell casing”, the surfaces of which are covered in a maze-like circuit pattern. This circuit must remain unbroken for the system to operate; if broken by someone attempting access, an embedded microcontroller erases the local key – ensuring the data on the system can never be decrypted. Design Shift, which successfully crowd-funded a smartphone dubbed Robin in partnership with Nextbit but has yet to ship a physical reward to backers, has promised to release its designs under an open source licence if its campaign is successful, though it has not provided details of the exact licence it plans to use.

2 Virtual 3D acceleration
Linux 4.4 brings the VirtIO VirGL DRM code developed by David Airlie into mainline for the first time, giving the QEMU/KVM stack the ability to offer 3D acceleration to guest virtual machines through the host machine’s hardware.
3 More AMD GPU support
For users of AMD graphics hardware, Linux 4.4 enables the new AMDGPU scheduler by default, adds new support for Carrizo, Tonga and Fiji-family AMD APUs, and adds initial support for upcoming Stoney parts. Power management, though, won’t arrive until Linux 4.5.
Above With a secure key and strengthened casing, the ORWL is a security fan’s dream
Pricing for the ORWL starts from $399 for an ‘early bird’ developer’s kit, which lacks the clever outer shell, rising to $1,399 for a top-end model with active mesh protection and glass outer shell. These, the company has claimed, represent a near 50 per cent discount on the planned retail price – suggesting that the devices, if funded, will take a place at the very top of the compact computer price table. Each machine is powered by an Intel Core M low-power processor, and includes one or two encryption key dongles, an external power supply, up to 8GB of RAM and a 512GB solid-state drive. While its design has generated considerable interest, the company has struggled to raise funds for production: at the time of writing, the campaign had raised just $24,000 of its $175,000 goal.
WorldMags.net
Above Linux 4.4 adds new support for several GPU families, like these Carrizo chips (pictured)
4 Network enhancements
Linux’s networking subsystem has enjoyed considerable attention in the development of Linux 4.4, with new features including a brand-new Realtek USB Wi-Fi dongle driver, non-privileged eBPF execution with persistent maps, simultaneous IPv4 and IPv6 on VXLAN devices, and finally a lockless TCP listener.
5 Skylake improvements
Intel’s latest microarchitecture, Skylake, offers a number of improvements over previous incarnations. Sadly, some of these have proven glitchy under Linux. Anyone experiencing graphics or sound issues on Skylake systems should upgrade to Linux 4.4 for the best support.
OpenSource
Your source of Linux news & views
OPINION FREE SOFTWARE
Against the grain
“We rank last in the G8 for government R&D spending,” says Paul Dreschler, president of the CBI

We owe the Internet to research undertaken at various universities and funded by the US government through DARPA in the Sixties. The World Wide Web grew out of the work of Tim Berners-Lee at CERN. Free software owes its origins to Project MAC and the AI Lab at MIT, which developed LISP, the first time-sharing operating system, the LISP machine, the first computer games, the first music software and the first display hacks. What all these projects have in common is that they were publicly funded, had few limits on time and scope, and have had profound implications for the way we work, think and play.

The Internet and the World Wide Web were truly innovative breakthroughs that have transformed our lives, and were only made possible because of long-term public investment. Both rely on common protocols that allow access to all. If either had been developed in isolation we would be looking at a very different beast. Public funding allowed the development of protocols that gave access to all. Without open access, the Internet would not be the universal tool it is today.

Mariana Mazzucato, economist and Professor of Economics in Innovation at the University of Sussex, contends that significant innovation more often than not originates from publicly-funded projects. There are some good reasons for this. Original ideas often come from mavericks, while companies need certainty and relatively rapid returns. In the modern business environment, where the financial markets demand nothing less than immediate results, companies have neither the resources nor the time to invest in large-scale research and development that may or may not produce a financial return. Such a proposition goes against the grain of current wisdom, but Mazzucato points to Apple as a prime example of how publicly-funded research has contributed to the growth of an industry.
Richard Hillesley
writes about art, music, digital rights, Linux and free software for a variety of publications

Publicly funded research and development projects have had a profound role in shaping markets and promoting new technologies

Apple is often regarded as an innovative company, but it invests very little in research and development, and its major innovations have been in the areas of design, packaging and marketing. The key technologies that make the smartphone smart, she points out, such as GPS, the Siri voice-recognition service and the multi-touch screen, were not developed by Apple or Google, but were the result of long-term, state-funded research. Steve Jobs may or may not have been a ‘genius’, as his supporters claim, but he did have a talent for spotting gaps in the market and knowing the importance of design in selling a product.

Mazzucato asserts that governments have “actively shaped and created markets. This is the case in IT, biotech, nanotech and in today’s emerging green economy. Public sector funds have not only supported basic research, but also applied research and even early-stage, high-risk company finance. This is important because most venture capital funds are too short-termist and exit-driven to deal with the highly uncertain and lengthy innovation process.”

Publicly-funded research and development projects have had a profound role in shaping markets and promoting new technologies, and this has profound implications not just for how industry is shaped in this country, but for the future development of free and open source software. The emergence of GNU/Linux as a disruptive technology, and the spread of open source methodologies, has transformed the way software is understood, but governments have been slow to realise the value of collaborative development as a tool for promoting innovation, despite squandering billions on software projects leased out to private companies that are over-audited, never come in under budget, and go on for years past their allotted time span.

And yet there is innovation in our universities, despite the pressure on their resources and the short-termist drive to turn them into businesses. A prime example is the discovery of the means to extract graphene from graphite at the University of Manchester, for which two physicists, Andre Geim and Kostya Novoselov, were awarded the Nobel prize. Directed research and development is vital to the health of a modern economy.
One could imagine a publicly-funded open source research lab dedicated to bringing together research and ideas across multiple disciplines being an engine for revitalising industry, in much the same way as the research labs at Stanford and MIT drove the insurgent computer industry during the Sixties and Seventies.
CONTAINERS
FOUNDATION
CoreOS launches Clair container security monitor
LF launches Open API Initiative
Quay Security Scanning service now available to all

CoreOS, the company behind the eponymous cluster-centric Linux distribution, has released a vulnerability analysis tool dubbed Clair under the open source Apache License 2.0. Written as the foundation for the Quay Security Scanning system, Clair is designed to scan each container layer on a server or cluster of servers and provide threat notification based on the Common Vulnerabilities and Exposures Database (CVE) and other databases from Canonical, Debian and Red Hat. The company claims that the sharing of layers between multiple containers means such scanning is vital for creating a comprehensive inventory of packages for security analysis. “Using Clair, you can easily build services that provide continuous monitoring for container
vulnerabilities,” claimed CoreOS’ Quentin Machu of the release. “CoreOS believes tools that improve the security of the world’s infrastructure should be available for all users and vendors, so we made the project open source. “Vulnerabilities will always exist in the world of software. Good security practice means being prepared for the mishaps – to identify insecure packages and be prepared to update them quickly. Clair is designed to help you identify insecure packages that may exist in your containers.” Clair’s detection works by extracting package lists from each layer, storing the difference between one layer’s list and its parent’s version. This, CoreOS claims, makes it storage efficient and increases performance over other methods.
SECURITY
‘Linux Encoder’ ransomware has been defeated
Discovery leads to easily-reversed AES encryption

One of the first large-scale attempts to spread ‘ransomware’ on Linux servers has come to a crashing halt, thanks to a fatal cryptographic error made by its author. First discovered by Russian security firm Dr. Web, Linux Encoder began spreading by exploiting a vulnerability in the Magento content management system running on Linux hosts. When infected, a server would encrypt the home, root and MySQL directories with a unique key before showing a message asking the user to pay a ransom in Bitcoins to receive the decryption key. Radu Caragea, a cryptographic specialist working at security firm Bitdefender, was working on an analysis of the malware when he stumbled across a serious flaw: the keys used for the AES encryption process were generated using the libc rand() function seeded with the current system time – a seed which was immediately recoverable by looking at the timestamp of the encrypted file.
Using this information, Caragea was able to write a small utility to decrypt files without needing to pay a ransom. Bitdefender has released the utility for free and has provided a five-step process for cleaning affected systems. The decoding utility and instructions are available at: http://is.gd/linencde. “If your machine has been compromised, consider this a close shave. Most crypto-ransomware operators pay great attention to the way keys are generated,” warned Bitdefender of the Linux Encoder malware. “Mistakes such as the one described are extremely fortunate, but also extremely rare.” Ransomware is an increasingly big business, with the US Federal Bureau of Investigation claiming in June it had received 992 complaints about the CryptoWall malware on which Linux Encoder is based over the past year, with losses totalling $18 million.
The Linux Foundation has announced a partnership with industry giants including Google, IBM, Microsoft, PayPal and Capital One on a new Collaborative Project: the Open API Initiative. The aim of the Initiative, the Foundation has announced, is to extend the specification and format of the Swagger application programming interface (API) framework, which it hopes will then result in creating an open technical community within which members can contribute to a vendor-neutral and open specification for providing metadata for RESTful APIs – a fitting reward for Swagger’s staggering success since its launch in 2010. “Swagger is considered one of the most popular frameworks for building APIs. When an open source project reaches this level of maturity, it just can’t be managed by one company, organisation or developer,” explained Jim Zemlin, executive director at the Linux Foundation, of the decision to launch the Open API Initiative as the latest of the Foundation’s Collaborative Projects. “The Open API Initiative will extend this technology to advance connected application development through open standards.” “Across industries, Swagger has gained incredible adoption for its expressiveness, comprehensive toolchain and vibrant community alike,” claimed project founder Tony Tam, who is also vice president at current project owner SmartBear Software, which acquired Swagger from the original owner Reverb Technologies. “Working with both API vendors and consumers, SmartBear sees the value in open governance around the specification which will allow for even more rapid growth and adoption across the API industry, and is honoured to donate the Swagger Specification into the Open API Initiative under The Linux Foundation.” The Open API Initiative is open to members now and runs under an open governance model. If you are interested in learning more, you can get extra information on the project over at openapis.org.
INNOVATION
DISTRO FEED
Top 10 (Average hits per day, 27 October – 23 November)
1. Linux Mint
2. openSUSE
3. Debian
4. Ubuntu
5. Fedora
6. Mageia
7. Manjaro
8. CentOS
9. Arch Linux
10. Kali
This month
• Stable releases (17)
• In development (3)
The Linux Mint project enjoyed a shot in the arm this month with the release of the Mint 17.3 Beta, ensuring it stays ahead of its rivals.
Highlights
Linux Mint 17.3 Beta
The latest Linux Mint beta release, built on Canonical’s Ubuntu 14.04 LTS base, brings with it a new release of the MATE desktop environment that enables users to quickly and easily switch window managers on the fly for the first time ever.
Chakra 2015.11
The newest Chakra GNU/Linux release, codenamed Fermi, features the Calamares system installer as a replacement for the older Tribe and includes the SDDM display manager as standard.
Puppy Linux 6.3
An always-popular distribution for older hardware, the newest Puppy Linux release features a Slackware base – contrasted with the 6.0 branch, which used Ubuntu Trusty Tahr binaries.
Latest distros available: filesilo.co.uk
ISS chooses Linux over Windows
Once a bit-player, Linux is set to become the majority platform

The US National Aeronautics and Space Administration (NASA) is shifting hardware on the International Space Station (ISS) from Windows to Linux, having decided to migrate key devices for reliability reasons. “We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable – one that would give us in-house control,” explained the United Space Alliance’s Keith Chuvala of his company’s decision to aid NASA in shifting from proprietary platforms to Linux. “So if we needed to patch, adjust or adapt, we could.” Speaking to trade publication Computer Dealer News, NASA’s Stephen Hunter confirmed that since the initial introduction of Linux onto
ISS systems, the number of Linux deployments is growing. At present the ISS uses around 100 workstations split 70-30 between Windows and Linux, but according to Hunter, Linux is “increasing in usage” – with the possibility that it may soon outnumber proprietary platforms on the space station. In addition to a number of workstations, the ISS is also home to Robonaut, the first humanoid robot in space. Robonaut has been designed to assist the astronauts with tasks that are too dangerous or mundane for humans to carry out in the station’s microgravity environment, and runs an embedded Linux distribution. The next gadget to land on the ISS is to be an HP ZBook 15 mobile workstation.
INDUSTRY
Wiegley new Emacs maintainer
Richard Stallman takes another break from project

Emacs founder and maintainer Richard Stallman has handed the reins to long-time contributor John Wiegley, who takes over leadership effective immediately. “Richard and I met at MIT yesterday,” Wiegley explained in a brief announcement to the Emacs mailing list, “where I officially accepted the role as maintainer of Emacs.” Emacs, named as a contraction of Editor MACroS, was originally created in 1976 by Stallman and Guy Steele. It was an extension for the TECO editor. Stallman then proceeded to
create the GNU Emacs project in 1984 as a free software implementation. Wiegley has already indicated he plans to change how the project operates. “I think it has come time to establish a Code of Conduct, along with a transition to a passively moderated list: one where all posts are allowed by default, but those who disregard the CoC will lose their right to post until the end of a waiting period,” he explained. “I would much rather have a semi-professional atmosphere focused on improving Emacs, than an easy-going social atmosphere.”
OPINION CODING
Profiling programs
We find out exactly what profiling is and, more importantly, how it can help you optimise your programs

A profiler is a program that shows how many times each part of a program is executed, as well as the way each function of a program is called. Profilers are helpful for smaller programs, but their true power is in large programs, where no other tool can be as effective. The default Linux profiler is gprof, which is the one used here. The provided profileMe.c contains deliberately wrong code that never gets executed; additionally, its fibo() function calculates numbers of the Fibonacci sequence using an algorithm that is far too slow. Profiling profileMe.c using gprof involves the following steps:
$ gcc -Wall -o profileMe profileMe.c -pg
$ ./profileMe
$ file gmon.out
gmon.out: GNU prof performance data - version 1

The -pg option generates extra code that writes profile information suitable for gprof. The output for the profiler will be automatically written to gmon.out in the working directory of the program at the time of its exit. As each program you want to profile produces a gmon.out file, make sure that an existing file with this name is not overwritten. After gmon.out is generated, you can see the produced information by running gprof as follows:
$ gprof ./profileMe gmon.out > profileMe.txt

The aforementioned command produces an analysis file, which contains the profiling information. The table below shows where the program spends most of its time. As you can see, fibo() takes 91.98% of the total time of the program! The purpose of frame_dummy, which is automatically created, is to set up for unwinding stack frames for exception handling.

Mihalis Tsoukalos
is a UNIX administrator, a programmer, a DBA and a mathematician. He has used Linux since 1993

If you interpret its output correctly, you can find out where your program spends most of its time

Fig. 01
  %   cumulative     self              self     total
 time    seconds   seconds    calls  ms/call  ms/call  name
91.98      15.64     15.64       44   355.37   355.37  fibo
 6.82      16.79      1.16                             frame_dummy
 2.08      17.15      0.35                             main

The call graph of profileMe.c can be seen inside profileMe.txt – it is the second table, halfway down the document. As you can see, the fibo() function was called 5942430054+44 times in total! Also, you can tell that most of the time, fibo() called itself; however, fibo() was also called directly by main(). As you are calculating 44 Fibonacci numbers, it makes perfect sense that fibo() was called 44 times from main(). If the profiler cannot determine the parents of a function, the word <spontaneous> is printed in the “name” field and all the other fields are left blank.

Unfortunately, gprof doesn’t show code that isn’t executed. However, modern programming environments can display code that cannot be executed as a result of erroneous code. Profiling can reveal unique information. If you interpret its output correctly, you can find out where your program spends most of its time and therefore which code parts you should optimise first. Moreover, the call graph can help you understand the flow of your program and the relationships between its functions.

The time command can also be helpful. In this case, it can show that finding the first 24 Fibonacci numbers is far faster than finding the first 44. In our tests, finding 24 took a fraction of a second, whereas finding 44 took well over a minute – over 6,000 times more! After you finish profiling your code, do not forget to recompile your program without the profiling option so that the compiler removes the extra code used for profiling, which slows down program execution.

Other programming languages have their own profilers that, more or less, work the same way. Additionally, it is possible to write your own profiler and measure precisely what it is you want, which can be a very educational task that will make you a much better programmer. Nevertheless, if your program is big, this can be a challenging task.
It would also be a good exercise for you to write a line-count profiler. Lines of code with zero counts show code that is untested! You can find all the source code of this article at github.com/mactsouk/ and over at www.filesilo.co.uk/ linuxuser-160.
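That exercise can be sketched in a few lines of Python, whose sys.settrace hook fires a "line" event for every executed statement. The helper names and the toy fibo() below are our own, not the article's C code:

```python
# A minimal line-count profiler sketch using Python's sys.settrace hook.
# Lines that never appear in line_counts are lines that never ran.
import sys
from collections import Counter

line_counts = Counter()

def tracer(frame, event, arg):
    # Count every executed source line, keyed by (filename, line number)
    if event == "line":
        line_counts[(frame.f_code.co_filename, frame.f_lineno)] += 1
    return tracer

def fibo(n):
    if n < 2:
        return n
    return fibo(n - 1) + fibo(n - 2)

sys.settrace(tracer)
result = fibo(10)
sys.settrace(None)

print(result)  # 55
print(sum(line_counts.values()) > 0)
```

Lines inside fibo() are counted hundreds of times here, while any line with a zero count would be untested code – exactly the signal the exercise is after.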
OpenSource
Your source of Linux news & views
INTERVIEW ELEMENT14
Customise your Pi

Richard Curtin
is Premier Farnell's global director of strategic alliance, responsible for developing the company's global technology customer proposition. His primary focus is the embedded semiconductor, software, hardware and design service segments.
Claire Doyle
is the global head of Raspberry Pi at element14. She has been in this role for three years; prior to that she held several strategic supplier roles within element14, having joined in May 2003.
Put your own stamp on the Raspberry Pi by collaborating with element14's design and engineering teams. Claire Doyle and Richard Curtin explain how it works

Will the customisation service apply to all Raspberry Pi models or just the flagship 2B?
Claire Doyle: It applies to all Raspberry Pi boards, and we're open-minded about the accessories – we've already seen a couple of enquiries around the accessories.

Is the service targeted at businesses, or would you also work with educational institutions, for example?
CD: Customisation right now is open to any type of customer. Some of the enquiries that we've seen come through have been predominantly OEM-type customers. We've stated that the minimum order quantity is between 3,000 and 5,000, around that range. It's not set in stone, but we believe that's a good pre-production commercial offering that makes sense. The type of customer…? We're pretty open-minded, really. It depends on what the customer's requirements are, when they want it and what they want to do with it.
Richard Curtin: It's going to be around scalability and prioritisation to start with, because we only have a finite resource to deploy on this programme. So what we'll be doing is focusing on OEMs and ODMs, where there is scalable potential in the volumes, to get this off the ground. Once we've gone through that and ramped up into the really core, industrial customers, I definitely think we're going to be expanding into different strategies. I can absolutely see it being deployed into education, leveraging what we're doing with the micro:bit, for example. How could we perform customisation for education along those lines? In the announcement we made, we did talk about the fact that at some point in the coming months, we do want to make reference designs available for some of the popular customisation requests we're seeing across the board. Those reference designs perhaps will be made into hardware, in relatively small volumes, but it will be a chance for people to get off-the-shelf customisation so they can then perform their maker projects, etc.

Can you tell us what sort of customisation is on offer?
RC: So, basically, the customisation offered is that we can change the configuration of anything on the board, outside of the Broadcom chip and the memory. When we start to make changes there, it has a complex
How it works
A customer would first request a Raspberry Pi customisation, and then element14's technical engineers and application engineers would work with the customer on that design. Element14 would also work with Raspberry Pi Trading on the design. Should the customer want to go ahead with a pre-production run, element14 would select one of its contract manufacturers from around the world – depending on volume and specifications – who are familiar with the Pi products, and then have that custom board manufactured and finally sent to the customer.
Inset Since 2012, element14 has manufactured and distributed over 4 million Raspberry Pi boards
knock-on effect with the software and the pinouts. If customers did want to do something with that, I think it would be a different conversation with Broadcom and Raspberry Pi. So we ring-fence those components, but all the peripherals, the PCB, the layout, the size and form factor, they're all fair game as we look to try and work with customers in their end applications.

Do you have a set list of features that people can add in or take out, or are you open to more adventurous requests – for example, if someone wanted to embed an e-paper display?
RC: It is our aspiration that at some point in the coming months we do create some kind of menu for our customers. That's where the reference design potential comes into play that I mentioned earlier. With regards to how flexible we are, absolutely, we can definitely look at that. We have the technical engineering expertise to do that level of customisation – it's a very broad spectrum of areas that our engineering teams can cover. So there definitely is the flexibility, but as of today we don't have that menu available to our customers. It is coming – let us validate and approve the customisation requests on a case-by-case basis, which is what we're doing today, and then as we build up a record of what requests are being made, then I think we can start to do some very innovative and intuitive menus for customers, based on what we're seeing out there.
CD: One of the reasons why both Raspberry Pi Trading and element14 launched this service was due to customer demand. We had a lot of customers around the world asking us for this, all with a different set of requests and enquiries. We’ve seen some really interesting enquiries come through – what is it, two weeks into the launch? – several hundred. We’re just going to go through those, work with Raspberry Pi Trading and our design teams around the world to see what kinds of trends we’re seeing. And then we’ll be developing the Raspberry Pi and element14 customisation programme with Avid Technology and Embest. We’re really looking forward to seeing where it goes and what trends we’ll see. One thing I’m not sure came across clearly in the press release, that I just want to emphasise now, is that the customisation is design – we have the exclusive right to the design customisation – but also the manufacture. So any customer who wants our design services will be leveraging our knowledge and expertise of being the world leader in the Pi space of manufacturing as well. We can offer the end-to-end solution to the customer. Will the customised designs be owned by element14 or are they being held jointly with Raspberry Pi Trading? RC: The way that this works is that the customers will come in with their customisation request, then, as part of that process, we will sign extensive NDAs with that
Above Everything but the Broadcom system-on-chip can be considered during the design process
Embest and Avid Embest (based in China) and Avid Technologies (based in the US) provide the design services and manufacturing for every phase of the customisation design process: hardware design, simulation and validation, software development and integration, advanced PCB design, customer validation, testing and certification. Both companies are part of the Premier Farnell group.
customer, so that the information they share with us is fully protected. Any changes that they make to the actual design of the Raspberry Pi – and everything that goes with that – the IP will remain with Raspberry Pi. What Raspberry Pi is offering is the potential for customers to change those existing designs to fit their end applications. Element14 then handles the transactions: obviously the NDAs with Raspberry Pi, the NDAs with the end customer, manufacturing, and providing that end solution to the customer for their application. So the customer's application, in most cases, sits with them as IP, but the Raspberry Pi as a factor within that does not – the IP definitely resides with Raspberry Pi.

Will customers be able to name their boards?
CD: What we see right now is that these customisation boards will be part of a finished product. They won't be branded Raspberry Pi, and the actual board itself will not carry the Pi logo. We're learning as we go; we've got a number of checkpoints and reviews as we develop this with Raspberry Pi Trading. Right now, most of these boards will be part of integrated finished products, so on the reference design piece, we may have a code, we may have a name for them, but we'll be developing that with Raspberry Pi Trading.
What kind of time-frame do you think there's going to be from the initial quotes to the product delivery?
RC: We've got some rough estimates. There are a couple of things to factor into this: first, there's absolutely a lead time on the Broadcom semiconductors, and that lead time is substantial. It goes beyond some of the more run-of-the-mill ARM architecture chips out there, such as those from Atmel that are typically available in 12 weeks. There's also the design cycle, manufacturing and then shipment back. From signing an NDA, getting the relevant MSA and contracts in place, closing down the design, getting that approved through the relevant channels – Broadcom and Raspberry Pi Trading – moving that into a volume/production run, and then off-the-line package tests, and back into the customer's hands? I think six months is a very realistic time-frame!
CD: I think it all depends on the customer's requirements. Also, I think the key benefit of element14, Embest and Avid all working together on this under the Premier Farnell umbrella is that we already make millions of Pis. So if we have to flex some of our supply chains to do more production runs, we're in the best place to do that, based on a really solid, robust supply chain, dealing with the production of millions of Raspberry Pis around the world. It's one of the reasons we've been given this global exclusive from Raspberry Pi Trading; it's based on our niche and competent engineering and manufacturing, but also because we've got such a robust supply chain. We are totally excited about this Pi customisation! It's a different angle to our current Pi business, which has been massively successful to date. It leverages our capabilities for design and manufacture, and strongly endorses the great work and the unique proposition that we have, so I think this is an area to watch. We're learning, and we're really looking forward to working with customers and listening to customers about what they want, and adapting what we need to do in order to facilitate that demand. We're really excited about this one.
OPINION
The kernel column Jon Masters summarises the latest happenings in the Linux kernel community, as the merge window closes for what will be kernel 4.4
Jon Masters
is a Linux-kernel hacker who has been working on Linux for some 19 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy efficient ARM-powered servers
Linus Torvalds closed the merge window (the period of time during which disruptive kernel changes are allowed) for 4.4, following the customary two weeks, with the release of Linux kernel 4.4-rc1 (Release Candidate 1). In his announcement, Linus noted that things were "possibly a bit more driver-heavy than usual with about 75% of the patch being drivers, and 10% being architecture updates". He subsequently announced 4.4-rc2, saying "Things are looking fairly normal in 4.4-land, with no huge surprises in rc2. There were a couple of late features: parisc hugepage support and some late slub bulk allocator patches… that strictly speaking should have been merge window things. But the bulk allocator isn't actually used in tree yet, and so the updates on that front are about upcoming use rather than something that can regress." As usual, Linus encouraged extensive testing of the new kernel, which is already available in some of the nightly development branches of the major distributions.

Linux 4.4 will, as always, contain new and exciting features. These include support for a new "devfreq cooling" device driver that lets systems – such as mobile phones – experiencing an overheating condition throttle back on performance until their operating temperature is back within certain bounds; new journaling support in the software RAID5 driver, allowing a specific disk device – such as a low-latency, high-throughput NVMe or flash disk – to be used as a journal in order to guard against the data corruption that a degraded array with bad parity can cause; and a new variant of the mlock() system call, mlock2(), which functions like its predecessor in guaranteeing that certain data – such as passwords entered by the user – is never written to the system swap file, eg during low-memory situations, but has the additional benefit that such memory need not be locked until it is actually occupied with data following a page fault (saving wasted memory).
Last month’s issue coincided with the end of this year’s Kernel Summit, which was held in Seoul, South Korea, alongside the 2015 Korea Linux Forum. At the Kernel Summit, a great many topics were discussed, all of which were nicely summarised by Linux Weekly News (www.lwn.net).
Security cost of Waiting Channels
Linux descends from many decades of Unix and Unix-like system heritage. While there may seemingly be little in common between a Linux desktop, server or embedded device of 2015 and the original AT&T Unix systems of the Seventies, appearances can be deceptive. Underneath, Linux retains support for various standards, protocols, conventions and nomenclatures of the Unix systems from that bygone era.

One example of such influence is visible every time you run the top or ps commands to see which processes (known as "tasks" within the kernel) are running on your system. Depending upon the options used, you may see a "WCHAN" entry. This stands for "Waiting Channel", and it has long also provided a security flaw hidden right under our noses. Implementations varied slightly, but in traditional Unix and Unix-like systems, WCHAN generally provided the absolute address of the kernel function or data structure upon which a particular thread (task) was blocked. In other words, a task sleeping after performing a blocking read() system call (to read some data from a file on disk, as an example) would traditionally expose the kernel memory address of the corresponding kernel read function. Userspace tools such as ps and top would take this memory address and translate it into a function name (such as "wait", "poll_schedule_timeout" or similar) using a file such as the System.map file generated for Linux during a kernel compilation. Modern Linux kernels actually don't need to rely upon userspace tools performing such an address translation, since they provide the KALLSYMS feature, enabling this to occur within the kernel itself.

But modern kernels also contain an even more important security feature: Kernel Address Space Layout Randomization (KASLR). KASLR was designed to ensure that the in-memory addresses of Linux kernel functions would differ from one system to the next (and even from one boot to the next).
This serves to disrupt the operation of “rootkits” (tools designed to circumvent system security by gaining unauthorised privileges) because attackers who find a flaw in the kernel that lets them inject the code of their choice cannot rely upon knowing exactly
Credit: Linux Foundation, Flickr (bit.ly/1Otu7sb)
which memory locations contain specific kernel functions that they might want to use. Thus, it is made more difficult to have a kernel exploit that affects all systems using vendor X’s Linux kernel. KASLR works well, until you defeat it by providing the randomly generated offset that is being used for all of the running kernel code on a given system. That information was trivially available (until Linux 4.4) by performing a simple piece of arithmetic against the WCHAN output for a given task, visible to all users in /proc/$PID/stat. Ingo Molnar notes the following example, saying “[WCHAN output] isn’t ideal, because for example it trivially leaks the KASLR offset to any local attacker”:
fomalhaut:~> printf "%016lx\n" $(cat /proc/$$/stat | cut -d' ' -f35)
ffffffff8123b380

Above An obligatory photo of the assembled dignitaries at the Kernel Summit in Seoul
His fix was to provide only the symbolic names of the functions, using the existing in-kernel address to name translation, in the /proc/$PID/wchan file, and to change the output of the /proc/$PID/stat file to contain a 1 or 0 depending upon whether a task is actually blocked or not, since many tools rely upon this in determining whether to then read the “wchan” file.
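The arithmetic above relies on field 35 of /proc/$PID/stat, which sits after the parenthesised command name – a name that may itself contain spaces, which is why naive cut-based parsing can go wrong. Purely as an illustration (our own helper and a made-up sample line, not kernel code), the field can be extracted robustly like this:

```python
# Sketch: extract a numbered field from a /proc/<pid>/stat line.
# The comm field (2) is parenthesised and may contain spaces, so split
# on the last ')' before counting the remaining space-separated fields.
def stat_field(stat_line: str, n: int) -> str:
    """Return field n (1-based) of a stat line."""
    head, _, tail = stat_line.rpartition(")")
    pid = head.split(" ", 1)[0]
    comm = head.split("(", 1)[1]
    fields = [pid, comm] + tail.split()
    return fields[n - 1]

# A made-up sample line: on a pre-4.4 kernel field 35 held a raw kernel
# address; after Molnar's fix it is just 1 (blocked) or 0 (not blocked).
sample = "1234 (my prog) S " + " ".join("0" for _ in range(31)) + " 1 0 0"
print(stat_field(sample, 35))  # → 1
```

On a real 4.4+ system the symbolic name, when blocked, lives in /proc/$PID/wchan instead.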
Ongoing development Octavian Purdila has posted an interesting RFC (Request For Comments) kernel patch series entitled “Linux Kernel Library” that aims to make the Linux kernel buildable as a software library against which to link other software directly. This may seem strange, yet it can be useful in terms of software development to be able to use Linux’s own implementation of various functions to provide compatibility support in applications running within virtual machines or on other operating systems (and indeed, in Operating System bootloaders). Of course building the kernel itself as a library doesn’t change the fact that it is GPL code, but it does mean that many Free Software projects wanting to use a specific Linux filesystem driver (as an example) will have another option than copying and pasting a snapshot of code borrowed from the kernel. Rich Jones quickly picked up on the work
and posted support for linking his excellent “libguestfs” utility against the experimental LKL patch series. Jake Oshins from Microsoft posted a new PCI paravirtualised front-end driver for HyperV, known as “hv_pcifront”, that will provide access to real PCI(e) buses as well as VF (Virtual Functions) on existing PCI(e) devices that have been assigned into HyperV guest virtual machines in an SR-IOV (Single Root IO Virtualization) environment. The feedback was mostly positive – different from the first time Microsoft posted a driver implementing support for HyperV, suggesting that Microsoft has Linux kernel developers who do good work and participate in the upstream community. Finally this month, Noam Camus posted support for the EZchip NPS400 Network Processor. Ordinarily this might not be mentioned here, but for the fact that this single processor provides a total of 4096 “CPUs” (256 SMT cores where each of those has 16 threads of which only one can be active). Rather than implementing a conventional Out-of-Order Superscalar machine, this network processor follows more of a GPU design philosophy in exposing as many simple cores as possible, and blocking individual cores frequently. Noam noted that he believes 4096 CPUs on a chip to be a “new high record”.
WIN £5,400
of PostgreSQL training
We've teamed up with 2ndQuadrant to give away up to £5,400 worth of training to two lucky winners, who'll be able to master their understanding and use of PostgreSQL. The global experts in PostgreSQL support, training, development, migration and consultancy are offering two £2,700 training vouchers that can be used on their approved courses in London, which 100 per cent of clients have described as "excellent". All 2ndQuadrant training courses are taught by leading PostgreSQL experts with many years of industry experience in databases and code development. Courses last between one and five days, and those currently available include Linux for PostgreSQL DBAs, Replication and Recovery, and PostgreSQL Immersion – a five-day intensive course for those wanting to learn PostgreSQL fast. For more information, please visit 2ndquadrant.com/en/training/course-catalog. The UK Met Office approved PostgreSQL as its preferred RDBMS, following an evaluation of alternatives. The decision was influenced by
2ndQuadrant training. Data Services Portfolio Technical Lead James Tomkins commented: "With the training we had from 2ndQuadrant we could feel the weight of expertise that came with Gianni [Dr Gianni Ciolli, tutor] and it was obvious he really knew his subject inside out. It was an enormous confidence-building exercise and has been consistent with all of our interactions with 2ndQuadrant." 2ndQuadrant offers unrivalled access to one of the largest teams of PostgreSQL experts in the world, and continues to make significant contributions to the wider user community through development. Consequently, the company is able to offer 15-minute response times to customers in need of support.
Closing date for entries
29 February 2016
Where was Postgres, the forerunner to PostgreSQL, first developed? a. Massachusetts Institute of Technology b. University of California, Berkeley c. Harvard University
Please email your answer, along with your full name and contact details, to
[email protected] TERMS & CONDITIONS The two winners will each receive a 2ndQuadrant training voucher worth up to £2,700. A training voucher entitles the winner to book only one course, lasting up to five days and up to the value of £2,700, and cannot be used in conjunction with any other offer or discount. The training voucher will retain its value in terms of course days if prices increase before a course is booked. Winners must book a scheduled course by 1 July 2016. Courses are subject to availability and will take place in London. Course dates are subject to change. Competition entrants must be at least 18 years old. The closing date for entries is 29 February 2016. Terms and conditions are subject to change.
Feature
Linux is famous for being secure and fast, but it may slow down with time. We’ll learn how to measure a computer’s performance and then reveal ways to speed it up Most of the people who install Linux onto their PCs do so because they are told that Linux is fast and secure and that they can configure it any way they want. All of this is true, but there is another truth. As time passes, your Linux installation goes through several cycles of software installs and uninstalls, and in the process becomes inevitably slower. Now, we all know that in theory, anyone can tweak their Linux installation however they want, but doing that sort of stuff yourself, without expert knowledge, puts you at the risk of corrupting your operating system or even worse, losing all your data.
A safer approach is to first carefully measure your computer’s performance and see if it does actually run slower than other computers with similar hardware configurations. Once you establish that your PC is in fact running slow, you can take a systematic approach by first making simple changes like modifying start-up applications, and then moving on to other specialised optimisation tricks. By making incremental changes, you can help to avoid ruining your operating system. In this tutorial, we will follow this step-by-step path and help to make your Linux-based PC work faster and smoother.
DIAGNOSTICS AND PERFORMANCE
We uncover some of the command line options and third-party tools that can help measure performance

Modern computers use resource-sharing mechanisms to run multiple processes. Though the processes run one at a time, they are switched very fast, giving you the illusion that all the processes are running in parallel. So the first step to understanding why your PC is running slow is to check all the running processes and see whether a process is being unjust in its resource usage, or whether it is actually required to run at all. Generally speaking, your CPU usage should not be more than 70% (0.70). If it is more than that, you have less headroom for new processes that may come in. If your CPU load hovers around 90-100% (0.90-1.00), it is high time you looked into cutting out whatever is not required. Similarly,
you should try to make sure the swap memory assigned is twice the RAM you have on the system and that you have some amount of free memory available while running on average load. There are several in-built command line tools like top, free and uptime that show information about processes, CPU and memory usage. The top command displays all currently running processes, complete with the CPU and memory usage details. The free command is useful to check out the memory status of your system, while uptime displays CPU load averages on your system. You can also install third-party software like nmon and collectl to check out the resource usage details on your system.
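The rules of thumb above can be sketched as a tiny Python check – the 0.70 and 0.90 thresholds are the article's, the function name is ours, and os.getloadavg() reports the same numbers uptime prints (Unix only):

```python
# Sketch: compare the 1-minute load average against the CPU count,
# using the article's rough 70%/90% thresholds (names are illustrative).
import os

def load_status(load1: float, ncpus: int) -> str:
    """Classify per-CPU load: ok (<0.70), busy (<0.90), overloaded."""
    per_cpu = load1 / ncpus
    if per_cpu < 0.70:
        return "ok"
    if per_cpu < 0.90:
        return "busy"
    return "overloaded"

if __name__ == "__main__":
    try:
        load1, _, _ = os.getloadavg()  # same figures uptime displays
        print(load_status(load1, os.cpu_count() or 1))
    except (OSError, AttributeError):
        pass  # os.getloadavg() is only available on Unix-like systems
```

Dividing by the CPU count matters: a load of 3.2 is alarming on a single core but comfortable on eight.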
By default the list output from top or htop is sorted based on CPU usage. To sort it on other column headers, just type the first character of the column header.
Above Linux provides several options for you to monitor resource usage, including the tool top
HTOP COMMAND OUTPUT EXPLAINED

01 RESOURCE USAGE STATUS BAR
This shows the usage stats for major system resources. The progress bar is colour coded – red for kernel processes, green for user processes.

02 SUMMARY AREA
This shows the total number of processes and threads that are running. It also shows the average load on the system in addition to the total uptime.

03 OUTPUT COLUMNS
These represent: process ID, process owner, priority, nice number, virtual memory, physical memory, shared memory, process status, CPU usage, memory usage, total time, and the command that started the process.

04 OUTPUT ROWS
htop enables scrolling through the response. A selected row is highlighted; this also enables extra features. For example, to kill a process, scroll to it and press K.

05 OUTPUT BODY
Each of the processes running on the system is listed as a separate row with all the relevant details. You can enable the tree view by pressing T.

06 OUTPUT FOOTER
This lists out some of the menu options, along with the applicable hot key. In some cases, pressing the hot key will open a sub-menu.
DEBIAN Use these Debian optimisation tips to get the best from your system Debian was first announced in 1993, when it was a one-man project helmed by Ian Murdock who was then a Computer Science student at Purdue University. The Debian project was envisioned as a fast, robust, open-source distribution of Linux with an emphasis on community-first development, and it continues to be just that. However, one size doesn’t fit all, and there are still things you can cut out from the default installation to make your Debian system faster. Linux systems in general have a special mount option for file systems called noatime. If this option is set for a file system in /etc/fstab, then reading accesses will no longer cause the atime information, ie the last access time information associated with a file, to be updated (in reverse this means that if noatime is not set, each read access will also result in a write operation). Therefore, using noatime can lead to significant performance gains. To set this option, open the file…
$ vi /etc/fstab

…and add noatime to the options of the file system, like this:

proc      /proc     proc    defaults          0 0
none      /dev/pts  devpts  gid=5,mode=620    0 0
/dev/md0  /boot     ext3    defaults          0 0
/dev/md1  none      swap    sw                0 0
/dev/md2  /         ext3    defaults,noatime  0 0

You don't have to reboot the system for the changes to take effect – the following command will do:
$ mount -o remount /
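If you prefer to script the edit rather than use vi, the transformation can be sketched as a toy Python helper (illustrative names, not a robust fstab parser – back the file up before writing any changes):

```python
# Sketch: add the noatime option to one mountpoint's entry in fstab text.
# A toy transformation for illustration, not a full fstab parser.
def add_noatime(fstab_text: str, mountpoint: str) -> str:
    out = []
    for line in fstab_text.splitlines():
        fields = line.split()
        # fstab fields: device, mountpoint, fstype, options, dump, pass
        if len(fields) >= 4 and fields[1] == mountpoint \
                and "noatime" not in fields[3].split(","):
            fields[3] += ",noatime"
            line = "  ".join(fields)
        out.append(line)
    return "\n".join(out)

sample = "/dev/md2  /  ext3  defaults  0  2"
print(add_noatime(sample, "/"))  # → /dev/md2  /  ext3  defaults,noatime  0  2
```

The option check keeps the helper idempotent: running it twice will not append noatime a second time.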
Another approach to speed up the boot process is using bootchart; it is available as a Debian package. Install it and boot with init=/sbin/bootchartd added to the kernel command line. (In Grub, select the kernel using the cursor keys, then hit E and select the line with the kernel command line, again press E, edit the line, press Return, and then press B.) Then run the bootchart utility, which reads the log written during boot and creates an SVG graph. You can view the resulting file using most web browsers. This will show which processes took the most time, and you can also see how much time was spent waiting for I/O and how much time was CPU-limited. You can then understand the steps during the boot process and can remove any culprits from the start-up process.
Above The fstab file holds information about where partitions and storage devices are mounted. Change it to enable noatime
BEST DEBIAN OPTIMISATIONS

DEFINING EXTRA RUNLEVELS
Debian uses runlevel 2 by default and doesn't define any special function for levels 3-5. To speed up the start-up of services that you only run on rare occasions, you can set them to start in a different runlevel to the default. For instance, you might keep databases you use occasionally on runlevel 3 and a slow MUD server you almost never use on runlevel 4.

USING KEXEC FOR WARM REBOOTS
If your system is warm-rebooting, rather than powering on after a long shutdown period, you can skip the hardware re-initialisation. Rather than go through the BIOS and bootloader, you can go to a minimal runlevel and load the new kernel image into memory. This requires kexec-tools and a kernel configured with CONFIG_KEXEC=y (standard for Debian). Set kexec as the default restart-handler with dpkg-reconfigure kexec-tools.

USING READAHEAD TO LOAD FILES FROM DISK
The readahead package runs at boot and populates the kernel disk cache with the files that are going to be needed during boot. To activate it, install the package and then reboot once. The first boot after this, ie the profiling boot, is very slow and will tune the list of files loaded to match the list of files used during the profile run.

REDUCING SYSTEM LOGGING ACTIVITY
In default distro installs, system logging is often configured fully, suitable for a server or multi-user system. However, if you are using it as a single-user system, the constant writing of the many system log files will result in reduced system performance, and reducing logging activity will speed things up. In Debian, the default system logger is rsyslog and its configuration file is /etc/rsyslog.conf. You can disable the unnecessary default logs by commenting out the corresponding lines with a #.
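Commenting rules out is easily done in an editor; purely as an illustration (our own toy helper, not a Debian tool, and it only matches a rule's first facility), the same edit can be scripted:

```python
# Sketch: comment out rsyslog rules for a given facility (eg "lpr").
# Rule lines take the form "facility.priority  action"; this toy helper
# only inspects the first facility in a comma-separated selector list.
def silence_facility(conf_text: str, facility: str) -> str:
    out = []
    for line in conf_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#") \
                and stripped.split(".", 1)[0].split(",")[0] == facility:
            line = "#" + line
        out.append(line)
    return "\n".join(out)

conf = "lpr.info -/var/log/lpr.log\nauth.* /var/log/auth.log"
print(silence_facility(conf, "lpr"))
```

Only the lpr rule gains a leading #; the auth rule is left untouched.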
UBUNTU
Make Ubuntu work faster with these optimisation tips and tricks

Ubuntu is pretty snappy right off the bat, but there are still a few aspects of it that the average user doesn't need. If you are running Linux on a relatively old machine, it is important to strip it down to the bare minimum so that your system runs much smoother. The default Ubuntu installation comes with Unity as its desktop environment, which is quite resource-hungry. So, the first step is to select a lightweight desktop environment like LXDE or Xfce. LXDE contains the basic features for a stripped-down yet approachable desktop environment. Similarly, Xfce is a lightweight desktop environment for various *NIX-based systems; it is also the default desktop environment for Xubuntu. To get started with one of these desktop environments, you just need to install it on your system using the apt-get command. For example:

$ sudo apt-get install xubuntu-desktop

Once installed, you can get started by logging out and then logging back in again and choosing the environment to use.

Background processes are another common reason for computers to respond slower. Processes like indexing can go on for days in the background and slow down your computer while they run. In Ubuntu, the indexing application apt-xapian-index is used. It speeds up certain search operations, but it can slow down older and weaker computers a lot. You can freely remove this package, because it's not essential, and you'll likely not even notice that it's gone. In lightweight Lubuntu it's not even there by default. To uninstall it, just type:

$ sudo apt-get purge apt-xapian-index

Above The Xubuntu desktop is fast and requires fewer resources for it to run smoothly
Software installs are inherently slow processes, because the software needs to be downloaded first. Apt-fast is a shell script wrapper for apt-get that improves updates and package download speed by downloading packages from multiple connections simultaneously. If you frequently use the terminal and apt-get to install and update the packages, you may want to give apt-fast a try. Install apt-fast via the official PPA using the following commands:
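The installation commands were omitted from this copy of the article; the usual route, assuming the ppa:apt-fast/stable archive, is:

```shell
sudo add-apt-repository ppa:apt-fast/stable
sudo apt-get update
sudo apt-get install apt-fast

# Then use apt-fast as a drop-in replacement for apt-get, eg:
sudo apt-fast install vlc
```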
Preload is a daemon that monitors the applications you use. It learns the libraries and binaries you use and loads them into memory ahead of time so the applications start faster. For example, if you always open Firefox after starting your computer, preload will automatically load its files into memory when your computer starts. When you log in and launch Firefox, it’ll start faster. You can install the preload package via apt-get.
Ubuntu uses APT to manage packages and their dependencies. APT maintains its cache at /var/cache/apt/archives and, with time, this can get cluttered with stale packages. The apt sources list in /etc/apt/sources.list also needs to be cleaned from time to time. To clean the package cache, run sudo apt-get autoclean. Depending on your installation habits, you may regain a significant amount of disk space.
CHANGE SWAPPINESS VALUE
Swappiness controls how aggressively the kernel swaps memory pages out of physical RAM to the swap area on disk. Acceptable values are 0-100: 0 means the kernel avoids swapping for as long as possible and 100 means it swaps aggressively. Ubuntu's default swappiness is 60. If you find your system is swapping processes out to disk when it shouldn't be, temporarily change the value with sudo sysctl vm.swappiness=10. To keep the value between boots, open /etc/sysctl.conf in an editor and add the line vm.swappiness=10 if it isn't already there.

REDUCE OVERHEATING
Overheating is a common problem these days – it can take ages to open a program when your CPU fan is running amok. TLP can reduce overheating and boost system performance in Ubuntu, and it runs in the background:
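The TLP installation commands are missing from this copy; the usual route, assuming the ppa:linrunner/tlp archive, is:

```shell
sudo add-apt-repository ppa:linrunner/tlp
sudo apt-get update
sudo apt-get install tlp tlp-rdw
sudo tlp start    # start TLP immediately; it runs automatically thereafter
```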
SYSTEM AND SERVICES
Try these system-wide fixes applicable to all distributions

We've just taken a look at performance optimisation tips specific to Debian- and Ubuntu-based systems. But there are many other Linux distributions, and while it may not be possible to cover tips specific to all of them, there are several tips that are not distro-specific and instead apply to all of them. Let us take a look at a few of them now. First among these handy tricks is GRUB customisation. GRUB takes care of the boot process, and you must have noticed the boot process being interrupted by the GRUB bootloader. By default, most desktop Linux distros will display the GRUB bootloader for anywhere from 10 to 30 seconds. But you can easily trim the duration of the bootloader, or even skip the countdown completely. Fire up the terminal and open the /etc/default/grub file in a text editor. For example:
If you are using collectl, here are a few more tricks. To display the time in each line along with the measurements, use $ collectl -oT
$ sudo vi /etc/default/grub

Look for the GRUB_TIMEOUT variable. Replace the value associated with this variable with something like 5 or 3. Set it to 0 to disable the countdown (the first entry will be selected by default). Save and close the file. Then run:
$ sudo update-grub

Our next tip is to trim out the start-up programs list. This is because your system may start unnecessary apps and services
during start-up, slowing down the booting process. Generally, Linux distros ship with a start-up applications tool to add or remove any apps that will be launched on start-up. To disable any services, just launch the start-up applications management app and disable any unnecessary programs that you find there. To see all services, fire up a terminal and type:
Above You can change the GRUB_ TIMEOUT to shorten or even disable the countdown before booting up
cd /etc/xdg/autostart
UNDERSTANDING LSOF COMMAND OUTPUT

01 COMMAND, PID, USER
The first three columns display the command that corresponds to the file, the process ID (aka PID) and the user who owns the process.

02 FD – FILE DESCRIPTOR
The number in front of the flag(s) is the file descriptor number used by the process associated with the file. The 'u' means the file is open with read and write permission; 'r' means the file is open with read permission; 'w' means the file is open with write permission.

03 TYPE
This indicates the file type. In Linux almost everything is a file, but with different types: 'REG' means a regular file that shows up in a directory, and 'DIR' means a directory.

04 NODE
This column indicates a unique identifier for the file node (usually the kernel vnode or inode address). This is sometimes also known as the node-id.
$ sudo sed --in-place 's/NoDisplay=true/NoDisplay=false/g' *.desktop

Our next approach is to speed up the file system. You might know that inodes are data structures used to represent file system objects. Linux systems maintain an inode cache to speed up the file-loading process. Computers with 1GB or more of RAM will benefit from having the kernel shrink the inode cache less aggressively. The price you pay for this is that certain system items will remain longer in RAM, which decreases the amount of memory available for general tasks; that is why it is only recommended for computers with at least 1GB of RAM. To change the inode caching, launch a terminal window and open the file /etc/sysctl.conf in a text editor.
FIVE UTILITIES TO MONITOR LINUX PERFORMANCE

Above This application displays all the start-up programs scheduled to run. You can choose to remove programs based on your usage
$ sudo vi /etc/sysctl.conf Scroll to the bottom of the text file and add your cache parameters to override the defaults. Just add these lines:
Below Sysctl.conf file can be edited to add your cache parameters in order to override the defaults
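The parameter lines themselves are missing from this copy of the article. A commonly used setting (an assumption here, not the magazine's exact values) that makes the kernel reclaim the inode/dentry caches less aggressively is:

```shell
# Append to /etc/sysctl.conf — lower values keep inode/dentry caches in RAM longer
vm.vfs_cache_pressure=50
```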
Finally, close the text file and reboot. Next to examine is the desktop environment. Modern desktops have several graphical features enabled by default, but they can contribute to sluggish performance. Disable them by switching to a 2D desktop environment like Xfce; or, if you want to keep the default, the major desktop environments (namely KDE and GNOME) have options to turn off special effects. GNOME users can force the fallback Classic mode, and KDE users can go to System Settings and turn desktop effects off. KDE users should also turn off Nepomuk: it's not essential, and it takes up a lot of resources.
GET DETAILED STATUS REPORTS WITH COLLECTL
01
CPU usage monitoring
By default, collectl displays information about three major aspects of computing, ie CPU, disks and the network interface, at one-second monitoring intervals. If you want to limit the output to only CPU details, you can use:

$ collectl -sc

If you have multiple CPUs, you can use -sC instead. It will output multiple lines together, one for each CPU:

$ collectl -sC
02
Memory monitoring
To display memory details, you can use the -sm option, like this:

$ collectl -sm

This shows you the free memory, buffered memory, cache and a few other details. However, if you want further details, use -sM:

$ collectl -sM
03
Check disk usage
Building upon the previous two options, you might have guessed that to view only the disk-related details, you can use:

$ collectl -sd

This shows you the number of reads/writes being performed per second. To get into more detail, use -sD:
Five ways to monitor all your system resources

01 VmStat
The Linux vmstat command is used to display statistics of virtual memory, kernel threads, disks, system processes, I/O blocks, interrupts, CPU activity and much more. vmstat is provided by the procps package, which is installed by default on most Linux systems.

02 Netstat
Netstat is a command line tool for monitoring incoming and outgoing network packet statistics as well as interface statistics. It is very useful for every system administrator who needs to monitor network performance and troubleshoot network-related problems.

03 Iotop
Iotop is similar to the top and htop commands, but it comes with an accounting function to monitor and display real-time disk I/O and processes. This tool is incredibly useful for finding the exact processes performing the most disk reads and writes.

04 Collectl
Collectl is a feature-rich command line utility that can be used to collect performance data describing the current system status. Unlike most other monitoring tools, collectl does not focus on a limited number of system metrics; instead it gives information on many different types of system resources, such as CPU, disk, memory, network, sockets, TCP, inodes, NFS, processes, quadrics and more.

05 Lsof
Lsof stands for List Open Files. This command is used to display a list of all open files and the processes that opened them. The open files include disk files, network sockets, pipes and devices. With this command you can easily identify which files are in use.
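A few common lsof invocations, shown here as illustrative examples rather than commands taken from the article (the username and PID are placeholders):

```shell
lsof /var/log/syslog   # which processes have this file open?
lsof -u mtsouk         # all files opened by one user
lsof -i :80            # which process is using TCP/UDP port 80?
lsof -p 1234           # all files opened by process ID 1234
```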
$ collectl -sD
Speed up Linux
Feature
FIVE TOOLS TO SPEED UP A LINUX INSTALLATION
Some of the best third-party tools to help speed up Linux

01 Gconf-cleaner
Gconf-cleaner can be compared to a Windows registry cleaner. This tool goes through the Gconf database (Gconf Registry – a configuration database for GNOME) and removes unused and obsolete entries. It is easy to use and, depending upon how old your system is, will find quite a lot of entries that can be removed.

02 FSlint
This is a utility to find and fix common errors in file storage. It can find things like duplicate files, problematic filenames and bad symlinks. FSlint is available as a command line utility and with a GUI, so you can choose the mode that suits you.

03 Synaptic package manager
It is important to keep track of used space, so you know you are not about to completely fill up the hard disk. Synaptic package manager can help you find out how much space each package is taking up. You can easily install it and check out the space used by each package on the system.

04 Thermald
Thermald tries to prevent the CPU from overheating by using Intel functions available in the Linux kernel. It's worth mentioning that thermald applies its various cooling methods only when the temperature reaches a certain threshold, so you may not notice a difference.

05 TLP
TLP is a power management tool that brings you the benefits of advanced power management for Linux without the need to understand every technical detail. TLP comes with a default configuration already optimised for battery life, so you can just install it and forget it. Nevertheless, TLP is highly customisable, should you want to tune it to your specific requirements.

BEST FOSS
As well as tweaking your system, there are third-party tools that can help optimise it

Tweaking the system to use fewer resources may not be enough to make it run faster, so you might want to install third-party tools that can further optimise your system, or take a look at the software you are running on the system and swap it for something that isn't so hungry for resources. Let us look at a few third-party tools to clean up your system. First of these tools is Cruft. It is a command-line tool that will look through your system and remove anything that shouldn't be there. It gathers most of its results from the dpkg database, in addition to a list of 'extra files' that appear during the lifetime of various package installations and removals. Next is GtkOrphan. It lets you easily remove orphaned packages from your Debian system. Depending on how much has been orphaned, this tool can also clear up quite a bit of space. Another tool for doing this is BleachBit; not only does this clear various caches, cookies and Internet history, it also shreds temporary files, deletes logs and discards various types of junk that you probably weren't even aware were on your system. BleachBit has an outstanding GUI that enables you to easily choose exactly what it is you want to clean up. As we discussed, swapping your current software for something that is light in resource usage can help make your system run faster. For example, Chrome is known to be memory-intensive, so you can use another browser like Midori, which is relatively fast. You can use lighter applications like AbiWord (word processing) or Gnumeric (spreadsheets) instead of LibreOffice, and so on.

Above Once you install Cruft, you can use it to clear all files left from previous installations

OPTIMISE LIBREOFFICE MEMORY
01 Why make a change?
LibreOffice is installed by default on most of the common Linux distributions and, to be honest, it feels like a lot of work to uninstall it and install another word processor just to have one run a little bit faster. So, if you think the performance is just a little sluggish but it's not a deal-breaker, it is better to change the memory settings and see if that speeds things up.

02 Open LibreOffice settings
To start adjusting the settings to help improve performance, open up LibreOffice and then choose LibreOffice Writer. Once it opens, click on Tools from the top menu bar, and then choose Options. In the window that opens next, select Memory from the top set of options. Here you can view all of the settings that are related to the memory usage of LibreOffice.

03 Changing the settings
When you are in the Memory section, increase the Graphics cache to 100MB and set the Memory Per Object to 20MB. This will increase memory usage a little, but it makes LibreOffice snappier. Also, if you use LibreOffice often, enable the Quick Starter, so you have it available in the system tray. This makes sure LibreOffice is preloaded in memory and therefore starts quicker.
LIGHTWEIGHT DISTROS
Here we take a look at some lightweight distros to install on your old hardware to get lightning-fast processing

ARCH LINUX
Arch Linux is an independently-developed Linux distribution with a strong focus on simplicity, minimalism and code elegance. One of the main reasons that Arch Linux is a lightweight and fast distro is that it is installed as a minimal base system, and users can add packages as and when they need them. This is in striking contrast to other distros where everything is installed at once, making the system run slower and forcing you to keep an eye out for tricks to optimise the setup.
Arch Linux has a rather interesting approach towards releases. It is based on a rolling-release system, which allows a one-time installation with continuous upgrades, without you ever having to reinstall the distro and without having to perform the elaborate procedures involved in system upgrades from one release version to the next. By issuing just the one command, an Arch system is kept up-to-date. This way you get to keep a lean and updated system without much effort at all on your part.
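That one command is pacman's full system upgrade (standard Arch usage, not quoted from the article):

```shell
sudo pacman -Syu   # sync the package databases and upgrade every installed package
```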
ELEMENTARY OS Elementary OS started out as a set of themes and applications designed for Ubuntu, and later turned into its own Linux distribution. It is, essentially, an Ubuntu-based distribution, so it is compatible with Ubuntu repositories and packages. However, the similarities with Ubuntu are largely limited to handling software packages and do not extend much beyond that. The core focus of Elementary OS is to provide a great user experience for users from anywhere in the world.
It has mastered a global aesthetic by streamlining the user interface and minimising the need to access the terminal. This may feel like going against the GNU philosophy, but elementary OS does away with fully-fledged configuration and instead focuses on a gentle learning curve. With a minimalistic design that is not demanding in terms of resources, Elementary OS is a perfect combination of a fast and easy Linux distribution that you should definitely check out.
PUPPY LINUX
Puppy Linux was developed while keeping in mind the need for lightweight distros that can power old computers, or computers without large storage media like hard disk drives. Its tag line even reads "Don't throw away your PC – make it new with Puppy!". With a size of ~100MB, this distro can easily be downloaded, installed and booted from a variety of media like CDs/DVDs and USB flash drives. Puppy Linux has a tiny boot time of around 30-40 seconds, making it one of the fastest-booting distros available, and it's certainly an excellent option for those who cannot upgrade their hardware, or who prefer using older systems. Another striking benefit of Puppy Linux is that since it can detect most hardware automatically, not much technical knowledge is required. Also, there is a wide range of applications: word processors, spreadsheets, Internet browsers, games, image editors and many utilities. Extra software can be added in the form of dotpets.
Data files and information
Tutorial
Mihalis Tsoukalos is a Unix administrator, a programmer (for Unix and iOS), a DBA and also a mathematician. He has been using Linux since 1993
Resources: text editor, GCC compiler
Systems programming: Data files and information
Harness the necessary system calls and structures to help you work with Linux system files and information
Tutorial files available: filesilo.co.uk
This tutorial will reveal the system calls and techniques that enable you to deal with system files and information. Please bear in mind that most system files are in plain text format; however, editing system files with a text editor can be dangerous and should be avoided by amateur users and programmers. As you will have to deal with files, file permissions play a key role in what you can or cannot access and do with them. But first, some information about processes is required. Each process really has two user IDs: the effective user ID and the real user ID. Similarly, each process has two group IDs: the effective group ID and the real group ID. To make things even more complex, most of the time the kernel checks only the effective user and group IDs. You might ask: what is the point of having real user and group IDs? Suppose that there is a server process that has to watch all system files, regardless of the users who created them. Such a process must run with root privileges, because only the root user is guaranteed to be capable of looking at any file. If a request comes from a different user (eg mtsouk) to access a file, then the server process temporarily changes its effective user ID
from root to mtsouk before trying to perform the requested job. If mtsouk is not allowed to access the file, then an error will be generated. After finishing all tasks demanded by the mtsouk user, the server process will change its effective user ID back to root. Many server processes work this way.
Why deal with system files? Most tasks on a Linux system require one or more programs or processes to deal with Linux user and group permissions, various kinds of system information, TCP/IP services, etc. Therefore most, if not all, programs have to access, read or modify system files, which is generally done behind the scenes without the user knowing what is going on, unless there is an error somewhere.
The /etc/passwd file
You will understand how important /etc/passwd is once you realise that each time you execute the ls -l command, /etc/passwd is accessed. The reason is to make sure that the user running ls has the right user or group permissions to access the related files or directories.
C memory allocation
One of the reasons that C is fast is that it deals directly with memory. However, many bugs in C programs come from incorrectly freeing memory and then trying to access it afterwards. Such operations make C programs crash badly. There are various C functions that enable you to allocate memory: malloc(), calloc(), valloc(), realloc() and reallocf(). In practice, most of the time you are going to use malloc(3) to allocate memory and free(3) to free it. Should you wish to allocate memory for manually copying an existing string into another, you can use the following method:

int originalStringLength = strlen(originalString);
char *newString = malloc((originalStringLength + 1) * sizeof(char));

Please note that the extra memory space is for the '\0' character that is used for properly terminating a C string. Note that '\0' is a single character and not a string. However, there are functions like strdup() that create a copy of an existing string in a new string, automatically allocating the necessary memory so you do not have to allocate it yourself. The unary operator sizeof returns the size of a variable or data type and is extremely useful for correctly calculating the memory space that needs to be allocated for storing a variable. Therefore, if you want to dynamically create memory space for an array of length 10 that holds double numbers using malloc(), you should use the following method:

double *myArray = malloc(10 * sizeof(double));
This is extremely valuable when trying to execute the ls -lR command, which recursively lists all subdirectories. The password file contains entries for each user of the system. The entry for the mtsouk user is the following:
The mallocFree.c file in your tutorial resources illustrates a relevant C example. Its segmentation fault happens because the second variable (aString) no longer has valid memory associated with it after it is overwritten with the following code:
aString = '\0';
A colon separates the different fields in the /etc/passwd file. First, the name of the user is defined, whereas the last field is the login shell of the user. The field before the login shell defines the home directory of the user. Please note that instead of a shell, you can define the absolute path of any command, which usually happens for special types of users such as the owners of server processes like a web or a database server. As an example, the /etc/passwd entry for the www-data user – that is, the owner of the Apache web server – is the following:

www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin

The /usr/sbin/nologin binary is a login shell replacement for accounts that should be disabled for security reasons. The third field is the numerical value of the user ID, whereas the fourth field holds the numerical value of the default group ID of the user. As you will see in a while, in order to translate the numerical value of a group ID into something more meaningful, you will have to access /etc/group.

It is extremely important to remember that if a user has a user ID with the value of 0, then it automatically has root privileges; therefore it is not the username that makes the root user but its user ID! The following C program (uid0.c) finds all users inside /etc/passwd that have a user ID of 0:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAXLINESIZE 512

int main(int argc, char **argv)
{
    FILE *INPUT;
    char buffer[MAXLINESIZE];

    /* Open /etc/passwd read-only */
    INPUT = fopen("/etc/passwd", "rt");
    if (INPUT == NULL)
        return 1;

    /* Read the file line by line */
    while (fgets(buffer, sizeof buffer, INPUT) != NULL) {
        /* Ignore comments in /etc/passwd */
        if (buffer[0] == '#')
            continue;
        /* The user ID is the third field */
        char *username = strtok(buffer, ":");
        char *passwd = strtok(NULL, ":");
        char *uid = strtok(NULL, ":");

        if (strcmp(uid, "0") == 0) {
            printf("User %s has root privileges!\n", username);
        }
    }
    fclose(INPUT);
    return 0;
}
Although the following AWK one-liner can perform the same task using less code, it is still more educational to implement the same solution in C:

$ awk -F: "/:$(id -u root):/{print \$1}" /etc/passwd

Let us talk more about the code in uid0.c. The code reads /etc/passwd line by line and gets the user ID for each entry that does not begin with a '#' character, which denotes a comment. If a user ID value is equal to zero, then the related username is printed on the screen. The process continues until the end of the file is reached. The strtok() function searches its input until a ':' character is found. The returned token, which is the username, is stored in the username variable. In order to get the next two tokens and continue processing the rest of the string, you have to pass NULL as the first argument to strtok(), because strtok() maintains a static pointer into the previously passed string – if you pass a new pointer to it, the old internal reference gets discarded. Note that strtok() does alter the contents of the buffer variable: it writes a '\0' character over each delimiter it finds. After you get the user ID as a string, you compare it with the '0' string using strcmp(). If there is a match, you print the username of the user that has a user ID of 0. Please note that reading /etc/passwd without locking it first is not a problem, but writing to it without any locking should be avoided. Additionally, uid0.c does not access /etc/passwd using any dedicated system calls, which is unconventional.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <pwd.h>

int main(int argc, char **argv)
{
    struct passwd *p;
    if ((p = getpwuid(getuid())) != NULL)
        printf("User shell: %s\n", p->pw_shell);
    return 0;
}

This program (userShell.c) finds out and prints the shell of the current user using the appropriate system calls that access /etc/passwd. This is a safer, easier and recommended way of accessing /etc/passwd. Additionally, since it does not have to manually parse /etc/passwd line by line, it uses much less C code than uid0.c. The getuid() function returns the real user ID of the current user, whereas geteuid() returns the effective user ID of the current user. As both system calls need a program in order to be executed, what is actually returned has to do with the properties of the current process. The adjacent image shows the various fields of the passwd structure that is returned by the getpwuid(3) system call used in userShell.c.

Right Reading /etc/passwd should be fine, but don't write to it without first locking it

Shadow password support
As you can see from the next output, /etc/passwd can be read by anyone, whereas /etc/shadow can only be read by root or users that belong to the shadow group:

$ ls -l /etc/passwd /etc/shadow
-rw-r--r-- 1 root root   1953 Apr 26  2015 /etc/passwd
-rw-r----- 1 root shadow 1245 Apr 26  2015 /etc/shadow

As a result, /etc/passwd no longer keeps the encrypted password, for security reasons – that is why the second field of /etc/passwd is 'x', which is used as a placeholder. This functionality has been transferred to the /etc/shadow file, which has at least two fields: the username and the encrypted password. However, /etc/shadow usually contains additional fields that are related to password ageing. The structure that holds the shadow password information is called spwd. Should you wish to support shadow passwords in your programs, you should use some or all of the following system calls: getspnam, getspnam_r, getspent, getspent_r, setspent, endspent, fgetspent, fgetspent_r, sgetspent, sgetspent_r, putspent, lckpwdf and ulckpwdf. Further discussion of shadow passwords is beyond the scope of this tutorial.

The /etc/group file
The format of the /etc/group file is similar to the following:

stapdev:x:116:mtsouk
stapusr:x:117:mtsouk
stapsys:x:118:mtsouk

The previous output says that mtsouk is a member of the stapdev, stapusr and stapsys groups. Each group has a unique number associated with it. The group structure, which is quite analogous to the passwd structure, is defined as follows:
struct group {
    char  *gr_name;    /* group name */
    char  *gr_passwd;  /* group password */
    gid_t  gr_gid;     /* group ID */
    char **gr_mem;     /* group members */
};

The gr_name variable holds the group name, whereas gr_gid holds the group ID. The gr_mem variable is a NULL-terminated array of pointers to the names of the group members. The gr_passwd entry holds a C string that contains the password of the group, which is rarely used nowadays.
WorldMags.net
WorldMags.net The userGroups.c file, also provided in your FileSilo tutorial resources, accesses /etc/group in order to print all users that belong to a given group which is provided as a command line parameter. Please note that you give either the group name or the group ID. A simple program execution produces the following output:
$ ./userGroups 116 Finding information about _116_. The group is stapdev with ID (116). The members of this group are: mtsouk
}
If you want to find out all groups that a given user belongs to, you can access the man page of the getgrouplist(3) system call which returns the list of groups to which a user belongs. Inside the man page, you will also find a small C program that demonstrates the use of getgrouplist(3). The getegid() function returns the effective group ID. The getgid() function returns the real group ID of a process.
The /etc/services file

The /etc/services file contains information about the well-known services available on the Internet. The following lines from /etc/services are presented to help you understand its format:
http    80/tcp    www    # WorldWideWeb HTTP
http    80/udp           # HyperText Transfer Protocol
The first field is the name of the service, followed by the port number and the protocol used. The policy of IANA is to assign a single well-known port number to both the TCP and UDP protocols, even if the service does not support UDP operations. As you can see, a line can also contain comments. It is very important to remember that nothing prohibits you from using port number 80 for FTP or SSH. However, not using the default service number requires you to explicitly declare the port you want to use in your client program. A simple example is the http://localhost:8080/ URL, which uses a non-default port number for accessing a web server. You can access the information in /etc/services using the following system calls: getservbyname(3), getservbyport(3), getservent(3), setservent(3) and endservent(3). The servent structure that is defined in netdb.h is used by the getservbyname(3), getservbyport(3) and getservent(3) system calls to hold information about entries from the services database. The following C code (getServByPort.c) illustrates the use of the getservbyport(3) system call:
#include <stdio.h>
#include <stdlib.h>
#include <netdb.h>
#include <netinet/in.h>

int main(int argc, char **argv) {
    struct servent *applName;
    int port = 80;
    char *name;

    applName = getservbyport(htons(port), NULL);
    if (applName == NULL)
        printf("unknown application!\n");
    else {
        name = applName->s_name;
        printf("Port number %d is %s\n", port, name);
    }
    return 0;
}
Please note how you translate the port number between host and network byte order using the htons() function – a forthcoming tutorial about network programming will explain what host and network byte order is and why such a translation is needed.
About Setuid

Many programs change their effective user ID in order to perform tasks that would otherwise be difficult to accomplish. The /usr/bin/passwd binary enables a normal user account to write to /etc/passwd and /etc/shadow with the help of setuid():
$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 54192 Nov 21  2014 /usr/bin/passwd

The following shows a C program (setUserID.c) that changes its effective user ID.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
C structures and struct

This is the second time in this series of articles that we have discussed C structures, so it is time to talk more about C structures and the struct keyword. A structure is a user-defined data type available in C that enables you to combine data items of different data types, including other structures. You can access any individual structure member by using the member access operator, which is a full stop. You put the full stop between the structure variable name and the structure member that you want to access. You can also use pointers to structures, as the following code shows:
struct list {
    int value;
    struct list *next;
} a_list_node;

Here, you create a linked list node type where each node has a pointer to the next node in the list; note that inside the structure the pointer must be declared as struct list *, because the struct tag is the type's full name. The identifier after the closing brace declares a variable of the newly defined type (to create a true alias for the type name you would use typedef instead). Additionally, you can use -> instead of the full stop to access a structure member through a pointer to a structure. Therefore, (*aPToStruct).i = 15 and aPToStruct->i = 15 are equivalent.
www.linuxuser.co.uk
Data files and information
Tutorial
void touch(char *filename) {
    int fd = open(filename, O_RDWR | O_CREAT, S_IRUSR | S_IRGRP | S_IROTH);
    if (fd < 0)
        printf("Error creating %s.\n", filename);
    else
        close(fd);
}

int main(int argc, char **argv) {
    uid_t myID = getuid();
    printf("Real UID\t= %d\n", getuid());
    printf("Effective UID\t= %d\n", geteuid());

    // setuid() to root
    printf("Becoming Root!\n");
    seteuid(0);
    printf("Real UID\t= %d\n", getuid());
    printf("Effective UID\t= %d\n", geteuid());

    // Create an empty file as root
    touch("asRoot");

    // Go back to the normal user
    setuid(myID);
    printf("Real UID\t= %d\n", getuid());
    printf("Effective UID\t= %d\n", geteuid());

    // Create an empty file as the normal user
    touch("after");
    return 0;
}

The setuid(2) function is used for setting the effective user ID of the calling process. Before executing setUserID, you should change its ownership and permissions as follows, using root privileges (the chown must come first, because changing a file's owner clears its setuid bit):

$ sudo chown root setUserID
$ sudo chmod 4755 setUserID
$ ls -l setUserID
-rwsr-xr-x 1 root mtsouk 7864 Nov  4 20:26 setUserID

If you execute setUserID as mtsouk, it will generate the following output:

$ ./setUserID
Real UID        = 1000
Effective UID   = 0
Becoming Root!
Real UID        = 1000
Effective UID   = 0
Real UID        = 1000
Effective UID   = 1000

The touch() function is defined and used inside setUserID.c for creating empty files:

$ ls -l after asRoot
-r--r--r-- 1 mtsouk mtsouk 0 Nov  4 20:31 after
-r--r--r-- 1 root   mtsouk 0 Nov  4 20:31 asRoot

Above The tm structure breaks down the calendar date and time into its core components
Programs that use setuid() are potential security threats and should always be carefully examined before going into production systems. As you can understand, a process run by root can change its effective ID to mtsouk, but a process run by mtsouk cannot change its effective ID to root.
Accessing /etc/resolv.conf

The main reason for someone to access /etc/resolv.conf is to read the name servers in order to translate a hostname into an IP address. This part will implement a utility in C that prints the DNS servers used by the current machine. Some of the functions used relate to network programming and might look quite strange to you, so don't worry if it all seems a little unfamiliar.
#include <stdio.h>
#include <resolv.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv) {
    // Initialize the resolver structure
    res_init();

    printf("Total number of DNS servers: %d\n", _res.nscount);
    res_state res = &_res;
    int i = 0;

    // Get the data you want and print it!
    for (i = 0; i < _res.nscount; i++) {
        sa_family_t family = res->nsaddr_list[i].sin_family;
        if (family == AF_INET)        // IPv4 address
        {
            char str[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &(res->nsaddr_list[i].sin_addr.s_addr),
                      str, INET_ADDRSTRLEN);
            printf("DNS Server %i: %s\n", i, str);
        }
        else if (family == AF_INET6)  // IPv6 address
        {
            char str[INET6_ADDRSTRLEN]; // String representation of address
            inet_ntop(AF_INET6, &(res->nsaddr_list[i].sin_addr.s_addr),
                      str, INET6_ADDRSTRLEN);
            printf("DNS Server %i: %s\n", i, str);
        }
    }
    return 0;
}

Here is the source code of the nameServers.c program. You'll find that running the program produces output similar to the following:
$ ./nameServers Total number of DNS servers: 3 DNS Server 0: 109.74.192.20 DNS Server 1: 109.74.193.20 DNS Server 2: 109.74.194.20
After you call the res_init() function, the resolver structure is initialised and stores all the information you are going to need. The list of DNS servers is stored in a C structure called nsaddr_list. As nsaddr_list[0] is not a string, you have to use inet_ntop() to convert its sin_addr.s_addr value to a printable string before actually printing its contents. The _res structure that stores all the information after the res_init() call is defined in the resolv.h file.

Time and date system calls

A Linux system provides functions that enable you to deal with date and time with the help of the tm data structure. The image opposite shows the various fields of the tm data structure as taken from the gmtime(3) man page. As you can also see, the tm_sec field can take values from 0 to 60 in order to allow for leap seconds – once again, these small details make a big difference. The strftime() function works like printf() for time values, as it supports various arguments that enable you to customise its output. The strptime() function converts a string representation of time and date to a tm structure and is useful for parsing date and time input.

#define _XOPEN_SOURCE
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(int argc, char **argv) {
    time_t t;
    struct tm *myTime;
    char timeStr[128];

    // Get the current time
    time(&t);
    // Convert it and store it in a tm structure
    myTime = localtime(&t);

    // Get multiple kinds of output from the same tm structure
    strftime(timeStr, 128, "Date: %D", myTime);
    printf("%s\n", timeStr);
    strftime(timeStr, 128, "Time: %R:%S", myTime);
    printf("%s\n", timeStr);

    if (strptime("16 Nov 2015 15:21:30", "%d %b %Y %H:%M:%S", myTime) == NULL)
        printf("Error executing strptime!\n");
    strftime(timeStr, 128, "Date and Time: %a %d %b %Y, %T", myTime);
    printf("%s\n", timeStr);
    return 0;
}

The above illustrates a C program (timeFunctions.c) that not only makes use of the strftime() function, but also uses the strptime() function. Please make sure that your timeStr variable has the required length to store the return value of the strftime() function.
Global, local and static variables

C supports global, local and static variables. Local variables are only accessible within the block of code in which they are declared. As a consequence, they usually have simple names that have no special meaning. Global variables are accessible at any point after their declaration. You should carefully choose a descriptive and appropriate name for them. A static variable inside a function keeps its value between invocations, whereas a static global variable or a function is 'seen' only in the file it is declared in. If you recall, it was said that strtok() maintains a static pointer to the previously passed string. In other words, the static pointer was used in order to make the strtok() function use the same buffer between different invocations. The buffer is allocated the first time you call strtok().
Visualise your data with Chart.js

Chart.js is a JavaScript library that helps you draw gorgeous graphs of all kinds on your website
Nitish Tiwari
is a software developer by profession and an open source enthusiast by heart. As well as writing for leading open source magazines, he helps firms set up and use open source software for their business needs
Resources

Chart.js
chartjs.org
Data visualisation is one of the most important considerations when you need to convey a message to your audience in the clearest manner possible. Whatever the message may be, if you want it to be instantly understood it is vital to have the data plotted as charts and graphs instead of plain tables. Since humans are wired to understand images better than text, data visualisation will almost always save the day for you and your presentation. It's all very well understanding the theory, but that raises the question of how to do it. There are tons of data visualisation tools out there that cost a lot and do not let you even get a glimpse of what they are capable of before you actually pay for them. Thankfully, the open source world comes to your rescue. There are several open source data visualisation tools available that you can play with, explore and use to illustrate your data in the best possible way. In this tutorial we will take a look at one such tool – Chart.js. It is easy to use and offers a great deal of control over how the graphs and charts look and feel when they are plotted. Please note that while using Chart.js you may have to fiddle with JavaScript code snippets, but it is very easy to handle and can be mastered by anyone. Let us dive in and get started with the installation process.
01
Installation
To install the Chart.js library, just download the JavaScript library from the official Chart.js GitHub repository and then include the chart.js file wherever you’d like to use it:
<script src="Chart.js"></script>

Note that you need to pass the proper path of the chart.js file in your file system when including the library file. Instead of a manual download, you can also use JavaScript package managers such as NPM or Bower. As you may already know, NPM is commonly used to manage Node.js modules, but it also supports front-end libraries, while Bower was created solely for front-end libraries. The biggest difference is that NPM uses a nested dependency tree, while Bower requires a flat dependency tree, putting the burden of dependency resolution on the user. Coming back to Chart.js, here is how to grab it using Bower:
$ bower install Chart.js --save If you want to use NPM:
Right Chart.js lets you draw common graphs with just a tiny bit of code. This is a pie chart with custom tool tips
$ npm install chart.js --save

Also, Chart.js is available from a CDN: https://cdnjs.com/libraries/Chart.js.
02
Create your first chart
Once you have the chart included, you can start plotting graphs. The first step is to create a canvas tag and assign an ID to it. Later, you need to get the element using the ID assigned to the canvas and use it to instantiate the Chart class. For example, create a canvas with the ID myChart at the location you'd like to draw the graph in the HTML file:

<canvas id="myChart"></canvas>
Then, in JavaScript, get the context of the canvas element using the ID, and instantiate the Chart class using the context you got in the first step:
var ctx = document.getElementById("myChart").getContext("2d");
var myNewChart = new Chart(ctx).PolarArea(data);

If you prefer jQuery, the equivalent is:

var ctx = $("#myChart").get(0).getContext("2d");
var myNewChart = new Chart(ctx);

If you noticed, after creating the Chart object, the method PolarArea() is called. This draws a polar area chart with the data passed as the argument to the PolarArea() method.
03
Line charts

One of the most commonly used charts, the line chart plots data points and then connects them with a line. It is generally used to show trend data. If more than one line chart is plotted in a single window, it can also be used to show a comparison of data sets. To draw a line chart, you can just call the Line() method on the Chart object. For example:

var myLineChart = new Chart(ctx).Line(data, options);

Now, there are two arguments for the Line() method. Let us get an understanding of their usage. The first argument, data, holds the data points, labels and other metadata about how the graph should look and feel once the points are plotted. Here is a sample dataset:

var data = {
    labels: ["January", "February", "March", "April", "May", "June", "July"],
    datasets: [
        {
            label: "My First dataset",
            fillColor: "rgba(220,220,220,0.2)",
            strokeColor: "rgba(220,220,220,1)",
            pointColor: "rgba(220,220,220,1)",
            pointStrokeColor: "#fff",
            pointHighlightFill: "#fff",
            // … the remaining point options and the data array
            // follow here in the full listing …
        }
    ]
Above Use the global default values and just change the parts you want, as in this example of a false bezierCurve chart
};

The options argument holds information about other aspects of the graph, such as whether the line between the data points should be curved or not. You can even set the radius of the point dots in pixels. Note that it is not mandatory to set all the values; you can just set the values you'd like to change. The rest of the fields are taken from the global default values. For example:
var myLineChart = new Chart(ctx).Line(data, {
    bezierCurve: false
});

This creates a chart using all the default options, with just the bezierCurve option set to false, meaning the lines connecting data points will be straight lines.
04
Bar chart
Like line charts, bar charts are a very popular choice when the user needs to display data points spread over time or some other parameter. Bar charts are generally rectangular bars with their height corresponding to the data point (if the bar is on the x-axis) or their length corresponding to the data point (if the bar is on the y-axis). Multiple bars can be plotted side-by-side to make comparisons. Here is how you can plot a bar chart in Chart.js:
Global configuration

Along with the global prototype methods, the global configuration is also available for you to set up. This allows you to change options globally across chart types, avoiding the need to specify options for each instance or the defaults for a particular chart type. You can find it in the chart.js file.
var myBarChart = new Chart(ctx).Bar(data, options); Note that the data structure used for a bar chart is similar to the one used in line charts.
05
Radar chart
A radar chart is a way to show data as a two-dimensional chart. In these kinds of charts, three or more variables are represented on axes starting from the same point. Another quality of these charts is that the relative position and angle of the axes is typically uninformative. That means you can use radar charts to plot more data points compared to bar or line charts. The process for plotting radar charts using Chart.js is no different; you just need to call the Radar() method:
var myRadarChart = new Chart(ctx).Radar(data, options);

To provide context for what each point means, we need to include an array of strings that is shown around the chart (called labels). For the radar chart data, we have an array of datasets. Each of these is an object with a fill colour, a stroke colour, a colour for the fill of each point, and a colour for the stroke of each point. We also have an array of data values. The label key on each dataset is optional, and can be used when generating a scale for the chart. The dataset follows the same structure as the line chart dataset shown earlier.

06

Pie and doughnut charts

Pie charts are excellent at showing the relational proportions between data. They are generally used to plot the percentages of different items, and as such the sum total of all the items comes out to 100. As we saw earlier, the angle doesn't matter in radar charts, but pie charts use the angle (or the arc) of each segment to show the proportional value of each piece of data. A popular variation of pie charts is the doughnut chart. The major difference is that the inner portion of the pie chart is filled, while for a doughnut chart it is empty. Hence, both charts effectively use the same class in Chart.js, but have one different default value – percentageInnerCutout, set in the global configuration file. This equates to what percentage of the inner area should be cut out, and defaults to 0 for pie charts and 50 for doughnuts. Though there are different aliases for both of the charts, they differ only in the default value.
var myPieChart = new Chart(ctx[0]).Pie(data,options); var myDoughnutChart = new Chart(ctx[1]). Doughnut(data,options);
07
Polar area chart
Polar area charts look similar to pie charts, but there is one major difference – the radius of various segments changes depending upon the values, while the angle remains the same. Pie charts have the same radius for all the segments and the angle varies depending on the values. To plot a polar area chart using Chart.js, you need to use the PolarArea() method:
new Chart(ctx).PolarArea(data, options);

The data structure used here is fairly simple: each array element has a value, a default colour, a highlight colour and the label to be displayed. As with other charts, you can keep the default options or change them as you wish.
08
Prototype methods
For each chart, there is a set of global prototype methods on the shared ChartType, which you may find useful. These are available on all chart objects created with Chart.js. Here, for example, let us use a line chart object:
var myLineChart = new Chart(ctx).Line(data);

The first method is clear(). This clears the chart canvas on which myLineChart is drawn. You can use it between animation frames to clear the frame and render again:

myLineChart.clear();

The next method is stop(), which is used to stop the current animation loop. The frame is paused once you call this method:
myLineChart.stop(); Use resize() to manually resize the canvas element. This is run each time the browser is resized, but you can also call this method manually if you change the size of the canvas nodes container element:
The last method we will discuss is destroy(). This will clean up any references stored to the chart object within Chart.js, along with any associated event listeners attached by Chart.js:
myLineChart.destroy(); There are a few other methods available as well. In addition to these generic methods, there are several chart-specific prototype methods. Space constraints make it difficult to cover all of them here, but you can look them up in the official Chart.js documentation.
09
Above This scatter chart is a result of community extensions
// Creates a line chart in the same way new Chart(ctx).LineAlt(data);
myLineChart.resize();
Extend existing chart types
As we all know, open source software not only means being able to freely use and learn stuff, but also being able to extend and build upon the existing elements. On the same lines, let us see how you can extend an existing chart class with extra functionality. Let’s say, for example, you want to run some more code while initialising every line chart:
Chart.types.Line.extend({ // Passing a name registers this chart in the // Chart namespace in the same way name: “LineAlt”, initialize: function(data){ console.log(‘My Line chart extension’); Chart.types.Line.prototype.initialize. apply(this, arguments); } });
10
Adding new chart types
If you’re a power user, and are not opposed to having a bit of an explore of things, Chart.js provides easy ways to add new chart types to the existing library. The format is relatively simple. You just need to pass in a name and provide defaults for the new chart type. There are a set of utility helper methods under Chart.helpers, including things such as looping over collections, requesting animation frames, and easing equations. On top of this, there are also some simple base classes of Chart elements. These all extend from Chart.Element, and include things such as points, bars and scales. There are already a handful of community extensions listed on Chart.js. One of them is scatter chart. Take a look here: dima117.github.io/Chart.Scatter.
Render your first 3D object in MonoGame

Creating 3D games used to be incredibly complex; MonoGame is ready to make things easier
Tam Hanna
has been in the IT business since the days of the Palm IIIc. Serving as a journalist, tutor, speaker and author of scientific books, he has seen every aspect of the mobile market more than once
Resources

MonoGame
monogame.net
Back in the day, Microsoft wanted to widen the appeal of its Xbox 360 console by letting small studios release a large variety of indie games. At the time, releasing the native DirectX SDK was out of the question for a variety of reasons: we shall restrain ourselves and simply state that the immense complexity of this API is well known in the community. This problem was addressed by a .NET-based product called XNA. It provided developers with an attractive abstraction layer that simplified the creation of visually appealing three-dimensional content. Evolution led to a change of mind at Microsoft: the Xbox One no longer supports XNA. Rumour has it that this change was to discourage incapable developers; the rise of Android and iOS had led to a flood of third-rate games. Due to the popularity of the API, however, an open source project established itself. The MonoGame team promised to reimplement the entire XNA pipeline for a variety of platforms. The product has now reached an amazing level of maturity. Developers working on games that are not purely graphics-driven can benefit from its deployment: the time saved on the engine can be invested in new game modes, better content or a larger marketing effort, for example.
The MonoGame team released a new version of its product in April of this year – you can get started by paying a visit to http://www.monogame.net/2015/04/29/monogame-3-4/. Once there, click on the MonoGame for Linux link in order to download a 26MB archive which should then be extracted to a convenient place. It contains a large selection of shell scripts, which must be made executable before the building process can commence. This is best accomplished via the following shell file, which has been floating around the Internet for some time. Save it to the folder containing the generate*.sh files, make it executable and run it:
Right Once you’ve installed MonoDevelop’s MonoGame add-in, its templates appear under Miscellaneous
chmod +x Makeself/makeself.sh
chmod +x generate*

Next, run generate.sh. It copies around a group of files, finally creating a file called monogame-linux.run. Spawn it via sudo in order to extract the pipeline project, which is a standalone product dedicated to the encoding of resources. The shell file will proceed to download a large selection of support packages – a steady and consistent internet connection is recommended. When done, the product will output a line explaining the location of the uninstaller. To uninstall the pipeline, please run /opt/monogamepipeline/uninstall.sh

Get started

Fire up MonoDevelop, and open its Add-in Manager by clicking Tools > Add-in Manager. MonoGame's logic is offered in a separate package, which must be installed by opening the Gallery tab and entering MonoGame into the search field. Next, select MonoGame Addin and click the small Install button. Click the New button to open the project generation wizard. MonoGame's main template is found in the Miscellaneous tab (see image opposite) – create a new project based on the template. The project name should be GameCode, while the solution name is ImagineGame. Set a location, and click the Create button to build the solution. During the initial build, MonoDevelop will announce a File Conflict – click Yes to All in order to complete the compilation process (see image below). Finally, click Build > Clean All in order to compile the product. By default, the MonoGame add-in will create the structure you can see in the left-hand sidebar in the image below. The main game logic is found in Game1.cs. Due to the importance of this file, we're going to walk you through it step by step. Game1 is derived from the Game class, which provides a set of common properties. Most instances start out by declaring a GraphicsDeviceManager and a SpriteBatch member variable, both of which are instantiated in the constructor:
public class Game1 : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    public Game1 ()
    {
        graphics = new GraphicsDeviceManager (this);
        Content.RootDirectory = "Content";
        graphics.IsFullScreen = true;
    }

During debugging, IsFullScreen can be set to false in order to simplify interacting with the program and the debugger on single-screen computers. Resource initialisation is handled via a pair of functions. Initialize() performs some housekeeping tasks, while LoadContent is responsible for obtaining sprites and similar resources from the content pipeline:
Light and Shade

BasicEffect is extremely convenient in that it saves developers the hassle of creating custom shader programs: it contains prepacked routines for various common lighting effects. Of course, the full effect 'richness' of modern 3D games can only be achieved with handwritten GPU code: when done right, it can even be used to create geometry on the fly.
protected override void Initialize ()
{
    base.Initialize ();
}

protected override void LoadContent ()
{
    spriteBatch = new SpriteBatch (GraphicsDevice);
}

As we will cover the content pipeline in the next part of the series, we limit ourselves to the creation of a SpriteBatch
Left Game1.cs contains the main game logic and is found under the Properties folder inside the sidebar to the left
Second screen

Developers working on any kind of program should have a second display: this is especially true of 3D games, which can, of course, be debugged more realistically when running in full-screen mode. Decent used LCD monitors can be found from about £50 – make sure you see it turned on before purchase, as the CCFL or LED lighting can sometimes age unevenly.
class. It manages the batching of sprite draw operations in order to optimise performance – a problem which is not of significant concern for now.
Groundhog day

Game programmers face a complex dilemma: the real-time processes at the base of the play model tend to be time-continuous. Digital computers often do badly when confronted with any kind of continuity. Unlike their extinct analogue brethren, they are, by definition, discrete. The standard pattern for solving such problems involves discretisation. A game could break time down into a fixed number of slots: actions occurring during a slot are considered 'constant' and, by definition, take place at the same time. In games, this pattern has evolved into the game loop. Each time slice starts out with a call to the update method, which is responsible for updating the various elements of the physics model. In our current game, the routine's implementation limits itself to checking whether the Escape key was pressed – if this is the case, game execution is terminated:
protected override void Update (GameTime gameTime)
{
#if !__IOS__
    if (GamePad.GetState (PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
        Keyboard.GetState ().IsKeyDown (Keys.Escape)) {
        Exit ();
    }
#endif
    base.Update (gameTime);
}

Actual drawing then takes place in Draw, which currently restricts itself to colouring the screen pastel blue:
protected override void Draw (GameTime gameTime)
{
    graphics.GraphicsDevice.Clear (Color.CornflowerBlue);
    base.Draw (gameTime);
}
}

Both methods receive an instance of the GameTime object. This is due to a phenomenon called frame skip. If the performance of the computer system at hand is not fast enough to handle the required tasks in the time allocated, a 'frame' is skipped. Real-life physics does not know this phenomenon. The GameTime object provides a way to figure out how much game time has passed – if a frame skip is detected, physics engines can be instructed to compensate for the time lost.
Make your model

After these introductory thoughts, it is now time to spruce up our visuals by drawing a cube. Most 3D engines work via
a collection of points (called vertices) joined into triangles: creating a rectangular plane is accomplished via two triangles joined together at their hypotenuse. Add a new class called CubeModel. Generating the actual vertices is made easier if an array of points is created first. In our case, the required points look like this:
public class CubeClass
{
    public VertexPositionColor[] myBones;

    public CubeClass ()
    {
        Vector3 topLeftFront     = new Vector3 (-1.0f,  1.0f, -1.0f);
        Vector3 topLeftBack      = new Vector3 (-1.0f,  1.0f,  1.0f);
        Vector3 topRightFront    = new Vector3 ( 1.0f,  1.0f, -1.0f);
        Vector3 topRightBack     = new Vector3 ( 1.0f,  1.0f,  1.0f);
        Vector3 bottomLeftFront  = new Vector3 (-1.0f, -1.0f, -1.0f);
        Vector3 bottomLeftBack   = new Vector3 (-1.0f, -1.0f,  1.0f);
        Vector3 bottomRightFront = new Vector3 ( 1.0f, -1.0f, -1.0f);
        Vector3 bottomRightBack  = new Vector3 ( 1.0f, -1.0f,  1.0f);
    }
}

Due to space constraints, this example just shows the code needed for one side. Fortunately, a helpful Microsoft employee provides a full cube implementation at an official MSDN blog (http://bit.ly/1lLtBfT) – alternatively, feel free to simply peruse the code accompanying the example:
myBones = new VertexPositionColor[36];
myBones[0] = new VertexPositionColor (topLeftFront, Color.DarkRed);
myBones[1] = new VertexPositionColor (bottomLeftFront, Color.DarkRed);
myBones[2] = new VertexPositionColor (topRightFront, Color.DarkRed);
myBones[3] = new VertexPositionColor (bottomLeftFront, Color.DarkRed);
myBones[4] = new VertexPositionColor (bottomRightFront, Color.DarkRed);
myBones[5] = new VertexPositionColor (topRightFront, Color.DarkRed);

As our cube will be rendered with static colours, we don’t need to bother ourselves with normal or texture coordinates. Instead, a simple colour variable is provided along with each vertex – for reasons of illustration, each side of our cube has a different colour scheme.
Advanced transformations The final step of this story involves the actual drawing of the object. Three-dimensional models are more complex than their two-dimensional brethren – let us start out with some preparatory theory.
Left Movement and rotation are achieved through MonoGame’s BasicEffect class
By itself, a point is completely meaningless: it becomes useful only when the coordinate values are mapped to an axis and/or a distance system. When working on 3D objects, the entire model is placed in a coordinate system called model space – it tends to range from -1 to 1 on all three axes. Bringing the model onto the screen is done in two steps. First of all, rotations and scaling operations are carried out. They turn the object around in order to make it face the correct direction, and furthermore make it bigger or smaller. After that, a translation ‘pushes’ the object to its final resting place in the coordinate system of the game. With that, it’s time to walk through the new version of Draw step by step:
protected override void Draw (GameTime gameTime)
{
    graphics.GraphicsDevice.Clear (Color.CornflowerBlue);
    BasicEffect cubeEffect = new BasicEffect (graphics.GraphicsDevice);
    cubeEffect.World = Matrix.CreateRotationX (0.2f) * Matrix.CreateTranslation (0, 0, 0);
    cubeEffect.View = Matrix.CreateLookAt (new Vector3 (10, 5, 15), Vector3.Zero, Vector3.Up);
    cubeEffect.Projection = Matrix.CreatePerspectiveFieldOfView (MathHelper.PiOver4,
        graphics.GraphicsDevice.Viewport.AspectRatio, 1.0f, 1000.0f);

First of all, a BasicEffect instance is created. BasicEffect is a ‘catch all’ class containing a set of drawing commands, which is intended to replace custom shader code. The BasicEffect is then populated with three matrices that are needed for the display. The View and Projection matrices are not of interest currently – we will discuss them in a later issue related to shader programming.
3D engines represent each operation as a matrix. Matrices can be combined by multiplying them with one another. Sadly, matrix multiplications are not commutative: the operation on the left gets performed before the one on the right. Getting the matrix sequence wrong can lead to weird results – scaling and rotating should always come before the translation. In the next step, the BasicEffect is informed that the vertices passed to it contain colour data:
cubeEffect.VertexColorEnabled = true;

Finally, each of the drawing commands contained in the BasicEffect is applied to all of the vertices:
foreach (EffectPass pass in cubeEffect.CurrentTechnique.Passes)
{
    pass.Apply ();
    graphics.GraphicsDevice.DrawUserPrimitives (PrimitiveType.TriangleList, myCube.myBones, 0, 12);
}
base.Draw (gameTime);
}

With that, we’re done for now – run the program and feast your eyes on the results. Feel free to modify the Rotation, Translation and/or Scale properties in Update.
Conclusion Even though the results are not particularly game-like, getting to this point with DirectX or OpenGL would have taken significantly more code. Taking a step back reveals that the bulk of our code is dedicated to the definition of the geometric object. Storing it in the content pipeline would make the code more compact… which is what we’ll do next time!
www.linuxuser.co.uk
Springs.io
Tutorial
Launch scalable Linux containers on Springs.io Find out how cgroups and namespaces power Linux containers, and set up a pay-as-you-go server
Richard Davies
is CEO and co-founder of ElasticHosts. Prior to this, he was one of the founding software engineers of Forbidden Technologies PLC, a leader in web-based video tools
Resources Springs.io springs.io
Lightweight containers bit.ly/1NMGp0h
Cgroups docs
Linux containers have seen an explosion of interest in recent years, and all containers are powered by two underlying Linux technologies that we are going to explore in this article. Here, we will explain the benefits of containerised servers over traditional VMs and get to grips with some of the underlying technology that makes containers possible. Let's start by demystifying containers. They are a lightweight form of virtualised server that can offer improved performance and are easier to distribute across applications compared with virtual machines. The last two years have seen a groundswell of interest around Linux containers, and recent changes in the Linux kernel have enabled a new generation of scalable containers that could make the old virtual machine server approach redundant. Many tools have emerged for managing containers, including Docker, LXC, lmctfy, Kubernetes and more. We have seen the likes of Docker making waves in the PaaS market with its container solution, and such companies are now starting to make waves in the infrastructure world as well, delivering benefits such as autoscaling and billing purely according to usage. In all cases, the Linux containers that these services run on are driven by two key Linux kernel technologies: cgroups and namespaces.
01
Get to grips with cgroups and namespaces
Cgroups, which stands for control groups, limit and measure the total resources used by a group of processes running on a system. Namespaces limit the visibility a group of processes has of the rest of a system. When you apply a full set of cgroups and namespaces, you end up having a group of processes running inside a fully isolated environment. While both cgroups and namespaces have been around for a while, the latest updates to the Linux kernel have led to some interesting applications for cloud infrastructure. The updates to cgroups now mean that improved insight into a server’s resource usage can allow for more reactive scaling and usage-based billing. The updates to namespaces also now mean that the level of isolation of processes has been significantly improved, enabling containers to be viable for multi-tenant architecture. In most cases it doesn’t make sense for system administrators to directly use cgroups and namespaces – a container tool, such as Docker, LXC or lmctfy, will do this for you. This article is intended to give you an understanding of what’s under the hood, rather than have you working with the kernel technologies directly. However, having said that…
bit.ly/1u8cwcI
Right Terminal showing running processes for cgroups
Left On the left, each server is run inside its own virtual machine. To the right, namespaces and cgroups containerise them on the host OS
02
Control cgroups
Disaster recovery
Here is an example of running tar inside a cgroup with a kernel memory limit:
# mkdir -p /sys/fs/cgroup/test/
# cat /sys/fs/cgroup/cpuset.cpus > /sys/fs/cgroup/test/cpuset.cpus
# cat /sys/fs/cgroup/cpuset.mems > /sys/fs/cgroup/test/cpuset.mems
# echo $((1<<26)) > /sys/fs/cgroup/test/memory.kmem.limit_in_bytes
# echo $$ > /sys/fs/cgroup/test/tasks
# tar xfz linux-3.14.1.tar.gz

The first four lines set up the “test” cgroup – creating it, allowing access to all physical hardware, but limiting kernel memory use. The fifth line puts the bash prompt which you are currently using (and any child processes run from that bash) inside the “test” cgroup. The sixth line runs tar in the normal fashion, but within the cgroup and so subject to the cgroup limits. There are many cgroup limits available, and the full documentation for these is at: https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt.
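Before creating cgroups it can be useful to check which controllers the running kernel actually offers. Here is a read-only sketch, assuming a standard Linux /proc layout; the set of controllers you see varies by kernel configuration:

```shell
# /proc/cgroups lists every compiled-in controller along with its
# hierarchy ID, number of cgroups and whether it is enabled.
cat /proc/cgroups

# The mounted cgroup hierarchies themselves appear in the mount table;
# '|| true' keeps the script going if none happen to be mounted.
grep cgroup /proc/self/mounts || true
```

Unlike the limit-setting example above, nothing here needs root, so it is a safe first step on an unfamiliar machine.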
03
Control namespaces
The unshare command to manipulate namespaces is available in util-linux-ng 2.17 and later. As an example, if you run:
# unshare --mount /bin/bash
# mount /dev/sda2 /mnt

… then the first line starts a bash prompt inside an isolated mount namespace. This means that the filesystem which you have mounted on the second line is visible from the bash prompt which you are currently using (and any child processes run from that bash), but that the rest of the system cannot see this filesystem mount. Please see the man pages for unshare for a list of other namespaces that you can manipulate.
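Everything unshare manipulates can also be inspected read-only through /proc, which is a handy way to check whether two processes share a namespace. A minimal sketch, assuming a standard Linux /proc layout (no privileges required):

```shell
# Each entry under /proc/<pid>/ns is a symlink naming one namespace the
# process belongs to (mnt, uts, ipc, net, pid, user and so on). Two
# processes are in the same namespace exactly when the corresponding
# symlinks resolve to the same identifier.
ls /proc/$$/ns

# The mount namespace identifier for the current shell; after running
# `unshare --mount`, the shell inside would report a different value.
readlink /proc/$$/ns/mnt
```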
04
Putting it back together
If we combine our two simple examples, we could create a bash prompt with limited kernel memory use, and with private filesystem mounts hidden from the rest of the system. There are many more cgroups and namespaces for limiting other resources. As you can start to imagine, when a full set of cgroups and namespaces is applied, you end up with total isolation between the software running inside the limits and the rest of the system – this is a Linux container.

05

Different containers
Linux containers come in two types: application containers like Docker provide flexibility and agility for developers and ISVs, while operating system containers essentially replace the functions of virtual machines. While you would probably make more use of application containers as a developer, if you are a sysadmin you may not be aware of the benefits of operating system containers as a replacement for virtual machines in your infrastructure. Operating system containers make more dynamic use of computing resources and allow greater insight into the server itself than VMs. We believe operating system containers will create the next generation of elastic IT infrastructure, scaling automatically according to demand and billing consumers exactly for the capacity used. This will benefit Linux users of all kinds and, we feel, will bring us closer to the ideal of utility-based computing that was promised in the early days of cloud.
WorldMags.net
Traditionally, disaster recovery meant “hot spares” running in the cloud, ready to be swapped in if and when disaster struck. The cost of running and maintaining so many servers in the cloud as hot spares had put most disaster recovery strategies beyond the reach of many Linux users and developers. Linux containers allow organisations to create cloud infrastructure that is faster to react, dynamically scales by design, and is billed on actual usage, rather than provisioned size, and so can run in the cloud continuously.
Above Assign relevant details to your Springs.io container
06
Introducing Springs.io
Springs.io is ElasticHosts’ own operating system containers service offered in the cloud. ElasticHosts’ breakthrough auto-scaling container technology elastically expands and contracts to meet demands, entirely eliminating the need for manual provisioning. ‘Springs’ are available to Linux users and are the first cloud servers to be billed based purely on consumption, rather than capacity, delivering substantial cost savings. Springs need no additional software or server configuration; users simply sign up for the service and capacity is continuously available. Here are some key features:
• Auto-scaling for continuous high performance availability: The cost of a website or application failure can run into the millions of pounds, therefore servers need to be running with sufficient capacity at all times. By using Springs.io, Linux users can now handle all their usage peaks and troughs effortlessly, automatically scaling each container up to 64GB RAM. This provides peace of mind that capacity will be instantly available when needed, ensuring continuity of service without downtime. • Self-managing infrastructure: Currently, aside from deploying complex software to automate the process, Linux users are forced to provision and adjust capacity manually, which can be costly, time consuming and inaccurate.
ElasticHosts’ auto-scaling, elastic infrastructure expands and contracts automatically, completely removing the need for manual management or provisioning; users can simply turn on the service and forget about it. • Usage-based billing system: Traditional capacity-based billing is focused on provisioning capacity in blocks, yet concerns over performance and availability force users to pay for a buffer of excess space that they are not actually using. By billing for actual usage, rather than available capacity, Linux users can run their servers with space continuously available for immediate scaling at no additional cost; they only pay for what they actually use, right down to the megabyte.
07
Start up a Springs.io container
Go to http://springs.io and sign up for a free account. Click on Add Credit and add some funds so you can spin up some containers. There is a $5 minimum, but the credit can last for over a month for low usage. Alternatively, you can spin up dozens of containers and use that credit up quicker; they’re pay-as-you-go cloud servers.
Left By monitoring your usage, you will avoid any nasty surprise bills
08
Add a Springs.io container
Load balancing
Click on Add Spring, give it a name, an operating system and an ssh key, then confirm. Your container will load up – in the background, it is rsyncing your operating system pre-install into the Linux container running on the host.
09
Run applications in Springs.io
Now you have created a Spring, let’s run an application inside it that tests its limits. You can ssh into the container very simply by using the Login button.
10
Monitor a Springs.io server’s usage
You can now monitor your server’s resource usage on the fly, run some tests to gauge usage, or see historical graphs for a range of timeframes. First open up your Spring to see its usage details by clicking Usage. Since it’s a new container you will see an empty graph showing its usage over the last day, but we can change the timeframe by clicking Live. The graph should start to populate with usage snapshots. These are polled at roughly six-second intervals, so slight lag is to be expected; this is so as not to overload the usage API. The blue line denotes the maximum capacity, and can be toggled on and off using the link on the left. You should see it sitting at the maximum values for our container’s CPU and RAM respectively, which are the defaults of 2,000 core-MHz and 1024MB. These values can be changed, up to a maximum of 20,000 core-MHz and 64GB. The orange line denotes current usage. You’ll see that CPU is idling at next to nothing, since we have very little going on in our container (kernel resource usage is not included, since that is shared on the host), but a quarter of our maximum RAM is automatically earmarked for your container, so that good performance can be guaranteed during load spikes. That is why the orange line sits at 256MB in the RAM graph.
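The 256MB baseline follows directly from the quarter-of-maximum reservation described above. A quick arithmetic sketch – note that applying the same ratio at other sizes, such as the 64GB ceiling, is our assumption rather than documented behaviour:

```shell
# A quarter of a container's maximum RAM is earmarked up front so that
# load spikes can be absorbed immediately; for the 1024MB default this
# baseline is 256MB.
max_ram_mb=1024
reserved_mb=$(( max_ram_mb / 4 ))
echo "${reserved_mb}MB reserved out of ${max_ram_mb}MB"

# The same ratio applied (hypothetically) at the 64GB (65536MB) ceiling:
echo "$(( 65536 / 4 ))MB reserved out of 65536MB"
```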
11
Stress test a Springs.io container
To create resource demand we can use a program called stress. Install it using sudo apt-get install stress. Using stress we can test resources individually – let’s test CPU first. Run stress --cpu 8 to spawn eight workers that will spin on sqrt(). This will make system calls to the shared Linux kernel on the host requesting CPU cycles, and these will be limited by cgroups to the maximum set. We can test the RAM, too. Our maximum available memory is 1024MB. By running stress --vm 1 --vm-bytes 900M --vm-keep, you’ll spawn an additional worker spinning on malloc()/free(). You can see that cgroups is effectively limiting the RAM available. As soon as more memory is allocated than the 1024MB limit, the Linux OOM killer terminates the application, just as it would on a physical machine. While most Linux users might prefer to retain the auto-scaling properties of their Springs.io containers to deal with varying demands, those worried about being billed too much for an application using too much capacity can effectively set a cap on the amount of capacity that the container will use, so they will never be billed for more than this amount.
Load balancing with traditional virtualisation has been complicated and expensive to set up; running multiple server instances behind a load balancer hurts both performance and profitability. Extra servers must be spun up to handle spikes in load, and poor performance is hard to avoid when load changes suddenly and extra servers can’t be set up quickly enough. Containerised Linux servers instead individually grow and shrink dynamically and automatically according to load while they are running. This means you simply turn the tap on at the Linux host’s reservoir of resources.
Git
Tutorial
Master version control by learning how to use Git Created by Linus Torvalds a decade ago, Git is now the standard for open source project management
Swayam Prakasha
has a master’s degree in computer engineering. He has been working in IT for years, focusing on areas like operating systems, network security and electronic commerce
Resources Git git-scm.com/downloads
Documentation bit.ly/1lnOHxG
Atlassian tutorials
As we all know, with the help of a version control system you can track the history of a collection of files. The main component of any version control system is the repository, which, as expected, is used to store the various versions of the files it tracks. Version control systems are helpful when several developers are involved in a development project. They are most often used to track changes in source code, but you can equally use them to track, say, the different versions of a company logo. A distributed version control system has the significant advantage that every user has a complete local copy of the repository on their computer. A user can also copy an existing repository through a process known as cloning. Git is the most popular implementation of a distributed version control system and is widely used in many open source projects. Please note that although the core of Git was originally written in the C programming language, implementations exist in other languages such as Java and Python. Since Git is an open source version control system, its installation is relatively easy. You can install Git by typing the following command:
$ sudo apt-get install git

Git also comes with very well-defined documentation; its man page gives more information about the various options and commands that are available. When it comes to Git version control, you need to understand the concept of the working tree. The working tree can be considered as a checkout of one version of the repository, with potential changes incorporated by one user. As expected, the user will be able to change the files in the working tree – either by modifying existing files or by adding and removing files. Afterwards, they will be able to add these changes to the repository. The advantage with Git is that every user has a local repository. With this local repository, a user will be able to perform various operations in the working tree and can also handle various version control operations – the most common being the creation of new versions of the files inside their Git repository. As with any version control system, Git also supports branches. With the help of these branches, users will be able to switch between different versions so that they can work on
bit.ly/1ohF077
Right Git’s man page is an excellent reference for the various commands and operators available to use
WorldMags.net them. It is important to note here that branches are local to the repository in Git. It means that a branch created in a local repository does not need to have its counterpart in the remote repository. As expected, a developer can work on changes from different branches. Later, they can use the available Git commands to combine these changes. This feature enables users to combine the changes coming from different developers in any open source project. When it comes to the process of adding to a Git repository, there are two stages: • Staging Here, we are going to add selected changes to the staging area. • Committing The changes stored in the staging area are then committed to the repository. During the staging process, Git provides a useful command: git add. With the help of this command, you will be able to add changes in the working tree to the staging area. By using the git add command, you can incrementally modify files and stage them, and then repeat the same process until you are satisfied with your changes. In the second stage (of committing), the goal is to commit the files and add them permanently to the Git repository. For committing the staged changes, you can use the git commit command. Let us have a quick look at some of the most important Git terminologies: Term Working tree
Revision Staging area
Head
Branch
Commit
Description As noted earlier, a working tree is made up from a set of working files for the repository. The revision represents the version of the source code. This the place where we can store the changes in the working tree before performing a commit. Head can be described as the symbolic reference that points to the currently checked-out branch. A branch can be considered as a pointer to a commit. When we select a branch, it typically refers to checking out a branch. When we commit changes into a repository, it creates a new commit object. You can use this new commit object to then uniquely identify a new revision.
The files in the working tree of a Git repository can be in different states. They are:
Let us understand how we can import a new project into a Git repository. We are going to use a swatch-3.2.3.tar.gz file (which already exists in our home directory) and we will see how we can place it under Git version control. You can use the following commands:
Above This is the importing process for a project under Git version control
$ tar xzf swatch-3.2.3.tar.gz
$ cd swatch-3.2.3
$ git init

Thus we have successfully initialised the working directory. Under swatch-3.2.3, we can now see a new directory named .git. It is possible for you to instruct the Git program to take a snapshot of all files (under the current directory) by executing the following command:
$ git add .

When we tried to commit this version of the project in Git, we ended up with an error. This is basically because Git was not able to identify us. So, before we store this version of our project in Git, we need to introduce ourselves to Git with our name and public email address. This can be achieved by the following two commands:
$ git config --global user.name "Your Name"
$ git config --global user.email "your.email@address.com"

Now we can use the following command to store this version of our project in the Git repository:
$ git commit • When a file is staged to be included in the next commit, it is said to be in a staged state. • When a file is changed but the change is not staged, it is in a modified state. • When a file is neither staged nor committed, it is in an untracked state. • When a file is committed, it is in a tracked state.
As you can see from the following screen, the commit command prompts you for a commit message. You can also specify the -m parameter, which enables you to supply the commit message inline – as shown in the following code:
$ git commit -m 'first version'
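The whole sequence – initialise, identify yourself, stage and commit – can be condensed into one repeatable script. This is a sketch using a throwaway temporary directory and a placeholder file of our own invention, rather than the swatch tarball:

```shell
set -e

# Work in a throwaway directory so nothing touches a real project
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Git refuses to commit until it knows who we are; --local scopes the
# identity to this repository only, unlike the --global calls above
git config --local user.name "Test User"
git config --local user.email "test@example.com"

# Stage everything under the current directory and commit it
echo 'demo content' > demo.txt
git add .
git commit -q -m 'first version'

# The commit message is now part of the repository history
git log --oneline
```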
Right When ready, commit the project in Git – just make sure you have configured your email and username
Push and pull commands The push and pull commands are another two popular Git commands. The git push command lets a developer send data to other repositories. The git pull command lets you get the latest changes from another repository for the current branch. For more details about these push and pull commands, you can refer to their respective man pages. Effectively, the pull command performs two tasks – it fetches changes from a remote branch, and then merges them into the current branch.
Right If you don't provide the necessary information, the commit will fail
Note that in case you change one of the staged files before committing, you need to add it again to the staging area to commit the new changes. This is because Git creates a snapshot of these staged files. Therefore all new changes must again be staged. As already noted, a developer needs to configure their username and email address so they will be able to commit the changes to Git repository. Please note that this information is stored in each commit. In addition to username and email address, several other parameters can be configured – an important one being the default editor. The following command can be used to configure the default editor:
$ git config --global core.editor vim If you want to, it is possible to query the Git settings by using the following commands:
$ git config --list $ git config --global --list
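Settings can also be queried per repository. The sketch below uses a scratch repository and made-up values to show a locally scoped setting surfacing in the --list output:

```shell
set -e

# Scratch repository so nothing touches a real project
repo=$(mktemp -d)
cd "$repo"
git init -q .

# A local setting lives in .git/config and overrides any global value
git config --local core.editor vim
git config --local user.name "Local Name"

# --local restricts the listing to this repository's own settings
git config --local --list
```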
As with any other version control system, the process of committing changes is pretty simple. First, we modify files and then add their updated contents to the index; for this you can use the git add command. Before committing, you can see what will get committed by using the git diff command with the --cached option. A developer can also obtain a brief summary of the entire situation by executing the git status command. Note that git status shows the status of the working tree. It tells you which files are changed, which files are staged and which files are not part of the staging area. It also indicates which files have merge conflicts, along with ideas for what the user can do with these changes. Another important thing to note here is that the Git operations we have performed create a local Git repository in the .git folder, and all files are added to this repository. We can obtain a history of all Git operations by executing the following command:
$ git log

Sometimes it may be necessary to ignore certain files and directories in repository operations. In Git, this can be configured via one or more .gitignore files. It is most natural to place this at the root of the Git repository, but we can also place .gitignore files in subdirectories. If we have a .gitignore file at the root directory of the working tree, its rules apply to the entire repository. As a best practice, it is good to commit the local .gitignore file into the repository – that way, anyone who clones the repository will have this file. The following two commands can be used for this purpose:
$ git add .gitignore
$ git commit -m 'Committing .gitignore file'

Left This shows the execution of the git status command

Left If you think that it is necessary, or have any doubt, it is possible for you to query the settings

It can also be noted here that Git ignores empty directories – that is, it will not put them under version control. Let us understand how we can remove files from the Git repository. The following sequence of three commands will do the work for us:

$ rm test_file_01
$ git add .
$ git commit -m 'Deleting a test file'

In the above sequence, please note that the command git add . is used to add the deletion of a file to the staging area. In some scenarios, we may need to revert changes to files in our working directory. In such cases, the git checkout command comes in handy – it can be used to reset a tracked file to its latest staged or committed state. A developer needs to be careful while using this command, as it deletes the unstaged and uncommitted changes of the tracked files in the working tree, and it will not be possible to restore this deletion via Git. Git provides another useful command – git commit --amend – that can be used to correct the last commit. The beauty of this command is that it enables you to change the last commit, including the commit message. Another popular command in Git is git clone. Using this, a developer can clone an existing Git repository. This copy is a working Git repository with the complete history of the cloned repository. Once you are done with developing a feature in an isolated branch, it’s important to be able to get it back into the main code base. For such purposes, Git provides merging. With the help of git merge, you can take the independent lines of development created by git branch and integrate them into a
single branch. It is worth noting that only the current branch is updated to reflect the merge; the target branch will not be touched. Let us see how we can use the git merge command. Git enables developers to use the edit/stage/commit process to resolve conflicts. Whenever we encounter a merge conflict, we can run the git status command to see which files need to be resolved.
$ git merge my_branch

The above command merges the specified branch into the current branch. While merging, we will sometimes come across conflicts – that is, when we try to merge two branches and both of them have changed the same part of the same file. In such scenarios, Git is not able to work out which version to use, and it stops just before the merge commit so that the developers can resolve the conflicts manually. Once the conflicts are resolved, we need to run git add on each conflicted file to tell Git that the conflicts are resolved, and then finally git commit.
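The conflict workflow described above can be reproduced end to end in a scratch repository. In this sketch the branch, file and commit names are invented for illustration:

```shell
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q .
git config --local user.name "Test User"
git config --local user.email "test@example.com"

# A shared starting point on the default branch
echo 'original line' > greeting.txt
git add greeting.txt
git commit -q -m 'base'

# Change the same line on a feature branch...
git checkout -q -b my_branch
echo 'feature version' > greeting.txt
git commit -q -am 'feature change'

# ...and, incompatibly, back on the original branch
git checkout -q -
echo 'main version' > greeting.txt
git commit -q -am 'main change'

# The merge stops before creating a commit; '|| true' keeps the script
# going so we can resolve the conflict manually
git merge my_branch || true
git status --short greeting.txt

# Resolve by choosing the final content, then stage and commit
echo 'merged version' > greeting.txt
git add greeting.txt
git commit -q -m 'merge my_branch resolving conflict'
git log --oneline -1
```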
PulseAudio
Tutorial
Discover the hidden power of PulseAudio We reveal new tricks and some mind-blowing PulseAudio features already at your fingertips
Alexander Tolstoy
has spent years finetuning his Linux system to make it blazingly fast, but he keeps on searching for more tweaks. This kind of addiction seems to be a good one
Resources Any up-to-date Linux distribution, set up for general desktop use
Most of us are running recent versions of our favourite Linux distributions, and that means that probably everyone uses PulseAudio as the default sound server, often without making any conscious decision to do so. We just play music, watch movies and enjoy online videos, but whatever we hear from our speakers is powered by PulseAudio – a versatile abstraction layer that sits between the Linux kernel (which offers a driver for your sound hardware) and desktop applications. PulseAudio was controversial some years ago, but it has come through seven major releases and is rock solid these days. PulseAudio superseded the much simpler ALSA sound system with a sophisticated modular client-server solution, which has many benefits for power users once you decide to dive deeper into the modern Linux sound setup. In this tutorial we’ll cover features that go beyond playing with the sound applet in your system tray and reveal a number of practical applications that will be useful for common desktop activities. These include handling separate playback streams, redirecting sound over a network, improving sound quality and making use of various convenience tools that ease things a bit. All you’re going to need is a command line and a few minutes of your spare time.
01
Discover your sinks and sources
For any system with PulseAudio, each sound device is identified by three main parameters: card, sink and source. Card refers to the hardware you use for sound playback and capturing, with all its physical inputs and outputs. A sink is an abstraction layer used for sound output. Not only can it point to your speakers, headphones or line-out jack, it can also mute sound by routing it to a null device (via module-null-sink) or make it accessible to other applications by creating a pipe-like FIFO output (via module-pipe-sink). The final parameter – source – is used for working with the incoming sound stream, such as various input devices (microphone, line-in, etc). Finally, PulseAudio creates a set of a card, a sink and a source for each application that deals with sound, and together with PulseAudio’s modular design this gives us great flexibility. PulseAudio tries to work out which sink and source should be set as the defaults, so in most cases you should hear sound from your speakers and have your mic working correctly out of the box. To see the current setup, just issue pactl list and examine its output.
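The full pactl list output is verbose; for scripting, the short listing is easier to parse. The sample line below is hypothetical (device names vary per machine), and we assume the tab-separated index/name/module/sample-spec/state layout that `pactl list short sinks` produces:

```shell
# Hypothetical output of `pactl list short sinks` on a laptop; on a
# real system you would pipe the command itself into awk instead.
sample="$(printf '0\talsa_output.pci-0000_00_1b.0.analog-stereo\tmodule-alsa-card.c\ts16le 2ch 44100Hz\tRUNNING')"

# Fields are tab separated: index, sink name, owning module, sample
# spec and state - print just the name of every running sink
echo "$sample" | awk -F'\t' '$5 == "RUNNING" { print $2 }'
```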
Right PulseAudio is a lot more than just a sound system with volume control
52
Left PulseAudio should automatically work out the sinks and sources
02
Hop between speakers and phones
This is the common case for laptops, where sound is played through loudspeakers but once you plug in your 3.5mm jack, it is transferred to headphones. Modern Linux systems do this automatically, but if they fail (or you need custom behaviour), you can control everything. In PulseAudio it means that one sink can have several ports. To find out the currently used one, look for something like:
Active Port: analog-output-speaker

We also know the names of the other ports, so now we can manually switch sound playback to headphones, like this:
$ pactl set-sink-port 'alsa_output.pci-0000_00_1b.0.analog-stereo' 'analog-output'

The same is also true for sources; so when you cannot record your voice in Skype, you should definitely make sure that the corresponding source is using the correct port for capturing sound.
03
Manage volume
PulseAudio uses a simple range from 0 to 65535 to manage sound volume, where 0 is muted and 65535 is 100% volume. The trick, however, is that you can go beyond 100% and boost the volume further, without any third-party tools (like VLC player). Let’s see some examples for the default sink #1:

$ pactl set-sink-volume 1 65535
$ pactl set-sink-volume 1 150%
$ pactl set-sink-mute 1 true

If you need to set different volumes for certain inputs inside one sink, you may want to turn off the so-called ‘flat volume’ setting, which limits the maximum volume for a sink. This is a simple procedure: set flat-volumes = no in /etc/pulse/daemon.conf and restart the PulseAudio daemon.
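Given the 0–65535 range described above, converting between raw values and percentages is simple arithmetic. A quick sketch in plain Python (these helper functions are ours, not part of PulseAudio):

```python
# 0 is muted and 65535 is 100% in the raw scale described above.
VOLUME_100 = 65535

def percent_to_raw(percent):
    """Percentage (may exceed 100 for boosted volume) -> raw volume value."""
    return int(percent * VOLUME_100 / 100)

def raw_to_percent(raw):
    """Raw volume value -> percentage."""
    return 100.0 * raw / VOLUME_100

print(percent_to_raw(100))    # 65535
print(percent_to_raw(200))    # 131070, i.e. boosted to 200%
print(raw_to_percent(65535))  # 100.0
```

This is handy when a mixer or pactl list shows you raw numbers rather than percentages.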
Go beyond the basics
PulseAudio introduces a client-server design, which means that your sound setup can be spread across a network. This is a lot more than just playing to a remote device – it includes other cool things like broadcasting, radio streaming and alerting. You can set up PulseAudio together with the Icecast server and play audio for those that can connect to your stream.
Even more modules
The number of PulseAudio modules keeps growing – and most of them are still waiting to be discovered by the general public. Meanwhile, there are some astounding modules that will boost your fantasy setup. A couple of examples: module-suspend-on-idle can save your laptop battery by powering down an idle sound card, and module-position-event-sounds positions event sounds between the left and right channels depending on the position of the widget triggering them.
04
Remove noise and echo
This is something not everyone is aware of: PulseAudio ships with modules that can improve the sound quality in certain cases, such as in VoIP conversations. The main module for this is called module-echo-cancel, and it does a great job of removing echo, auto-levelling, controlling gain and reducing ambient noise. To use it, add the following line to /etc/pulse/default.pa:
load-module module-echo-cancel You can also specify one of the audio echo cancellation (AEC) methods right there:
load-module module-echo-cancel aec_method=webrtc
# or
load-module module-echo-cancel aec_method=speex

WebRTC removes noise better than Speex, though the latter is more stable. There is a small limitation, however: it only works when something is being played through a sink, ie apps that play back sound. By the way, it is also possible to load modules instantly, without altering the global PulseAudio settings:

$ pactl load-module module-echo-cancel aec_method=webrtc
06
Sound over network
If you have at least two Linux PCs in a home LAN, you can set up remote audio playback with the help of PulseAudio’s network capabilities. It can be really useful when your high-end speakers are connected to, say, a Raspberry Pi in your living room and you want to listen to some music that is stored on your laptop. In PulseAudio terms, your Pi would be a server and your laptop would be a client. Both machines should be running PulseAudio and be discoverable on the LAN. Now we’ll set up a tunnel from the client to the server. On the server side, add the following into /etc/pulse/default.pa:
load-module module-zeroconf-publish
load-module module-tunnel-sink-new server=192.168.0.1 sink_name=Remote channels=2 rate=44100

… where 192.168.0.1 is your server’s IP address. On the client side, install the paprefs utility (on Ubuntu: sudo apt-get install paprefs), launch it and enable the ‘Make discoverable PulseAudio network sound devices available locally’ option. Finally, restart the PulseAudio daemon on both your server and client (pulseaudio -k && pulseaudio --start). Now you can choose your remote sound device from Pavucontrol or other PulseAudio-compatible mixers.
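The split of responsibilities can be confusing: module-tunnel-sink-new runs on the machine that sends the audio and points at the machine with the speakers. A sketch of how the two default.pa files commonly end up (the IP addresses are examples, module-native-protocol-tcp is our addition, and module availability varies between PulseAudio versions):

```
# default.pa on the Pi with the speakers (the 'server'):
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1;192.168.0.0/24
load-module module-zeroconf-publish

# default.pa on the laptop (the 'client'), tunnelling to the Pi:
load-module module-tunnel-sink-new server=192.168.0.1 sink_name=Remote channels=2 rate=44100
```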
05
Fix Skype issues
Skype is a proprietary app but is the most widespread VoIP application for Linux. Various issues take place when using Skype, most concerning sound quality. PulseAudio can help here. First, if you encounter echoing, try to launch the application with custom variables, like this:
$ PULSE_PROP="filter.want=echo-cancel" skype

Another concern can be static/crackling sounds. It was an issue in older PulseAudio versions, but some people running Skype on 64-bit Linux systems still find it an issue. We’ll try two methods that address two causes of the problem. The first changes audio latency:
$ PULSE_LATENCY_MSEC=30 skype The second one disables glitch-free playback, which may help for sound cards that do not return accurate timing information. Add the following line to /etc/pulse/default.pa:
load-module module-udev-detect tsched=0 … and restart the system.
07
Use a built-in equaliser
Many music and video players for Linux have audio equalisers that can enhance sound or create a desired ambient effect. However, these are custom implementations that affect a given player but not the system-wide audio output. In case your speakers are not perfect and you’d like to compensate, a global equaliser would be marvellous. Well, we have one and it’s called… PulseAudio-equalizer! It is included in almost all Linux distros that have the core PulseAudio bits, and all you have to do is to head to your software manager and get this extra package installed. PulseAudio-equalizer has 15 bands and 19 presets for
almost any music style or conditions, such as the very useful Laptop preset. The tool is otherwise very simple, with a few extra checkboxes and the Apply Settings button. PulseAudio-equalizer works for all audio that is played through the current sink, including desktop notifications (if you use them). Presets are stored as plain text files under the /usr/share/pulseaudio-equalizer/presets directory, so you can use existing files there as templates and create your own presets seamlessly.

Left It’s possible to lower the volume of streams to raise the volume of an important stream
08
Make use of roles
This is a relatively new PulseAudio feature that resembles the behaviour of modern smartphones – when you receive an incoming call, all other audio playback (if any) gets temporarily muted. In PulseAudio there is the module-role-ducking module, which lowers the volume of less important streams when a more important stream appears, and raises the volume back up once the important stream has finished (this is called ‘ducking’). The decision whether a stream has high or low priority is made based on the stream role (the media.role property). By default, “music” and “video” streams are ducked, and “phone” streams trigger the ducking. Let’s now load the module with explicitly declared options and a specific attenuation of -10dB to be used while ducking:
09
Get things done easier
Most command-line actions around PulseAudio involve the pactl and pacmd commands, both producing verbose output. This is where Patricks comes in – it is a simple PHP-based utility (https://github.com/ootync/Patricks) that parses the pactl list and pactl stat outputs and shortens them into more readable variants. The syntax is also very easy to understand:
$ patricks ls … lists entities, while:
$ pactl load-module module-role-ducking trigger_roles=phone ducking_roles=music,video volume=-10dB

… and then make sure it’s working by triggering ducking with sample playback – for example, play any file with the stream property media.role=phone set and watch your music volume dip.
$ patricks ls sink 0 properties … shows the properties of the currently used sink. This command can even be shortened down to:
$ patricks ls si 0 pr
THE ESSENTIAL GUIDE FOR CODERS & MAKERS
PRACTICAL
Raspberry Pi WAYS TO MASTER RAS PI Turn to Page 60
Contents
58 What is the Ras Pi Zero?
68 The Internet of LEGO city
70 Embedding Python in C
72 Control lights with Pi-mote IR
76 FUZE BASIC retro game tutorial
FAQ
Raspberry Pi Zero
Gavin Thomas
is the Editor of LU&D and RasPi. He is an experienced tech journalist, an open source enthusiast and has a deep passion for homegrown projects
If you like this…
The Pi Zero is available from the Foundation’s Swag Store for just £4: bit.ly/1Q21nbH. You’ll probably want the Adaptor Kit as well, which contains the uncommon cables you’ll need: bit.ly/1NbONS6
Further reading
You can get Pi Zero cases from Pimoroni (bit.ly/1jv8pZM), Adafruit (bit.ly/1OAsOI1) and C4 Labs (bit.ly/1PWH8Oi)
The Raspberry Pi Foundation surprised us with an early Christmas present this year – a brand new board! Here’s everything you need to know What happened to the Raspberry Pi 3? Well, calling this the Raspberry Pi 3 would imply it’s more powerful than the Raspberry Pi 2. It’s not really an upgrade – more of an awesome downgrade. So what does that mean? The Raspberry Pi Zero isn’t as powerful as the Raspberry Pi 2, and a few bits and pieces like the USB 2.0 and Ethernet ports have been removed. The whole point of it is to make the computer as cheap as humanly possible, and by selling it for just $5, the Raspberry Pi Foundation has definitely succeeded! $5? That is awesome. But it’s not as good as other Raspberry Pi models? Depends what you mean by ‘good’. The Raspberry Pi Zero is on about the same level as the original Raspberry Pi Model B – it uses the same processor, rather than the newer BCM2836 that’s used
in the Pi 2. However, that processor has been overclocked and the Pi Zero actually runs about 40 per cent faster than the original Raspberry Pi. That’s cool. Does it have the same RAM as the original Raspberry Pi as well? It’s got 512MB of RAM – the very first Raspberry Pi Model B only came with 256MB, although later versions of the same board were upgraded to 512MB. So it’s a super-cheap, better version of the original Raspberry Pi? Not quite. As we said, a couple of sacrifices had to be made in order to bring the cost right down. The 40-pin GPIO header on the Pi 2 and B+ is still there, for example, but it’s unpopulated. And there are no USB ports? Not the full-size ones that you’re used to, but there are two micro-USB ports. One
is reserved for the power cable, while the other one is free for you to use. So how can I plug in a keyboard and mouse at the same time? You’ll need two things for that: a micro-USB to USB adaptor and then a powered USB hub. Plug the adaptor into the micro-USB port, power your USB hub, then you can plug things like your keyboard and mouse into the hub. What about the Ethernet cable – is there anything to replace that? Nope – but if you plug in the micro-USB to USB adaptor first, you can just use your regular Wi-Fi dongle to get online. But if I don’t have a powered USB hub then I won’t have room for a keyboard and mouse as well, right? You’re absolutely right. Don’t forget, though, that if your Pi Zero has a Wi-Fi
Pi Zero
Tiny HATs Pimoroni has released new versions of its HAT add-ons that fit the Pi Zero, but still work with other 40-pin Pi models Explorer pHAT, £10 This one provides you with four analogue inputs, two H-Bridge motor drivers, and then four 5V tolerant inputs and four 5V powered outputs. It’s perfect for robotics and other projects involving motors, solenoids and relays, as well as working with 5V systems like the Arduino.
Above The Pi Zero is 2.5 inches long and just over an inch wide
dongle and is online, you can use SSH to access it from another computer. And how do I connect my monitor? You’ll need the mini-HDMI port for that, which is in the corner below the microSD slot. You’ll also need to grab a mini-HDMI to HDMI cable. So I need pins for the GPIO header, a cable for the mini-HDMI and an adaptor for the micro-USB. Where do I get all of that? Where else? The Swag Store! Just head over to swag.raspberrypi.org and you can order the Raspberry Pi Zero Adaptor Kit for just £4. Ace! So once I’ve got the hardware, do I need to use a special Raspbian for this? Nope – the standard Raspbian distro works just fine with the Pi Zero. We’d recommend a fresh installation, though. The best way is to get a new microSD card and download NOOBS onto it, or just order one with NOOBS already on it from the Swag Store. NOOBS? New Out Of Box Software, not the gaming kind. The download and the instructions are over at: raspberrypi.org/downloads/noobs. Basically, you use it to install a Linux distro onto your Raspberry Pi, like Raspbian. Sounds great! Anything else I should know about? Along with your 40-pin GPIO header, there are also some extra headers that
pHAT DAC, £12 Based on Texas Instruments’ PCM5102A stereo audio DAC (digital-to-analogue converter) chip, the pHAT DAC adds high-quality audio output to your projects. It pumps out 24-bit high-performance audio at 192kHz – amazingly, your average CD quality is only 16-bit at 44.1kHz!
you can solder on: an unpopulated RCA composite video-out pin and an unpopulated RUN mode pin. They’re found just below the far-right end of the GPIO header. What’s the RCA one for? You use that one to hook the Pi Zero up to older display gear, in case you can’t or don’t want to use the mini-HDMI. The great thing is that you can even use the RCA composite video-out to connect your Pi Zero to an old TV. And what about the RUN pins? Solder these on and you can then connect a reset switch to your Raspberry Pi Zero. This can be really useful if you’re testing new projects and find that your Pi is crashing or just hanging for an unreasonable amount of time – if you can’t reboot from the command line, you can just use the switch instead.
Scroll pHAT, £10 Perfect for low-power displays, like simple messages, images and graphs, the Scroll pHAT provides a 55-pixel (11 x 5) matrix of white LEDs with brightness control. It’s the type of thing that’s useful for displaying system info like CPU usage, or even displaying incoming tweets via the Twitter API.
Just one more question – can I use my HATs with this? Yep, they will all work just fine. There are some new ones out that have been made specifically to fit the Pi Zero, though – check out the boxout to the right.
WAYS TO
MASTER
RAS PI Get the most out of your Pi with these expert tips and tricks Whether you’ve just gotten your lucky hands on a powerful, petite new Raspberry Pi Zero or you’re looking to maximise the efficiency of the faithful Raspberry Pi you already own, this is everything you need to get started on boosting not only your Raspberry Pi but your own knowledge. The Raspberry Pi is a versatile little piece of hardware, with a wonderfully creative amount of potential, and though most of you will be familiar with its more day-to-day functions, there are always tweaks and adjustments to be explored that can tailor the Raspberry Pi to your own desired user experience. From soldering to useful Python features, GPIO interrupts to remote access, and a whole
lot more, this masterclass in technical and practical skills covers fifty useful ways to get the most out of your Raspberry Pi. If you’re anything like us, you’ll have been tinkering around with your Raspberry Pi Zero already, but whatever your skill level there’s still something here for you to get your teeth into. Every single tip here will work on your Pi Zero; just keep an eye out for the ‘Zero’ flash, indicating which are relevant to the Pi Zero only. Those of you using an earlier model won’t be missing out, though – how could we ever neglect our favourite single-board computer? – the majority of tips, tricks and tweaks are still suitable for any other official Raspberry Pi as well. Have fun tinkering!
Master Ras Pi
01
Raspberry Pi Zero
BCM2835 This is the same processor used in the original Raspberry Pi models, although it’s been overclocked to run at 900MHz and is about 40% faster.
Pinless port The GPIO header is the same as in the newer models but comes without the pins. You’ll need to solder on a 40-pin male header block.
Zero
The newest member of the Raspberry Pi family, this tiny board is the result of the Raspberry Pi Foundation’s efforts to reduce the cost of the computer even further. Not content with reducing its $35 computer to $25, which can still be a little pricy for some people, it has cut it right down to $5 by making a few adjustments. Here’s what you need to know.
Mini-HDMI You’ll need a mini-HDMI to HDMI cable to use your monitor as a display. You can use your TV with the RCA video-out if you solder the pin.
Vital knowledge Essential tricks to improve day-to-day use
Micro-USB One of these ports is for your micro-USB power supply. To use peripherals and a Wi-Fi dongle, you’ll need a micro-USB to USB adaptor so you can attach a powered USB hub.
02
Ensure you have the latest packages on Raspbian by running sudo apt-get update; sudo apt-get upgrade from a terminal
03 Find your Pi on a network
If you can’t log into your router to view DHCP leases, you can use nmap (available for Linux, Windows and Mac) to scan the local network to find a Raspberry Pi. You need to know the address range of your local network (common networks are 192.168.1.0/24 and 192.168.2.0/24). nmap -sn 172.17.173.0/24 will run a ping scan and output a list of devices. An example output is:

Nmap scan report for raspberrypi.home.org (172.17.173.21)
Host is up (0.0011s latency).

04 Experiencing stability issues?
By far the biggest cause of stability issues is the power supply you are using, especially now the Raspberry Pi 2 has more CPU cores and so uses more energy. The recommended current supply for a Raspberry Pi 2 or B+ is 1.8 amps. If you are still having issues, your SD card may be worn out.

05 Forgot to type sudo?
Sudo is used to get root privileges for a specific command (for example, editing a file in /boot or /etc). The variable !! in bash is the previous command that you ran. If you typed something but forgot to type sudo first then, provided you haven’t typed anything else since, sudo !! will rerun it with root privileges.

06 Enable max current
If you have a good power supply (ie 2 amps or more) and want to be able to connect a current-hungry device to your Pi 2 or B+, such as a USB hard disk, then you can add the line max_usb_current=1 to /boot/config.txt, which will raise the max current over USB to 1.2 amps instead of the default 600mA.
07
CONTROL OPTIONS sudo raspi-config can be used to change several options. For example, to enable the camera, overclock the Pi, change boot options, and expand the filesystem to use the full space on the SD card.
Transferable tips
Handy hints and vital info to get the most out of any Raspberry Pi model

08 Remote access with SSH
SSH stands for Secure Shell. You
can use SSH to connect to a terminal session on your Raspberry Pi over your local network. This means that you can use the Pi with only a network cable and power cable connected, which is much more convenient than needing a screen and keyboard. Once you have found your Pi on the network you can log into it using the default username of ‘pi’ and the default password of ‘raspberry’. Both Linux and Mac will have built-in SSH clients, so simply open the terminal and type ssh pi@192.168.1.5, assuming that 192.168.1.5 is the address of your Pi. On Windows, you can use an SSH client called PuTTY, which is free to download, doesn’t need installing and is easy to find with a search engine.
Copy files using SCP 09 SCP stands for Secure Copy Protocol, and is a way for you to copy files (and directories) to and from your Raspberry Pi over the network. A good use of this would be if you have art for a PyGame project and you need to copy it over. FileZilla is a decent graphical SCP client to use (connect to your Pi on port 22 with the username ‘pi’ and password ‘raspberry’). If you are using SCP from the terminal then the syntax is as follows:
Above Get a terminal on your Raspberry Pi from the convenience of your main computer
scp -r testdir pi@192.168.1.5:~

The -r (recursive) flag means that directories are copied in addition to files. This will place your test directory in /home/pi/ (because ~ points at the logged-in user’s home directory on Linux). Simply swap the syntax for copying from the Raspberry Pi instead:

scp -r pi@192.168.1.5:~/testdir .
The dot refers to the current directory in Linux, so the testdir directory would be copied to the current directory.
Raspberry Pi B+

10 USB controller
The USB controller on the Pi is theoretically capable of 480Mbit/s. On earlier Pi models the performance is limited by the single-core ARM chip, but it is possible to get close to that limit on a Pi 2.
11 Power / Act LEDs
The remaining LEDs are for power and SD card activity. The power LED is red, and the activity LED flashes green when there is SD card activity. The activity LED also flashes when powering down to indicate when it is safe to disconnect.
SD cards
SD cards aren’t really designed for running an operating system. Higher class SD cards don’t necessarily mean better performance for lots of small files. The Raspberry Pi Foundation recommends its own SD card, which is an 8GB class 6 microSD card.
Wi-Fi module
To get your Pi online without an Ethernet connection, you’ll need a Wi-Fi module. We advise using an official one, but look for 802.11 b/g/n modules if going third-party.
13 Ethernet LEDs
There are two LEDs below the Ethernet port of a B+ onwards. The orange light means there is a link, and the flashing green light means there is activity.
15
Get your Pi Zero online
Using an Ethernet adapter with your Pi Zero

Step One: The parts
Zero
You’ll need an adapter to connect the micro-USB port to a full size USB port. This adapter, along with a mini-HDMI to HDMI adapter and GPIO pin headers, can be found here: http://swag.raspberrypi.org/products/pizero-cables. You’ll also need a USB to Ethernet adapter that works with Linux (most will work out of the box). These can easily be found on Amazon for around £10.
Navigate your way around the command line with ease

18 Listing open ports
Lsof stands for List Open Files, and you can install it with sudo apt-get install lsof. If you run sudo lsof -i you will see a list of all open network connections on the machine, including applications that are listening for new connections (for example, the SSH daemon).
Step Two: Configuration As the Pi Zero uses the normal Raspbian image, and eth0 (ie the builtin Ethernet card) is missing, there is no configuration necessary because the USB Ethernet card will take the missing LAN chip’s place as eth0.
Five terminal tricks
Step Three: Testing it out If the activity lights on your USB to Ethernet adapter are lit, then it should be working fine. You can now use the remote access tips from other sections of the article with your Pi Zero!
16 Set up a VNC server
VNC stands for Virtual Network Computing. Using VNC, you can access the Raspbian desktop over the network (meaning you only need power and Ethernet connected). There is no audio support but for any other tasks (including the use of Pygame), VNC should provide acceptable performance. You can install a VNC server with the following commands…
19 Using wget to download files
Wget can be used to download
files from the Internet from the terminal. This is convenient if you need to download a zip file containing source code and then extract it. For example:

wget https://example.com/source.zip
unzip source.zip
17 The sync command ensures that everything has been flushed to permanent storage from the cache. It can be useful to run after updating packages, for example.
sudo apt-get update
sudo apt-get install tightvncserver

20 Using htop to monitor load
Htop is an improvement on the original top utility.
There are several free VNC clients available, so a search engine will help find a suitable one. To start a VNC session on your Pi, log in over SSH and then run tightvncserver. You will be prompted to enter a password the first time you run it. You can specify a screen resolution with the -geometry option, for example -geometry 1024x768. You can kill an existing VNC session with tightvncserver -kill :1, where 1 is the session number. To connect to that session on a Linux machine, you could use the command: vncviewer 172.17.173.21:1, substituting for the IP address of your Raspberry Pi.
It lets you see the CPU and memory load on your machine in real time, which is useful to know how intensive your application is being on a Raspberry Pi. You can install it with sudo apt-get install htop.
21 Reboot from the terminal
It seems like a simple tip but not everyone knows that you can reboot your Pi with the command sudo reboot, and similarly power it off with sudo poweroff. You need to disconnect and reconnect the power after powering off the Pi, though, as there is no on/off switch.
Left Access the Raspbian desktop from your main computer over your local network

22 Using screen
Screen (available via apt-get) is great if you have something you want to run that takes a long time. You can run it in a screen session, detach from it and reconnect later on. Example usage:
screen -S test (Ctrl + A, d to disconnect)
screen -ls to list sessions
screen -r test to reconnect
exit will kill bash and therefore end the screen session
23
SPLIT FILES If your program is large, you can split it up into multiple files. If you have a file called MyClass.py containing a class MyClass, you can use it from another script with from MyClass import MyClass
WorldMags.net Become a Python Pro Here are some handy Python features that will make your code really stand out The main function 24 Having a main function (if __name__ ==
if __name__ == “__main__”: # Test MyClass m = MyClass() m.print_increment() m.print_increment()
__main__) in Python is useful. It makes it easier to see the difference between where your functions/ classes are defined and where your entry point is. More importantly, it allows you to only run code from that Python script in the case that it is the main script being run. This means that you can create a class and have code that tests that class to make sure it works. However, if you were to include your file in another program, the code in the main method would not be run and you can just use the class that you require.
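MyClass itself isn’t defined in the excerpt; here is a minimal sketch of a class consistent with that test block (the counter behaviour is our assumption, not from the article):

```python
class MyClass(object):
    """Hypothetical stand-in for the MyClass tested in the __main__ block."""
    def __init__(self):
        self.count = 0

    def print_increment(self):
        # Increment an internal counter and report the new value.
        self.count += 1
        print("count is now %d" % self.count)

if __name__ == "__main__":
    m = MyClass()
    m.print_increment()  # count is now 1
    m.print_increment()  # count is now 2
```

Importing this file from another script would define MyClass without running the test block.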
25 Command line arguments
Command line arguments enable your program to run in various modes depending on the options that the user passes to the program when running it. Command line arguments in Python are given in sys.argv, which is a list of arguments. The first argument is always the name of the script that has been executed. You can check for command line arguments in this list; if the length of the list is 1 and you require arguments, print a help message with instructions:

import sys

help_msg = "My help message"
debug_mode = False

if __name__ == "__main__":
    if len(sys.argv) == 1:
        print help_msg
        sys.exit()
    if "--debug" in sys.argv:
        debug_mode = True
        print "Using debug mode"
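The same checks read well wrapped in a small function, which also makes them easy to test. A sketch in the same spirit (the parse_flags name is our own):

```python
import sys

help_msg = "My help message"

def parse_flags(argv):
    """Return (debug_mode, show_help) from an argv-style list.

    argv[0] is the script name, so real options start at argv[1].
    """
    show_help = len(argv) == 1
    debug_mode = "--debug" in argv[1:]
    return debug_mode, show_help

if __name__ == "__main__":
    debug_mode, show_help = parse_flags(sys.argv)
    if show_help:
        print(help_msg)
    if debug_mode:
        print("Using debug mode")
```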
26 Using properties and setters
Properties in Python are a way to write getters and setters for variables. You need new-style classes (you need to inherit from object). We
28 You can use negative indexes on Python lists to get the most recently added item(s). mylist[-1] will get the latest thing that was added to the list.

Python conferences
OSCON USA 2016
16-19 May
oreil.ly/1QKVTTB
The Open Source Convention, held in Austin, Texas this year, is a place of innovation for sharing new ideas and technologies

PYCON US 2016
28 May - 5 June
us.pycon.org/2016
The largest gathering of the global Python community takes place in Portland, Oregon in 2016, featuring two tutorial days, three talk days and four sprint days

FOSDEM 2016
30-31 January
fosdem.org/2016
This free, non-commercial event organised by volunteers takes place in Belgium and has grown wildly over the years

OSCON Europe 2016
17-20 October
oreil.ly/1VqRWnE
Europe’s OSCON event mirrors the US format, with training sessions, keynotes and tutorials through its four days
Five hidden features
Uncover the secrets to be found in Python

30 List comprehension
List comprehension is a way of generating a list on a
Above Not sure whether to pick Python 2 or 3? There’s a guide to help in the docs: bit.ly/1jyd799
have created a class called Distance where the distance in miles is a variable, and the distance in kilometres is a property. Getting the property multiplies the distance in miles to give kilometres, and setting it converts back and stores the value in miles.
class Distance(object):
    KM_PER_MILE = 1.60934

    def __init__(self, mi):
        self.miles = mi

    @property
    def km(self):
        return self.miles * Distance.KM_PER_MILE

    @km.setter
    def km(self, value):
        self.miles = value / Distance.KM_PER_MILE
import RPi.GPIO as GPIO

Then, you want to set the pin numbering convention to the Broadcom mode (as in GPIO17 will be pin 17, rather than being pin 11 on the Pi):

GPIO.setmode(GPIO.BCM)

Step Two: Pin setup
Now you need to set up your pins as either inputs or outputs with the following syntax:

GPIO.setup(5, GPIO.OUT)
GPIO.setup(6, GPIO.IN)

If you have several pins to set up, it makes sense to put them in a list and then do something like:

for p in pins:
    GPIO.setup(p, GPIO.OUT)

Step Three: Get values
Once the pins are set up, getting values from them is easy. To get the value of a pin (0 for low, and 1 for high), use the following syntax:

value = GPIO.input(6)
>>> [x/2.0 for x in range(0, 10)]
[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
31 Assertions Assertions are
useful when writing algorithms. They are used to check the data is valid before and after your algorithm. So, for example, if you are expecting your results in a certain range you can check them. The syntax is assert(boolean expression). For example:
>>> assert(0 < 1)
>>> assert(1 < 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError

32 Throwing exceptions
It’s useful to throw exceptions
Get to grips with the GPIO library
Start by importing RPi.GPIO:
single line. You can use list comprehension to extract values from other list-type data structures. For example, if you have a collection of Distance objects, you could get just the distance in miles into a list: miles = [x.miles for x in distances].
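A runnable sketch of that extraction, using a stripped-down stand-in for the Distance objects mentioned in the text:

```python
class D(object):
    """Minimal stand-in for the Distance class used in the text."""
    def __init__(self, miles):
        self.miles = miles

distances = [D(1.0), D(2.5), D(26.2)]

# Pull one attribute from each object into a plain list, in a single line.
miles = [x.miles for x in distances]
print(miles)  # [1.0, 2.5, 26.2]
```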
…and to set the value of a pin (with either 0 or 1, or True or False) use the following syntax:
GPIO.output(5, True)

Please note that if you are starting with Raspbian Jessie, you shouldn’t need sudo to access the GPIO pins, but in previous versions of Raspbian you will need to use sudo to run your code.
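Putting the three steps together gives a classic blink sketch. RPi.GPIO only exists on a Pi, so this version falls back to a console simulation elsewhere; the pin number and timing are example choices, not requirements:

```python
import time

# RPi.GPIO is only installed on a Raspberry Pi; degrade gracefully elsewhere.
try:
    import RPi.GPIO as GPIO
    ON_PI = True
except ImportError:
    ON_PI = False

def blink(pin=5, times=3, delay=0.01):
    """Toggle a pin high/low `times` times; returns the states written."""
    states = []
    if ON_PI:
        GPIO.setmode(GPIO.BCM)      # Broadcom pin numbering, as above
        GPIO.setup(pin, GPIO.OUT)   # configure the pin as an output
    for _ in range(times):
        for state in (True, False):
            if ON_PI:
                GPIO.output(pin, state)
            states.append(state)
            time.sleep(delay)
    if ON_PI:
        GPIO.cleanup()              # release the pin when done
    return states

print(blink())
```

On a Pi with an LED (plus resistor) on GPIO5, this flashes it three times; elsewhere it just records the sequence of states.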
in your code if it’s possible that it can go wrong. That way the person calling your code can put it in a try-catch block and handle the exception if it is raised. If the caller does not handle it then their application will crash.
>>> raise ValueError(“Supplied value out of range”) Traceback (most recent call last): File “”, line 1, in ValueError: Supplied value out of range your script from a terminal 33 Running If you begin your Python script with #!/usr/bin/env python and then mark it as executable, you can execute it from bash just like a normal script without having to type Python before it:
$ echo ‘#!/usr/bin/env python’ > test.py $ echo ‘print “Hello World from Python!”’ >> test.py $ chmod +x test.py $ ./test.py Hello World from Python! the interpreter 34 Using Did you know you can start a Python interpreter to test things are working without having to write a Python script? Just type python into the terminal and then simply start writing Python.
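Tips 31 and 32 can be combined into one runnable sketch: the callee validates its input with an exception rather than a bare assert, and the caller decides what to do when it is raised. The function and variable names here are our own illustration, not from the article.

```python
def set_speed(percent):
    # Validate the input and raise, as tip 32 suggests,
    # so the caller can decide how to react.
    if not 0 <= percent <= 100:
        raise ValueError("Supplied value out of range")
    return percent

# The caller wraps the risky call in try/except (tip 32's advice).
try:
    speed = set_speed(150)
except ValueError as err:
    speed = 0          # fall back to a safe default
    message = str(err)
```

If the caller omitted the try/except, the ValueError would propagate up and crash the program, which is exactly the behaviour the tip warns about.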
WorldMags.net
www.linuxuser.co.uk
65
35 Remove static
Make sure you haven't built up any static charge when working with electronics. Touching a grounded radiator in your house can be a good way of getting rid of static charge.
Hardware how-to
GPIO interrupts, pulse width modulation, soldering and more
GPIO explained

36 GPIO orientation
The best way to verify you have the Pi oriented the right way is to flip it over. The underside of pin 1 has a square solder hole instead of a circular one.
37 Serial console
If you connect the UART0_TXD pin to the receiver pin of a USB-to-serial converter, and the UART0_RXD pin to the transmitter pin of that converter, you can set up a serial console.

38 Unused GPIO pins
The green coloured GPIO pins are unused by default and are therefore the best pins to use in your own personal hardware projects. The other pins may have more than one available purpose.
41 How to solder
Step One: The tools You need a soldering iron (30-40 watts, or ideally a temperature-controlled one), a stand to put it in, a damp sponge to clean the tip, and some thin solder. Lead solder is easier to work with than lead-free, and an iron with a square tip conducts heat better than a pointy one.
40 Pin layout

3V3 Power            01  02  5V Power
GPIO2 SDA1 I2C       03  04  5V Power
GPIO3 SCL1 I2C       05  06  Ground
GPIO4                07  08  GPIO14 UART0_TXD
Ground               09  10  GPIO15 UART0_RXD
GPIO17               11  12  GPIO18 PCM_CLK
GPIO27               13  14  Ground
GPIO22               15  16  GPIO23
3V3 Power            17  18  GPIO24
GPIO10 SPI0_MOSI     19  20  Ground
GPIO9 SPI0_MISO      21  22  GPIO25
GPIO11 SPI0_SCLK     23  24  GPIO8 SPI0_CE0_N
Ground               25  26  GPIO7 SPI0_CE1_N
ID_SD I2C ID EEPROM  27  28  ID_SC I2C ID EEPROM
GPIO5                29  30  Ground
GPIO6                31  32  GPIO12
GPIO13               33  34  Ground
GPIO19               35  36  GPIO16
GPIO26               37  38  GPIO20
Ground               39  40  GPIO21
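Scripts that mix BCM numbering (as set by GPIO.setmode(GPIO.BCM)) with the physical pin numbers above sometimes need to translate between the two. A sketch of the mapping, transcribed from the 40-pin layout table; the dictionary and helper names are our own:

```python
# Physical header pin number for each BCM GPIO number,
# transcribed from the 40-pin layout table above.
BCM_TO_PHYSICAL = {
    2: 3, 3: 5, 4: 7, 14: 8, 15: 10,
    17: 11, 18: 12, 27: 13, 22: 15, 23: 16,
    24: 18, 10: 19, 9: 21, 25: 22, 11: 23,
    8: 24, 7: 26, 5: 29, 6: 31, 12: 32,
    13: 33, 19: 35, 16: 36, 26: 37, 20: 38, 21: 40,
}

def physical_pin(bcm):
    """Look up the physical header pin for a BCM GPIO number."""
    return BCM_TO_PHYSICAL[bcm]
```

For example, physical_pin(17) confirms the detail mentioned in the GPIO library box: GPIO17 sits on physical pin 11.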
39 Soldering headers
With the Pi Zero, you will need to solder GPIO headers onto the board. Using a reusable adhesive like Blu-Tack to hold the headers in place will make the job much easier.
Solder with ease in just a few steps

Step Two: Soldering
Once the iron is hot, apply some solder to the tip of the iron and wipe off any excess on the sponge. When pressing the iron to the joint, the tip should be touching both the joint and the wire you are soldering. Do not apply solder directly to the iron; apply it to the joint/wire.
Step Three: If you make a mistake
You probably won't need as much solder as you think you will, so be sparing. Still, accidents can happen, and the best way to remove excess solder is to get some de-soldering braid. This is a copper braid that you place over the excess solder before pressing the iron on top; the molten solder is sucked up into the braid.
Master Ras Pi
Five-minute practical fixes
Quick and easy things you can try in a few minutes

Above Here are two waves with duty cycles of roughly 80% (top) and 20% (bottom)

42 Stick to 3.3V
Ensure you are only using 3.3V voltage levels when working with the GPIO pins. Connecting anything higher than 3.3V to a GPIO pin will likely damage your Pi.
43 Pulse width modulation
Pulse width modulation is where the output of a GPIO pin is high for a percentage of time and low for the remaining percentage of time. The percentage of time where the pin is high is called the duty cycle. Pulse width modulation is very useful in electronics, especially when it comes to tasks like controlling the brightness of LEDs. To do this in Python:

GPIO.setup(5, GPIO.OUT)
# Frequency of 50 Hz
p = GPIO.PWM(5, 50)
# 50 per cent duty cycle
p.start(50)
# Do work or wait here so the
# program doesn't exit
# 70 per cent duty cycle
p.ChangeDutyCycle(70)
p.stop()

44 GPIO interrupts
An interrupt is when a hardware event triggers the CPU to stop what it is currently doing and run an interrupt request handler. The Raspberry Pi can trigger interrupts when a GPIO pin goes high (ie from 0V to 3.3V) or low (from 3.3V to 0V). This can be more efficient than polling the state of a GPIO pin, as you only have to deal with the pin changing when it actually happens. Plus, it can simplify the flow of your code. The use of interrupts requires root privileges, so you will have to execute your code with sudo. The code provided demonstrates how to set up a callback function to deal with a rising edge.

45 Using a push button
Refer to the circuit diagram below. When the push button is pressed, the left pins are connected to the right pins. By using a 10K pull-down resistor connected to ground, the purple output wire (connected to a GPIO pin configured as an input) defaults to 0V when the button is not pressed. The right-hand side of the button is connected to 3.3V, so when the button is pressed, the left-hand side of the button will also be connected to 3.3V. The left-hand side is connected in parallel with the purple signal wire and with the 10K resistor to ground. Because the GPIO input draws almost no current, the purple signal wire will read 3.3V while the button is held down.

Above The blue wire is the ground, the red one is 3.3V and purple is for the output

46 Need more current?
You should only draw a few milliamps of current from the GPIO pins of the Raspberry Pi. If you need more current than that (or you need to switch a higher voltage), then you can use the GPIO pin to switch a transistor connected to a stronger power source.

47 Disable built-in sound card
If you are using a USB sound card then it can be easier to disable the built-in sound card completely:

sudo rm /etc/modprobe.d/alsa*
sudo editor /etc/modules

Change snd-bcm2835 to #snd-bcm2835 and save, then sudo reboot.

48 Use a multimeter to verify orientation
You can use a multimeter to verify the orientation of the GPIO pins. Looking vertically, with the USB ports at the bottom, the bottom-left pin (39) is ground and the top-right pin (2) is 5V. Putting the negative probe on ground and the positive probe on 5V should show ~5V.

49 Set up serial console
You can use raspi-config to set up a serial console, so you can get a login shell using the UART0_TXD and UART0_RXD pins by connecting them to a USB-to-serial adapter:

sudo raspi-config
8 (Advanced Options)
A8 (Serial)

Select Yes to enable the serial console. Finish, then reboot.

50 LED resistor calculations
To calculate an LED's series resistor value, use Ohm's law: resistance = voltage / current. The voltage of a GPIO pin is 3.3V. You need to know the voltage drop of the LED and its suggested current, so R = (3.3V – voltage_drop) / led_current. Using 2V as the voltage drop and 20mA as the current: (3.3 – 2.0) / 0.02 = 65 ohms. Round up to the next available resistor value.
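The calculation in tip 50 is easy to wrap in a small helper; a sketch, where the function name and the default supply-voltage argument are our own:

```python
def led_resistor(voltage_drop, led_current, supply=3.3):
    """Minimum series resistance in ohms for an LED,
    using R = (supply - voltage_drop) / led_current."""
    return (supply - voltage_drop) / led_current

# The worked example from tip 50: 2V drop at 20mA on a 3.3V GPIO pin.
r = led_resistor(2.0, 0.02)
```

As in the tip, you would then round the result up to the next standard resistor value you have available.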
Internet of LEGO Cory Guynn brings an entire LEGO city to life using Internet of Things technologies
Cory Guynn
is an IoT and cloud computing enthusiast, and a veteran Cisco Meraki systems engineer specialising in international deployments. To better understand IoT use cases, he has replicated them with LEGO and electronics
Like it?
Cory is documenting the entire build process over at internetoflego. com, and is also uploading the scripts for his smart city. From the city lighting and train automation through to the PubNub hub setup, it’s all there for you to explore
Further reading
SAM Labs launched an electronics kit earlier this summer that essentially combines LEGO and IoT: there are 15 individual blocks, each wireless and with a different sensor, slider or button, that can be snapped together: samlabs.me
How long have you been working on this project? I would say since September, when I really launched the blog. My wife got me a LEGO train set for Christmas last year and it got me back into LEGO. That’s really how my engineering brain started as a kid – it was all about LEGOs. At the same time, I work for Cisco Meraki and we do a lot of stuff in the cloud – wireless and switching technologies – and the hot topic in Cisco is the Internet of Things, how it’s going to be a multi-trillion dollar business over the next couple of years, and I wanted to wrap my head around it. It’s one of those elusive terms, like ‘cloud’. What does ‘cloud’ really mean, right? So I started to look up all the technologies. I started with Node-RED, IBM’s Node.js application for wiring things together, and that made it very easy to get the concepts; I’m not having to do a lot of coding at that stage. I started writing some front-ends to splash pages for Meraki, wireless hotspots… and once I understood what the concepts were, I started quickly picking up Node.js. And when I discovered Johnny-Five, which is this JavaScript framework for robotics, I was just hooked. It was very easy to understand. I had a Raspberry Pi; I later bought an Arduino so I could do some servo controls. And then as I was running each of these various API sections, I was like, ‘Okay, how do I control a servo? How do I control a light?’ So I started making
small programs and then combining that concept with LEGO, because I needed a medium to manipulate. I combined my hobby of LEGOs and my engineering brain with all these electronics and programming. It just made it a really fun and interesting project to work with. So that's where it started – I've opened Pandora's box now!

How are you using things like Johnny-Five and Cylon.js?
Johnny-Five and Cylon.js are more or less frameworks to quickly allow you to write code to control physical devices. And so, like Raspberry Pi traditionally uses Python or Arduino normally uses 'sketches', which are basically C++, Johnny-Five is JavaScript. I instantiate a servo by assigning it to a pin, and then just like writing a web page, if someone presses a button, it changes the state of that device. So it lends itself really well to robotics. The difference is that if you use something like the normal Arduino sketches, it's very much loop-driven; you have a loop, it looks to a sensor, and if it's on it does something, and it just reiterates that loop. Whereas JavaScript is basically built around event-handling. So, I have a web page with tons of buttons, links, and it's just waiting for an activity – that's the change in paradigms. So now I can say, 'If the sensor is triggered', like a button, 'then I do something'. With robotics, if I have a robot
Right Cory hacked his model train with LIRC and Node.js, using an infrared sensor to stop the train as it arrives at the station platform
going towards a wall, if the proximity sensor detects it's getting close then I change the servo on the wheels, and they change direction. That's why I like using JavaScript and Node.js as the server side. And Johnny-Five is really well-documented. They have an amazing book – Make: JavaScript Robotics – written by Rick Waldron.

Why did you decide to connect everything through PubNub?
I'm not partial to PubNub. I do like it, but it's one of those things where I was building an API for my city. My city is basically a Raspberry Pi and Arduino, and I wanted to now do more of the Internet of Things, whereas before it was just robotics, right? So I was trying to figure out a way to connect a website, other WiFi-enabled devices, and I built a couple of versions of this: the initial one was a RESTful API. I would just go to some URL and it would turn on a light. Then I decided to do web sockets in a traditional way and there were some limitations, because in that model the Raspberry Pi had to be publicly accessible. Long story short, I was like 'Yes, I can do this', but I want to do something that's more of the publish-subscribe model, where I can have random devices that can subscribe to a feed of data like a Twitter feed, and if it's of interest to them, they respond, and if they have an update, they blast it out to the other devices. There are numerous ways you could do this – socket cluster, things you can build yourself – but I found PubNub online and they had a really good explanation of how the concepts work. They had examples, they had SDKs that would work across Node.js, Arduino, Raspberry Pis. Ultimately, it was a low barrier to entry and I was able to quickly build a server front-end to my city, add the little subscription code to it, and then I could easily add little hook-ins on my blog so if somebody goes to my blog, it would actually turn on lights in my city. I'm working on an idea where I have Bluetooth Low Energy beacons off my access point, and I can have an app on my phone so that if it detects the Bluetooth LE beacon, it publishes to PubNub and then that tells my city to start the train – maybe deliver a beer? [laughs] These are the sorts of things I have in my head!

What's next in line for construction?
I'm gonna build a weather station! I have temperature sensors, humidity sensors… I'm going to build a collection of those, and I've just now got this Arduino with Wi-Fi – the ESP8266 module – on it. Basically, I can now run an Arduino somewhere else, and I'm going to attach those sensors to it, then tie that back into the entire system and also combine it with a weather API, from weather.com, so I can see regional and local weather. Then I'm going to have a scrolling screen within the city, maybe above the bank, saying 'The temperature is…' I'm just figuring out if I'm going to do it through PubNub or the more traditional model, which would be the MQTT standard for low-overhead communication per sensor.

My Pi project

Left City lighting is controlled via a master switch. There are 12V LED strips cut down into three-LED segments for the buildings, plus micro LEDs the size of a human hair
Right The station platform elevator, driven by a DC motor and a gear system, uses an ultrasonic sensor in the base to detect its distance from the ground and stop its movement
Below left A script on the Internet of LEGO blog publishes "WordPress visitor" to PubNub when a page is visited, which then fires up the disco lights
I've done all of this – PubNub, web sockets, I'm looking at MQTT – and it's not that any one is always better; it's knowing when to use the right tools for the job. The better I understand all of the technologies available, the easier it is to figure out what's better for dashboards, better for sensors running off a battery where energy and power are a concern… Now I can cherry-pick the right technology once I've really understood them all. Ultimately, it's about creating concrete use cases in a medium that I can manipulate. As I talk about the Internet of Things to my customers at Cisco, thinking about the wider industry, it's one thing to talk about it, but once it's tangible, you can then easily scale that up to a smart city, a smart warehouse – you're just using real materials instead of little plastic bricks.
Python column
Embed Python in C This month, we will learn how to use Python code within your usual C program to get the best of both worlds
Joey Bernard
is a true renaissance man, splitting his time between building furniture, helping researchers with scientific computing problems and writing Android apps
Why Python?
It's the official language of the Raspberry Pi. Read the docs at python.org/doc

Back in issue 155, we looked at how to call C functions from within a Python program to get more speed. But there are times, within a C program, when you may want to execute some piece of Python code. Maybe you want to be able to run user code within your program, for example: this means you can enable users to use plug-ins to extend your program's functionality. The way we can do this is by embedding Python within the C program. We will look at how to embed, how to run your Python code, and how to interact with the Python interpreter you've set up. This is functionality built into Python itself, so you don't need to install anything extra on your Raspberry Pi, aside from the development package for Python and GCC. You will need to install them with the command:

sudo apt-get install python-dev gcc

You should now have all of the tools you need to compile your code.
The first step is to start the interpreter. To access the functions you need, you will have to add the following line to the head of your C source code file:

#include <Python.h>

You can now start to embed Python. The first function you need is void Py_Initialize(). The only other functions that can be called before you initialise the interpreter are Py_SetProgramName(), Py_SetPythonHome(), PyEval_InitThreads(), PyEval_ReleaseLock() and PyEval_AcquireLock(). Once this function finishes, you can start to interact with this interpreter. It starts up the interpreter, and loads the core modules __builtin__, __main__ and sys. But what about other modules? You can set the search path, where the interpreter will look to find modules, by using the function void Py_SetPythonHome(char *home). If you need the information, you can find the current module path with the function char* Py_GetPythonHome(). Initialisation does not set sys.argv, however; for that you need to use the function void PySys_SetArgvEx(int argc, char **argv, int updatepath). This way, you can access any command line arguments that your Python code needs. You can check to see whether the interpreter is properly initialised by using the function int Py_IsInitialized(). It returns an integer for either true (nonzero) or false (zero).
The simplest way to use your new interpreter is to use the function int PyRun_SimpleString(const char *command). This function takes a string that contains some arbitrary bit of code. If you have multiple lines of code that you want to run, you can use newline characters to separate lines. For example, you can print out the sine of an angle with:

PyRun_SimpleString("import math\na = math.sin(45)\nprint('The sine of 45 is ' + str(a))");
This function is a simplified version of int PyRun_SimpleStringFlags(const char *command, PyCompilerFlags *flags). This not only takes the command string, but also takes a struct of compiler flags for the Python compiler. You will need to check the development documentation online to see the details of these compiler flags.
Let's say that you have a more complicated bit of code to execute. There are equivalent functions to work with Python script files. The simplified version is int PyRun_SimpleFile(FILE *fp, const char *filename). You actually hand in two references to your script. The first is a file handle that you get from the C function fopen() to open your script file, and the second is the name of the script you just opened. You will need to open your script file with read permission. You also now need to worry about whether your program will have the correct file permissions on the file system to open this script. Proper coding means you should check this call to fopen() to verify that it completed and gave you a valid file handle. This simplified version doesn't use any compiler flags, and closes the file handle after the function returns. The full version of the function is int PyRun_SimpleFileExFlags(FILE *fp, const char *filename, int closeit, PyCompilerFlags *flags). If closeit is true, then the file handle is closed. If the script is something you will want to run several times, set closeit to false so the file handle remains open. You can set any flags for the Python interpreter in the flags struct, similar to the PyRun_SimpleStringFlags() function call.
If this simple way of running code isn't powerful enough, there are ways of interacting with the interpreter in a more direct fashion. The first step is learning how to send data back and forth between the Python interpreter and the main body of your C program. The basic workflow is to convert your C variables to their Python equivalents, then call the Python functions you wish to use, and convert the Python results back into their equivalents within C. Python is an object-oriented language, so the core of communicating with the interpreter happens with the PyObject construct. This provides the base for all the other types of objects you can use to communicate with Python. For example, create a Python string object with:

PyObject *pName;
pName = PyString_FromString("print('Hello World')");

You can then use this Python object when using Python functions. For example, if you stored the name of a Python module in the string pName, you could import it with the function call PyImport_Import(pName);. You can also get access to Python functions from your C code. You store a reference to the function in a PyObject, just as with data objects. The first step is to get the dictionary of the function names for the module in question with:

my_module = PyImport_AddModule("__main__");
my_dict = PyModule_GetDict(my_module);

Once you have the dictionary, you can get a reference to a specific function with:

my_func = PyDict_GetItemString(my_dict, func_name);

where func_name is a string containing the name of the function you want access to. You can then run the function with a command like:

PyObject_CallObject(my_func, NULL);

With this access, you should be able to do just about anything you wish in Python.
Up to now, we have been looking at code interacting with the Python interpreter. But there are occasions when you want to allow the end user to have access to the interpreter. In these cases, you probably want to give your user access to a full Python console. You can do just such a thing with the function call Py_Main(argc, argv), where you hand in the argc and argv that you have from the C side of your program. This is fine for a console-based program, but for a GUI program, you need to create some kind of terminal window to allow the user to interact with the Python interpreter. This console will continue until the user explicitly quits from Python.
The last thing you need to do is to clean up after the interpreter. You can do this with the function void Py_Finalize(). The major issue with this function is that it destroys objects in a random order. If they depend on other objects, they may not be able to get cleaned up correctly. If you then try and re-initialise the interpreter again, it may fail due to an unclean finalisation step.
Now that you have your program written, you need to compile it. You need to include flags to tell the compiler where to find everything. Luckily, you can get these from Python itself. The flags needed for compiling are available with the command python-config --cflags. You also need to know where to find the libraries to link in, which are available with python-config --ldflags. Now, you have access to Python anywhere, even within another program.
With the Python interpreter, you aren't just limited to what is already available; you can extend the available functionality by defining your own Python objects, with their own methods and data, within your C code. These newly-created objects can then be called from within the Python interpreter. They are defined as static objects, with code like:

static PyObject* my_func(PyObject *self, PyObject *args) {
    ...
}

The new PyObject contains the executable code for the methods to be used for your new object. You also need to create a method definition to be able to register the details with the Python interpreter. You do this by creating a PyMethodDef array:

static PyMethodDef my_methods[] = {
    {"my_method", my_func, METH_VARARGS, "This is my method"},
    {NULL, NULL, 0, NULL}
};

Once you have finished these two parts, you are almost ready to start using your new module code. You need to initialise it with the function:

Py_InitModule("my_module", my_methods);

Now, you can import this new module in your Python code, just like any other module installed on your system. Within your Python code, you can write:

import my_module
my_module.my_method()

One thing to be aware of is that this level of control also gives you a huge level of responsibility. For example, you will need to start worrying about things like reference counts for objects. The garbage collector for the interpreter needs to know when an object is okay to delete. You need to increment the reference count each time something points to your newly created object. Every time a reference is removed, you need to decrement the counter. You can also create multiple sub-interpreters within a single program. You can create a new sub-interpreter with the function Py_NewInterpreter(). This way, you can have multiple Python threads running concurrently, and mostly independently. When you are done, you can shut them down with the function Py_EndInterpreter(). There is no limit to what you can do with all of this power.

Full code listing

/* A simple way to run Python code */
#include <Python.h>

int main(int argc, char *argv[])
{
    Py_SetProgramName(argv[0]);
    /* Initialize the Python interpreter */
    Py_Initialize();
    /* Run your Python code */
    PyRun_SimpleString("from time import time,ctime\n"
                       "print 'Today is', ctime(time())\n");
    /* Don't forget to clean up */
    Py_Finalize();
    return 0;
}

/* You can create an interactive Python console */
#include <Python.h>

int main(int argc, char *argv[])
{
    Py_Initialize();
    Py_Main(argc, argv);
    Py_Finalize();
}

/* You can even run a script file */
#include <Python.h>
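The reference counting described here is the same mechanism CPython exposes to pure Python through sys.getrefcount, which is a handy way to build intuition for when the C side must call Py_INCREF and Py_DECREF. A small sketch:

```python
import sys

obj = object()
baseline = sys.getrefcount(obj)

# Storing the object somewhere adds a reference,
# just as Py_INCREF would on the C side...
holder = [obj]
assert sys.getrefcount(obj) == baseline + 1

# ...and dropping that reference decrements the count again,
# which is what Py_DECREF does.
holder.pop()
assert sys.getrefcount(obj) == baseline
```

When the count falls to zero, the interpreter is free to destroy the object, which is exactly why a forgotten Py_DECREF in your C code leaks memory and a spurious one crashes the interpreter.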
Tutorial
Control lights with your Pi
The winter nights are getting longer; use Raspberry Pi and a mobile device to remotely control your lights
Dan Aldred
is a Raspberry Pi Certified Educator and a Lead School teacher for CAS. He recently led a winning team of the Astro Pi secondary school contest
The folks at Energenie have created some genius plug sockets that can be turned on and off via your Raspberry Pi. You can buy a starter kit which includes the RF transmitter add-on board and two sockets to get you started. The add-on board connects directly to the GPIO pins and is controlled with a Python library. Once everything is installed and set up, your Raspberry Pi can be used with the Pi-mote to control up to four Energenie sockets using a simple program. This tutorial covers how to set up the software, the sockets and how to adapt the program to run on your mobile device.
01
Set up
To get started, boot up your Raspberry Pi and load the LX Terminal, then update your software by typing:

sudo apt-get update
sudo apt-get upgrade

Depending on which version of the OS you're using, you may need to install the Python GPIO libraries. (Raspbian Jessie comes with this library pre-installed, so you can skip this step.) Type:

sudo apt-get install python-rpi.gpio

On completion, reboot your Pi. This will install the Python GPIO libraries, meaning you can access and control the pins with Python code.

Full code: FileSilo.co.uk

What you'll need
• Pi-mote IR control board with RC sockets: bit.ly/1MdpFOU
• Desk lamp
• Accessories

Above Take control of your home environment using your smartphone
Left The Pi-mote transmitter is so easy to use; it is powered by the Pi and uses a transmit-only open loop system

02
Install the Energenie library
Next, install the Energenie libraries. These enable the Pi-mote board and Raspberry Pi to interact with Python. In the LX Terminal, depending on which version of Python you are using, type either:
sudo apt-get install python-pip
sudo pip install energenie

…for an older version. In the future, Energenie will update its software and you may need to run a check for updates to ensure that you have the most recent version. To update the software, type:
sudo pip install energenie --upgrade
03
Fitting the Pi-mote
Before fitting the Pi-mote transmitter, shut down your Raspberry Pi with sudo poweroff. Unplug the power supply and fit the module onto your Raspberry Pi. The ‘L’ part of the board fits opposite the HDMI port. Power up the Pi and plug in one of your Energenie sockets in the same room or area that your Pi is in. The range is fairly good, but furniture and walls may sometimes block the transmission signal. You can test that the socket is working by plugging in something like a desk lamp and then pressing the green button that is located on the socket. This will trigger the socket on and off, turning the lamp on and off.
04
Download the set-up code
Before the Raspberry Pi can interact with the socket and switch it on/off, the socket requires programming to learn a control code that is sent from the transmitter. Each socket has its own unique code so that you can control up to four individually. Energenie provides the set-up program, which can be found inside your tutorial resources (available through FileSilo).

05
Set up your socket
Once you have downloaded the set-up program, run it. This should place the socket into 'learning mode', indicated by the LED on the front of the socket housing slowly flashing. If it is not doing this, press and hold the green button for at least five seconds and release it when the LED starts to flash at one-second intervals. Run the program and it will send a signal out. Follow the on-screen prompts, pressing the return key when required. When the code is accepted, success will be indicated by a brief flashing of the LED on the housing. If you have more than one socket to set up, simply use the same program and method for each one.
06
A quick test
Before you get to the task of creating your Python code to control your socket, it is always advisable to test that the socket is working correctly. Ensuring that the power is turned on at the wall plug and that the lamp is switched on, you can turn the lamp off by pressing the green button on the front of the Energenie socket. The lamp should turn back on again when the button is next pressed.
07
Code to turn the socket on
The Python Energenie library makes it incredibly easy to create a code to turn the socket on, which will then turn your lamp on. Before you know it, you will be using your Raspberry Pi to turn the kettle or the TV on or off! Open your Python editor, start a new program, import the Raspberry Pi GPIO library (line 1, below), then import the Energenie library (lines 2 and 3). Finally, add the code to switch the socket on (line 4). Save and run your program. The socket will turn on, you may hear a click, and then your lamp will come on.
import RPi.GPIO as GPIO import energenie from energenie import switch_on energenie.switch_on(1)
08
IP address
Switching the socket on and off
Since you have not told the socket to turn off, it will stay on, which means the lamp will stay on forever (or until the bulb blows)! To turn the socket off after five seconds, import the time function at the start of your program (line 2, below), add the command to turn off the socket (line 5). Then add a pause with the sleep command (line 7) and finally turn off the lamp (line 8). Now save and run the program.
import RPi.GPIO as GPIO import time import energenie from energenie import switch_on from energenie import switch_off
Every device on the Internet is assigned an Internet Protocol address (IP address). This is a numerical label which is used to locate and identify each device within a network which may contain many thousands of devices. Most home network IP addresses start with the numbers 192.168, with your router being on 192.168.1.1.
It is possible to augment this hack so that you can turn the lamp on and off from a mobile device such as your phone, laptop or tablet. This makes the whole project more impressive, slick and fun. The first step is to set up your Raspberry Pi as a web server which will host and display a web page with the ON / OFF option. These buttons are interactive and control the socket. Open the LX Terminal and install pip and Flask:
sudo apt-get install python-pip
sudo pip install flask
Tutorial
Above You’ll need to get the folder names correct so that files are saved properly
10
CSS and HTML
To make the web page look presentable, you need to set up an HTML and a CSS file. HTML stands for HyperText Markup Language and is the markup language used to create web pages. Your browser reads HTML files and converts them into web pages, enabling images and objects to be embedded into the pages. Cascading Style Sheets, or CSS, is the code which describes how the web page will look; the presentation of the HTML content. It contains the instructions on how the elements will be rendered on your device. In this tutorial, it controls how the on and off options will be presented and look on the screen.
11
Create a new folder

With Flask installed, reboot your Raspberry Pi: type sudo reboot. Create a new folder called Mobile_Lights in the /home/pi folder. This is where you will save the Python program which controls the socket and lamp, the CSS file and the HTML file. You can create the folder in the LX Terminal by typing mkdir Mobile_Lights, or by right-clicking in the window and selecting New Folder.

12
The HTML files
Open the Mobile_Lights folder and create a new folder called ‘templates’. This folder is where the HTML file is saved that contains the structure of the website layout. The code names the web page tab and, most importantly, adds the links for the on and off options. Open a text editor from your Start menu, or use nano, and create a new file. Add the HTML below to the file and then save the file into the templates folder, naming it ‘index.HTML’ – the name must exactly match the one passed to render_template() in the Python code later on:
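The HTML listing itself did not survive on this page, so here is a minimal stand-in consistent with the description above: a page title, the stylesheet pulled in from the static folder, and ON/OFF links matching the Flask routes. Treat the exact markup as an assumption, not the printed listing:

```html
<!-- Hypothetical reconstruction of templates/index.HTML – the class
     names and structure are assumptions based on the article text -->
<!DOCTYPE html>
<html>
<head>
  <title>Mobile Lights</title>
  <link rel="stylesheet" href="/static/style.css">
</head>
<body>
  <div><a class="on" href="/on/">ON</a></div>
  <div><a class="off" href="/off/">OFF</a></div>
</body>
</html>
```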
13
The CSS file

The Cascading Style Sheet, CSS, is used to create and apply a ‘button’-style effect to the web page. Move back to the Mobile_Lights folder and create a new folder named ‘static’. This is where the CSS file is saved. Create another new text file and add the code below, which sets out the ‘style’ for the web page. You can customise the colours of the buttons from line 20 onwards. Save the file as ‘style.css’ in the static folder. Keep in mind that this is a CSS file and needs to be saved with the file extension .css:
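Only a couple of fragments of the printed CSS listing survive elsewhere on the page, so the following static/style.css is a reconstruction built around those fragments (the 100%/50% sizing and the div a block); the selectors around them and the button colours are assumptions:

```css
/* Hypothetical reconstruction of static/style.css – only the div
   sizing and the "div a" rule survive from the printed listing */
html, body {
    height: 100%;
    margin: 0;
}
div {
    width: 100%;
    height: 50%;    /* each button fills half the page */
}
div a {
    width: 100%;
    height: 100%;
    display: block;
    text-align: center;
    font-size: 3em;
    text-decoration: none;
    color: white;
}
/* Assumed button colours – the article says these are customisable */
div a.on  { background-color: #27ae60; }
div a.off { background-color: #c0392b; }
```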
Flask is a powerful tool for creating interactive web pages and apps. If you are interested in learning more and trying out some other projects, this resource is a great place to start: http://flask.pocoo.org. Check out the site for examples of where Flask and Python are used for real-world applications and solutions: http://flask.pocoo.org/community/poweredby.
@app.route('/off/')
def off():
    switch_off()
    return render_template('index.HTML')

Above We recommend using the -I option rather than -i with the hostname command, as the latter only works if the host name can be resolved
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
15
Find your IP address
Before you start the web server running, you need to check the following:
• You have a folder called Mobile_Lights
• In the Mobile_Lights folder is a Python file named mobile_lights.py
• Also within the Mobile_Lights folder are two folders, one named templates which stores the index.HTML file and another folder named static which contains the file style.css

style.css (fragments):

    width: 100%;
    height: 50%;
}

div a {
    width: 100%;
    height: 100%;
    display: block;
}
If all checks out, in the LX Terminal type hostname -I. This will display the IP address of your Raspberry Pi – for example, 192.168.X.X. Make a note of it because this is the address you will enter into the web browser on your mobile device.
The final part of the setup is to write the Python script that combines the index.HTML and style.css files with Energenie socket control code similar to the one used in Step 7. Open IDLE, start a new window, add the following code and save it into your Mobile_Lights folder, naming it ‘mobile_lights.py’. Line 4 uses the route() decorator to tell Flask which HTML template to use to create the web page. Lines 7 and 11 use app.route('/on/') and app.route('/off/') to tell Flask which function to trigger when each URL is visited. In line 15 the run() function runs the local server with our application. The if __name__ == '__main__': guard makes sure the web server only runs if the script is executed directly from the Python interpreter and not when it is imported as a module.

from flask import Flask, render_template
from energenie import switch_on, switch_off

app = Flask(__name__)

@app.route('/')

You have arrived at the point where you are ready to start the web server. Move to the Mobile_Lights folder by typing cd Mobile_Lights. Now run the Python mobile_lights.py program by typing sudo python mobile_lights.py. This starts up the web server, which is then ready to respond to the buttons that are pressed on the web page.
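The listing for mobile_lights.py survives only in fragments on this page. Assembled, it might look like the sketch below; the index() route is a reconstruction (that part of the listing is missing), and the energenie stubs are an assumption added so the routes can be exercised on a machine without the Pi-mote attached:

```python
# Assembled sketch of mobile_lights.py from the printed fragments.
from flask import Flask, render_template

try:
    from energenie import switch_on, switch_off
except ImportError:
    # Fallback stubs for development away from the Raspberry Pi
    def switch_on():
        pass

    def switch_off():
        pass

app = Flask(__name__)

@app.route('/')
def index():
    # Reconstructed: serve the page carrying the ON/OFF links
    return render_template('index.HTML')

@app.route('/on/')
def on():
    switch_on()
    return render_template('index.HTML')

@app.route('/off/')
def off():
    switch_off()
    return render_template('index.HTML')
```

On the Pi, finish the file with the if __name__ == '__main__': app.run(debug=True, host='0.0.0.0') block from the listing before running it.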
Grab your mobile device, smartphone or tablet and load the web browser. In the address bar enter the IP address that you noted down in Step 15. At the end of the address, add ‘:5000’ – for example, 192.168.1.122:5000. The 5000 is the port number that is opened to enable the communication between your device and the Raspberry Pi. You will be presented with ON and OFF options, and you can now control the socket and whatever you have plugged in – kettle, radio, TV – all from your mobile device by simply pressing ON or OFF. Have fun!
Code a Tempest clone in FUZE BASIC Part 2
Remake a classic game in FUZE BASIC and delve into the world of programming

Luke Mulcahy
is the FUZE team’s in-house programmer. Just 15 years old, Luke is adept in many programming languages, though he does love to put FUZE BASIC through its paces
Welcome to part two of our FUZE BASIC tutorial. You will need FUZE BASIC installed for this; it can be downloaded for free for Linux and Raspberry Pi users from www.fuze.co.uk/getfuzebasic. Don’t worry; it’s possible to skip part one and just jump straight in here. However, because this is a fairly large program, you won’t be typing it in off the page. Instead you will need to download the program listing from FileSilo or here: www.fuze.co.uk/tutorials/73MP357PART2.fuze.
73MP357 is coded by Luke Mulcahy, FUZE’s resident coder and, in the nicest way possible, our very own human supercomputer. This issue he tasks his brain with developing the basic game structure further, adding all sorts of awesome trinkets to delight. These include simple scoring, power-ups, enemies and the all-important fire power. Gameplay is also improved with smooth movement. In short, all you need for a rudimentary – but definitely playable – game.
Start FUZE BASIC and either load (with F8) 73MP357PART2.fuze or copy and paste the code straight into the editor (F2 switches between the editor and immediate mode). If you followed the first part last month then you’ll be pleased to see things are coming along nicely. Run the program by pressing F3 or typing RUN in immediate mode.
The first thing you’ll notice is that we now have a score displayed. Tap the space bar and move left and right. We have fire power; hold on for a second longer and we should also have enemies and the all-important ‘Power Up’, which in this case awards us with a Particle Laser. We also have basic collision so enemies can be shot, but more importantly, if one touches you then you die too! Have a play then bring up the editor; press Esc to stop the program then F2. We’ll be brief on the sections we covered last month and offer more detail on the new additions.
Our initial section sets up a few environment variables. There are a number of variables used for screen resolution and frames per second (targetfps, minfps and maxfps). Luke has built in a very cool dynamic resolution feature that automatically adjusts the screen resolution to maintain a smooth frame rate. At the start of each game you can see the screen size adjust until it finds its optimum resolution to frame rate ratio. Once it reaches 60fps, a time delay is used to keep it there. We’ll come back to this later but throughout the code you will find functions using the frame rate to ensure everything happens in sync. The remaining variables are straightforward.
The star field was covered last month but in brief this creates three variable arrays to store the positions, speed and angles of up to 1,000 stars. The arrays are populated with random initial settings.
The good stuff. Again a few arrays are set up to store the positional and movement information for each of the active lasers. Notice also laserReload = INT (targetfps / 2) on line 53, the first use of a frame-synced action. In this case there is a counter so that the lasers can’t fire too quickly, but regardless of the frame rate they will reload at the same frequency.
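The idea generalises: hold durations as a number of frames derived from the target frame rate, so the real-time interval stays constant whatever the frame rate. A quick Python sketch of the arithmetic (illustrative only – the game itself is FUZE BASIC):

```python
def reload_frames(target_fps):
    # A reload delay of half the frame rate is half a second of frames
    return int(target_fps / 2)

# The delay in seconds is the same whatever frame rate is in use
for fps in (30, 60, 120):
    print(fps, reload_frames(fps) / fps)  # always 0.5
```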
There’s just no point in having great firepower if we can’t use it to exterminate a relentless supply of alien monsters hell-bent on taking over the Earth. You should be noticing something of a pattern by now. Again a few arrays are used to store enemy angles, distance (from the centre) and speed (enemyMove), however an additional variable (enemyHealth) has been added so that we can have different results from different weaponry.
Here we handle variables like the player angle, set score flags and define the power-up colour, fade and angle.
This is where the meat is. To start with we’re checking for key presses. We have it set to A and D (or the left and right
keys) to move the player left and right, and then either the space bar or return key for firing. The fire button routine is fairly complex as we have to check to see if we have a laser available because we are restricted by maxLasers. We then check to see if they have reached the centre (radius2). If we have particle lasers enabled, we swap sides as each one is added so we have the dual barrel machine gun effect (very nice indeed):
// Check if Right cursor or the D key is pressed
IF scanKeyboard (scanRight) OR scanKeyboard (scanD) THEN
  IF moveCount = 0 THEN
    oldPlayerAngle = playerAngle
    playerAngle = playerAngle + gap
    moveCount = moveDelay
    IF playerAngle > 360 THEN
      playerAngle = gap / 2
    ENDIF
  ENDIF
ENDIF

// Check if Left cursor or the A key is pressed
IF scanKeyboard (scanLeft) OR scanKeyboard (scanA) THEN
  IF moveCount = 0 THEN
    oldPlayerAngle = playerAngle
    playerAngle = playerAngle - gap
    moveCount = moveDelay
    IF playerAngle < 0 THEN
      playerAngle = 360 - (gap / 2)
    ENDIF
  ENDIF
ENDIF

// Check if the Space bar or Return key is pressed
IF scanKeyboard (scanSpace) OR scanKeyboard (scanReturn) THEN
  IF laserCount = 0 THEN
    IF numLasers < maxLasers THEN
      FOR i = 0 TO maxLasers CYCLE
        IF laserDist(i) = radius2 THEN
          laserDist(i) = SQRT (((centerX - playerX) * (centerX - playerX)) + ((centerY - playerY) * (centerY - playerY)))
          laserAngle(i) = playerAngle
          laserSide(i) = particleLaserSide
          IF particleLaserSide = 0 THEN
            particleLaserSide = 1
          ELSE
            particleLaserSide = 0
          ENDIF
          numLasers = numLasers + 1
          laserCount = laserReload
          BREAK
        ENDIF
      REPEAT
    ENDIF
  ENDIF
ENDIF
09
Release the power-up (lines 138 - 156)
We begin the game armed with a single-shot laser. We only release the power-up canister if we haven’t already powered-up. This function is checked with IF particleLaser = FALSE THEN, after which a new one is released and all of its variables initialised.
10
Move the power-up (lines 157 - 160)

Next it is moved outward towards the player with powerupDist = powerupDist + (radius / 80) and then checked to see if it has arrived at the outer edge (radius).

11
Get the power-up (lines 161 - 174)

Then, if so, is it in the same place as the player (playerAngle - gap / 2)? If it is then “Particle Laser!” is displayed and particleLaser = TRUE.

12
Adjust shot speed (lines 175 - 184)

This determines shot release frequency. Particle release (laserCount / 1.5) is significantly quicker than standard shot release (laserCount / 1.05).

13
Top-up enemies (lines 185 - 207)

If we have fewer than the maximum number of enemies on-screen (maxEnemies) and we are within the enemyCount boundaries, then this IF statement will introduce a new enemy.
// Check for enemies being present
IF tempTime > 3000 THEN
  IF enemies = TRUE THEN
    IF enemyCount > 0 THEN
      enemyCount = enemyCount - 1
    ELSE
      IF enemyCount = 0 THEN
        IF numEnemies < maxEnemies THEN
          FOR i = 0 TO maxEnemies CYCLE
            IF enemyDist(i) = radius THEN
              enemyDist(i) = radius2
              enemyAngle(i) = (RND (vertices - 1) * gap) + (gap / 2)
              enemyHealth(i) = 100
              numEnemies = numEnemies + 1
              enemyCount = enemyDelay
              BREAK
            ENDIF
          REPEAT
        ENDIF
      ENDIF
    ENDIF
  ENDIF
ENDIF
14
Plot the stars (lines 208 - 231)
The star field routine runs through the maxStars variable, increasing the distance from the centre (starsDist) until it travels completely off the screen. At that point they’re reset back to the middle with a random factor so they don’t all appear dead in the centre. They are drawn with a simple PLOT (starX, starY) command.
// Routine to plot the stars
COLOUR = White
WHILE starNum < maxStars CYCLE
  starX = starsDist(starNum) * COS (stars(starNum))
  starX = starsCenterX + starX
  starY = starsDist(starNum) * SIN (stars(starNum))
  starY = starsCenterY + starY
  starsDist(starNum) = starsDist(starNum) + starsSpeed(starNum)
  IF starX < 0 THEN
    starsDist(starNum) = RND (15) + 5
  ENDIF
  IF starX > gWidth THEN
    starsDist(starNum) = RND (15) + 5
  ENDIF
  IF starY < 0 THEN
    starsDist(starNum) = RND (15) + 5
  ENDIF
  IF starY > gHeight THEN
    starsDist(starNum) = RND (15) + 5
  ENDIF
  PLOT (starX, starY)
  starNum = starNum + 1
REPEAT
starNum = 0
15
Draw playing field (lines 232 - 251)
We explained this concept in detail last month and very little has changed. Basically, the playing field is drawn in segments around the circumference of a circle using just three LINE statements.
// Draw the playing field
COLOUR = Blue
WHILE angle < 360 CYCLE
  x = radius * COS (angle)
  x = centerX + x
  y = radius * SIN (angle)
  y = centerY + y
  x2 = radius2 * COS (angle)
  x2 = centerX2 + x2
  y2 = radius2 * SIN (angle)
  y2 = centerY2 + y2
  LINE (x, y, x2, y2)
  LINE (x, y, oldX, oldY)
  LINE (x2, y2, oldX2, oldY2)
  oldX = x
  oldY = y
  oldX2 = x2
  oldY2 = y2
  angle = angle + gap
REPEAT
16
Calculate and draw player (lines 252 - 267)
The player position is becoming more complex. The first stage introduces a new function: DEF FN lerp(a,b,c). This is going to be used a lot from now on, so:
DEF FN lerp(a, b, c)
  result = a + c * (b - a)
= result

This takes two numbers, A and B, and interpolates between them, with C used as the fractional step – a distance, an angle, or even a colour step. This enables us to work out a point in between two positions on the screen and take equal steps between them. It is then used to ensure smooth movement is made for any object anywhere on the screen, regardless of its size or location. Very clever indeed!
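The same interpolation is easy to express in Python if you want to experiment with it outside FUZE BASIC (a quick illustrative sketch, not part of the game listing):

```python
def lerp(a, b, c):
    # c is the fractional step between a and b (0 gives a, 1 gives b)
    return a + c * (b - a)

# Stepping a quarter of the way from angle 0 to angle 90
print(lerp(0, 90, 0.25))  # 22.5
```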
// Calculate the player position
IF playerAngle - oldPlayerAngle < ((0 - 360) + gap) + 1 THEN
  tmpPlayerAngle = FN lerp(playerAngle + 360, oldPlayerAngle, moveCount / moveDelay)
Again covered in detail last month, this section shifts the playing field perspective depending on which corner of the screen the player is positioned.
The IF powerupDist < radius THEN line makes sure the power-up is still in play – it has not yet reached the outer radius. Notice the use of the FN lerp function again, but this time to calculate a smooth gradient between two colours. See, we told you this would be a useful thing to learn! The colour fades evenly between randomly chosen colours each cycle. The power-up’s distance from the centre is updated and it is then drawn using a simple CIRCLE statement: CIRCLE (powerupX, powerupY, u, TRUE).
19
Calculate and draw lasers (lines 313 - 354)

While the actual movement of the lasers remains the same – apart from speed, that is – we need to calculate the laser positions depending on whether we are in single shot or particle mode. If in particle mode then both sides need to be calculated. Finally, the lasers are drawn with a polyPlot function.

20
Check for laser hits (lines 355 - 389)

Each laser is checked to see if it enters the same airspace as an enemy and if so, the enemy is dealt with accordingly. At this stage all lasers have the same power so a single hit reduces their health by 100. Next month we’ll be taking you through how to update this so that if an enemy is hit, it will be removed and the player’s score will then increase by 10. Once a laser reaches the inner circle (radius2) then it is removed from the playing field.

21
Calculate and draw enemies (lines 390 - 415)

You should be getting familiar with the process now: cycle through the number of items, in this case enemies (maxEnemies), check to see if they’ve reached the outer rim, and if not then use FN lerp to calculate a smooth movement step and apply it. enemyDist is the distance from the inner circle (radius2) and enemyAngle is the direction it is heading. We use COS and SIN to work out the position and then a polyPlot function to draw the enemy.
22
Check for enemy hits (lines 416 - 442)

Next we test the enemy position against the player position and if they are the same, “Game Over” is displayed and the game ends… for now. Finally, we check the current angle of the enemy and make sure it is heading towards the player. Also the angle is tested and reset if it goes around the clock.

23
Calculate and draw player (lines 443 - 457)

This is rather simple now that everything else has been done. COS and SIN with u (the outer distance) are again used to determine the new angle and we finish off with a sequence of polyPlot commands to draw the player.

24
Display messages (lines 458 - 478)

The next block displays the score and any messages that might be in play, like “Particle Laser!” (more next month).

25
Check frame rate & recalibrate (lines 479 - 584)

This next section is huge but actually very straightforward. First off, check to see if we are below minfps and if so, recalibrate everything accordingly. All the key measures are reset for the new resolution so the inner and outer radii are scaled to match the new size and so on. The opposite happens if we are over maxfps, in that the resolution increases – if we go over maxxres then we keep it there; in this case the maximum resolution was set at the beginning at 1920 x 1080.

26
Main positional variables (lines 585 - 663)

Another long chunk but again very simple. This last but one block initialises the positions and values at the start of each game – this will become more important when we introduce level progression. The final block is the DEF FN lerp(a,b,c) function that we referred to earlier.

27
TBC…

And that’s it for now! At this stage you have the basic shell of the game. Next month we will tidy everything up, introduce progressive scoring and levels, add awesome sounds, develop the difficulty settings, include a start-up screen and any other finishing touches. See you next month! To find out more about FUZE BASIC and the FUZE in general, please visit www.fuze.co.uk
Competition
WIN! FUZE Special Edition
Closing date for entries
10 February 2016

With decidedly retro roots, the award-winning FUZE is a programmable computer and electronics workstation. Born out of a passion for programming and love of electronics, the solid aluminium case features an integrated keyboard, a specially designed GPIO header and a breadboard for simple electronics. Housing the Raspberry Pi safely below the keyboard, the FUZE comes complete with FUZE BASIC already installed. FUZE BASIC is an advanced, modernised version of the BASIC programming language, widely accepted as the easiest beginner language to teach and learn. Featuring a redesigned interface and advanced graphics
support, including sprite and image scaling, angle and alpha controls and rotation, FUZE BASIC is fully configured to run with all models of the Raspberry Pi. The whole FUZE system is slick and intuitive, and more than capable of programming web and mobile games. The FUZE Special Edition pays tribute to home computers of the Eighties like the BBC Micro, and comes complete with a robotic arm kit to integrate with your projects. The FUZE Special Edition also provides you with a projects pack and directs you to plenty of online resources at the FUZE Lair, giving you everything you need to get started. For more information, head over to www.fuze.co.uk.
Enter the competition For a chance to win a FUZE T2-SE-R – a Special Edition with the robotic arm kit – just go to the link below. Answer the simple question provided on that webpage and then add your contact details to enter the competition. The three winners will then be announced in the New Year – good luck!
Answer the question on this webpage for a chance to win!
www.linuxuser.co.uk/news/win-a-fuze-se TERMS & CONDITIONS This competition is open to residents of the United Kingdom and Ireland. Imagine Publishing has the right to substitute the prize for a similar item of equal or higher value. Employees of Imagine Publishing (including freelancers), their relatives or any agents are not eligible to enter. The editor’s decision is final and no correspondence will be entered into. Prizes cannot be exchanged for cash. Full terms and conditions are available upon request. By entering the competition you give consent for Imagine Publishing to send you monthly email newsletters and occasional special offers. You can unsubscribe from this at any time by clicking on the unsubscribe link of any email received.
Special offer for readers in North America
6 issues FREE when you subscribe
FREE resource downloads in every issue
The open source authority for professionals and developers
Order hotline +44 (0)1795 418661 Online at www.imaginesubs.co.uk/lud *Terms and conditions This is a US subscription offer. You will actually be charged £80 sterling for an annual subscription. This is equivalent to $120 at the time of writing – exchange rate may vary. 6 free issues refers to the USA newsstand price of $16.99 for 13 issues being $220.87, compared with $120 for a subscription. Your subscription starts from the next available issue and will run for 13 issues. This offer expires 29 February 2016.
Quote USA for this exclusive offer!
SCREEN TIME
Raspberry Pi 7” Touchscreen Display
WWW.NEWIT.CO.UK
84 Group test | 88 NeuG | 90 EduKit 3 | 91 UPS PIco | 92 Free software
GROUP TEST
Chromium browsers

There are projects that have taken Chromium code to roll out their own web browsers. But can anyone beat Google’s flagship Chrome browser?
Google Chrome
Reported to be the most widely used browser on Earth, but under the hood it is just the same Chromium browser with extras. For instance, Chrome includes licensed codecs for proprietary media formats, giving access to a wider variety of media content, including AAC- and MP3-encoded audio.
Download: bit.ly/1BxukGG

Chromium
Chromium is the only open source browser in our test, and it is largely the same as Chrome minus the licensed extras. However, it is still interesting to look closer at the existing feature set in Chromium and pit the browser’s performance against the other competitors.
Download: bit.ly/1MVhy6k

Opera
Opera is a respected browser for Linux with a long history. It used to implement its own Presto rendering engine, but since 2013 Opera has dropped that in favour of Blink, a rendering engine forked from WebKit components by Google. Opera released its new browser for Linux in summer 2014, with significant delay.
Download: opr.as/1fVGz2O

Vivaldi
Vivaldi emerged as an initiative to bring back the super-rich feature set of the old Opera 12.xx to a new level, again taking the Chromium code as a basis. Vivaldi directly competes with Opera in an attempt to win back users who were disgruntled by Opera’s transition from Presto to the Blink rendering engine.
Download: bit.ly/1CgqDG3
Review
Google Chrome
The king of the hill – but we’re still not sure it’s actually the best…
• It’s more than just a browser, thanks to web apps that many people already can’t live without

Extensions
There are no openly available current Google Web Store stats, but it used to contain over 50,000 extensions and that number has definitely grown. The amount of extensions is huge – in fact, there is an endless line of productivity extensions, like helpers for filling in web forms, to-do lists, integration tools for various cloud services and much more.

User experience
There’s really nothing to complain about, other than it lacks character. The default home page for newbies would be Google’s sign-in prompt. Chrome is great for syncing content among all your browser instances, which it does instantly and silently, delivering a positive experience.

Multimedia support
Google Chrome is a proprietary product and is therefore bundled with commercial codecs, like AAC and MP3, and also some other goodies, like a built-in PDF viewer. So it turns out that Chrome is the most complete browser out there, making its users unaware that their trouble-free web experience may not be guaranteed with other browsers.

Extra features
Extras are commonly added via Google Web Store, which also sells web apps marketed as a replacement to real desktop apps. Of course this is very questionable, but still you can turn your web browser into a full-featured copy of Chrome OS, with office apps, various players, menus and other cool stuff, like Remote Desktop Viewer, IM clients and more.

Overall
It’s hard to overpraise Google Chrome, but an average user will find it difficult to imagine a feature that is not yet present in the browser or its web store. Chrome also boasts the best multimedia support among browsers.
Score: 9

Chromium
Chromium is one of the most advanced OSS browsers
• Visually, Chromium is nearly identical to Chrome and has a comparable set of features

Extensions
In most cases Chromium can use the same extensions as Chrome, but while Chromium works correctly with extensions in the store, it often fails in more complex situations. Things like Remote Desktop will install correctly, but refuse to work. Anyway, these are only rare exceptions in a splendid scene of extensions support in Chromium.

User experience
Again, let’s point out the differences from Chrome. First, Chromium doesn’t annoy you with Google’s sign-in page but you can always log in using a small link, which is good. Second, Chromium can use the modern Pepper Flash plug-in, which is extracted from Google Chrome and redistributed as a standalone package in many Linux distros.

Multimedia support
The current state of media formats support in Chromium depends on compile options. By default, Chromium is missing AAC, MP3, MP4 and H.264. This is severe, because H.264 is widely used on YouTube in HTML5 videos. The best workaround would be to install the chromium-codecs-ffmpeg-extra package.

Extra features
Chromium can easily go on par with Chrome in terms of extra goodies, even though there are some problems. A good example is the ARChon framework, which enables Chrome/Chromium to run Android apps. The feature works flawlessly in Chrome but is rejected in Chromium.

Overall
Chromium wants you to take extra actions to bring back the latest Flash plug-in, codecs support, PDF viewer and the other missing features. Still, it should not be considered as 100 per cent Chrome compatible.
Score: 6
Opera
It has a refreshed brand but is there a reason to make Opera your browser?
• Boasting its own add-ons store and access to Chrome’s extensions as well, Opera is very customisable

Extensions
Opera has its own extensions catalogue with over 1,500 titles, but it also has the cool ‘Download Chrome Extension’, which opens the world of the Chrome Web Store, even though it is limited. If an extension depends on something Google-specific, it may not work with Opera, but this Norwegian browser team has done a good job of boosting the number of extensions.

User experience
Opera officially only ships a DEB package for 64-bit Ubuntu. If your system is not Debian or any Ubuntu clone, you’ll have to split up the package and repack it. There are certain third-party packages for other distros, but it’s obvious that no one tested Opera on them. Due to limited availability of Opera for Linux, your experience may vary depending on your distro.

Multimedia support
Opera resembles Chromium in terms of supported formats and codecs, but the Opera package already includes a local FFMPEG copy, which fixes quite a lot of media-related issues in Linux. Opera also ships with PDF Viewer and can successfully use plug-ins that are part of Chrome, particularly the latest version of Flash.

Extra features
Opera developers created their own sync service, not dependent on anything from Google but nearly equally good. Opera also has some unique features like Opera Turbo (for slow Internet connections), mouse gestures, a password manager, animated themes and more.

Vivaldi
The browser from ex-Opera CEO Jon von Tetzchner is getting much better…
• Vivaldi adapts its interface colour to the average tone of the current web page and features a nice and trendy flat UI

Extensions
Vivaldi is a new web browser that arrived in early 2015, so it’s understandable why so many things are still missing. There is no official extensions catalogue, but it is possible to use Chrome extensions. To do so, enable the Developer Mode in vivaldi://chrome/extensions and manually load an unpacked extension (get it using http://chrome-extension-downloader.com). This is an experimental feature, so take care.

User experience
Vivaldi delivers a good experience thanks to its design. The browser inherits the spirit of the old Opera, but there is no sync feature, no mail and no extensions. However, the Vivaldi team focuses on usability tweaks, eg the ability to stack tabs, annotate web pages, and add notes to bookmarks.

Multimedia support
The browser ships with all the essential support for codecs and formats, including HTML5 with H.264, proper recognition of Pepper Flash, MP3, AAC and MP4 support – nearly the same set as the one found in Chrome. These features appeared recently; before that, Vivaldi lagged behind Chrome in that aspect.

Extra features
The rise of interest in Vivaldi is largely explained by the positive media hype. Vivaldi is more about community and social life than about extra features. Unfortunately, the browser doesn’t yet have any extras other than promises from devs.
In brief: compare and contrast our verdicts Google Chrome
Extensions
Google’s Web Store is the largest and the greatest catalogue of browser extensions
User experience
Nothing really fresh or special to speak of, but everything works quick and reliably
Multimedia support
Chrome is already bundled with everything you’ll need for the modern web
Extra features
The power of web apps, such as Google Docs, is at your service with Chrome
Overall
We’ve got almost nothing to complain about here – it’s a fantastic browser
Chromium
9
Most Google Chrome extensions will work, but there are some minor exceptions
7
Get ready to do some post-installation tweaks if you want to cook it just right
9
Due to license restrictions, Chromium has limited support for online media
9
Shares many extra features with Chrome, but again some things don’t work
9
Feels similar to Chrome, but is limited in enough ways to make it inferior
Opera
7
Opera has its own official catalogue, plus tons of compatible Chrome extensions
6
Sadly, Opera only works well inside Ubuntu and its various flavours, but it shows promise
6
Opera is another browser that ships with all the codecs, which is very good
6
Has Opera Turbo, its own syncing service and lots of other good stuff thrown into the mix
6
A strong rival of Chrome with its own power points. Definitely one to keep an eye on
Vivaldi
4
There's no store, but some manually converted Chrome extensions might work
8
Vivaldi has a smart and polished UI, including many usability improvements
9
Multimedia support was recently fixed and Vivaldi is now on par with Chrome
3
We will have to wait until Vivaldi matures a bit more to get hold of some extra features
5
Too young to compete with the big guys at this stage, Vivaldi still shows some promise
AND THE WINNER IS… Google Chrome
It's not just a browser, but a mini-version of the full-featured Chrome OS
You may have noticed that we skipped a dedicated performance test, so you may be left wondering how the browsers feel in real-world tasks. It might seem that including Firefox or another non-Chromium browser would have made for a better group test, but as it turns out, three of our four contenders are based on Chromium 46.0.2490.80, with the Vivaldi beta a little older at 45.0.2454.99. We ran several tests, both synthetic and real-world, including Octane 2.0 (for JavaScript) and Speedometer 1.0 (for web apps and HTML5). All four browsers performed similarly, with differences as small as 1-3 per cent: whichever browser you choose, it will feel snappy and robust. The undisputed winner of this group test is Google Chrome, which is stable and armed with the best support for media codecs and formats, be it DRM-protected content, a streaming radio channel or an H.264-encoded video. Opera was thoroughly tested only as a .amd64.deb package, meaning that the repackaged versions for other Linux flavours will most likely cause issues. On the other hand, there's nothing more to complain about, and Opera can save you money on pre-paid web traffic thanks to its gorgeous Turbo technology. Chromium takes third place, mostly because it takes some effort to set things up correctly and there is no actual guarantee that all the goodies in the Google Web Store will behave themselves and work.
The Vivaldi browser is a very promising project, which has already showcased an astonishing speed of development and lots of fixed UI paper cuts. Still, it is deservedly marked as ‘beta’ because many features are yet to arrive, including major things like cloud sync. Alexander Tolstoy
NeuG
Review
Below The NeuG gathers the sampling noise of analogue-to-digital converters and uses it as the base for its entropy creation
Can adding a tiny, ARM-based 32-bit computer to your Linux box really be all you need to improve cryptographic security? Entropy – the contents, basically, of /dev/random – isn't something to which most Linux users give a second thought, but it keeps server administrators and cryptographers awake at night. A system starved of entropy or, worse, filled with poor-quality entropy, can suffer everything from performance issues to security holes – and it's a problem that becomes much larger when you get into the topic of virtualisation. When working with entropy, users will be familiar with the issue of starvation: attempting to copy 1MB of data, say, from /dev/random will rapidly grind to a halt; it's for this reason that the non-blocking /dev/urandom is the default source for most programs' entropy. One way of dealing with the starvation of /dev/random and how slow it can be to refill, especially on headless systems, is the use of a hardware random number generator. These are typically expensive devices, but the Free Software Foundation has
launched a budget model based on the Flying Stone Tiny (FST) microcomputer: the NeuG. The brainchild of Niibe Yutaka, NeuG is free software designed to run on top of the FST-01's ARM Cortex-M3 processor. When connected to a host system via USB, the NeuG-enabled FST-01 appears as a serial port; connect to the port and you'll find a flood of random characters filling your console session. Yutaka's implementation of a supposedly true random number generator (TRNG) is simple enough: readings from analogue sensors connected to the STM32F103 processor are taken, paired, passed through a CRC-based scrambling system, then conditioned using a hashing algorithm before being output over the device's built-in USB serial port. This stream of entropy can then be used however you see fit. Installation is simple enough: plug the device into your system's USB port, and a piece of firmware dubbed
Below Eject the device, but leave it plugged in, and the NeuG firmware will start up
Inset With its simple USB key design, the NeuG is portable enough to keep on you
Fraucheky turns it into a removable storage device containing a handy readme. Glancing through this reveals the design, usage and principle behind NeuG, but it's missing one vital piece of information, so the next step of the process is to eject the removable drive. When ejected, the FST-01 switches to the NeuG firmware. The drive disappears and is replaced with a serial device – /dev/ttyACM0. This needs to be tweaked with stty before use, following the instructions from the readme: the port needs setting to raw mode with echo disabled as a minimum, and it's also possible to switch between three operation modes. These have a distinct effect on the NeuG's operation. In its default mode, the NeuG is able to output entropy at a rate of around 81KB/s through an SHA-256 conditioning algorithm; switching to an alternative CRC-32 algorithm may weaken the quality of the entropy somewhat, but boosts throughput to
around 288KB/s. The final mode outputs the raw data from the sensors, with no attempt to ensure that it is in any way random. In either of its random-number-generation modes, the NeuG is more than capable of shoving entropy into /dev/random at a rate far higher than the operating system’s own entropy-gathering activities. Installing rng-tools onto the system and pointing the rngd daemon at the NeuG’s serial port sees the available entropy shoot up the instant it’s loaded. For servers, that’s great news – and for virtualised servers, where access to traditional entropy sources may not be available, it can potentially spell the difference between a secure system and one generating insecure keys. It also serves as a handy alternative to closed-source hardware RNGs built into modern processors, like Intel’s Ivy Bridge and newer. Gareth Halfacree
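The pair/scramble/condition pipeline described above can be sketched in Python. This is a minimal illustration rather than NeuG's actual implementation: the 16-bit sample packing, little-endian layout and 16-word block size are assumptions made for the demo, and NeuG's real CRC scrambler differs from zlib's CRC-32.

```python
import hashlib
import zlib

def condition(samples, block=16):
    """NeuG-style sketch: pair raw ADC readings, scramble each pair
    with CRC-32, then condition blocks of scrambled words with SHA-256.
    Packing, endianness and block size are illustrative assumptions."""
    # Pair consecutive readings into 32-bit words and scramble with CRC-32
    scrambled = []
    for a, b in zip(samples[::2], samples[1::2]):
        word = ((a & 0xFFFF) << 16) | (b & 0xFFFF)
        scrambled.append(zlib.crc32(word.to_bytes(4, "little")))
    # Hash each full block of scrambled words down to 32 output bytes
    out = bytearray()
    for i in range(0, len(scrambled) - block + 1, block):
        chunk = b"".join(w.to_bytes(4, "little") for w in scrambled[i:i + block])
        out += hashlib.sha256(chunk).digest()
    return bytes(out)

# 64 fake 12-bit ADC readings -> 32 scrambled words -> two SHA-256 blocks
readings = [(i * 2654435761) & 0xFFF for i in range(64)]
print(len(condition(readings)))  # 64 bytes of conditioned output
```

Conditioning like this is why biased or correlated raw samples can still yield uniform-looking output – and why the third, raw mode exists purely so the noise source itself can be audited.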
Pros Easy to install, compact, and capable of filling a system’s entropy pool as fast as you can empty it
Cons Modern processors often have random number generators; not yet independently audited for security
Summary It’s hard to judge the quality of a stream of supposedly random data, but the NeuG flew through the usual barrage of tests including ent and a visualisation check. It’s also easy to use and affordable.
Pi accessories
Review
COMPONENTS LIST
Motor controller board
2x DC motors
2x wheels
Ball castor
Mini breadboard
Double-sided tape
AA battery holder
Ultrasonic sensor
Line follower sensor
Resistors
Jumper cables
ROBOTICS
CamJam EduKit 3
Transform a Raspberry Pi into a robot with this straightforward, easy-to-assemble kit
It might not yet be Judgement Day, but the machines are rising – even the Raspberry Pi! The CamJam EduKit 3 enables you to install your Pi as the controller of a compact robot, which is ideal for hobbyists, kids and even Raspberry Pi newbies. The lightweight box contains everything you need to get started, apart from the Raspberry Pi itself. The two motors are a little bulky, but everything else is lightweight – just as you'd expect once you check the online project sheets and see that the suggested chassis for your robot is the CamJam EduKit 3 box lid! Adhesive strips are also provided for you to secure the motors and the battery case, along with the obligatory breadboard and cables. By progressing through the project sheets, you'll move from having a bunch of components on your desk to building a robot capable of following a line printed on a sheet of paper. However, don't assume that this is a simple Kano-style snap-everything-together-and-it-works project; there's a little more to it than that.
The core component is a motor controller that sits across the GPIO, utilising GPIO pins 9 and 10 to initiate movement. However, connecting the wires can be frustrating; while the wires from the motors have been coated with solder to give them the necessary bulk to sit snugly in the connectors, those from the battery case have not. We found these regularly slipped out without tinning, occasionally when the robot was in motion. Another limit is the power source. Without customisation, motion is limited by the length of your USB cable. Happily, the motors seem to have enough power to carry a portable power supply, whether a lightweight phone recharger or a custom-built solution combining six AA batteries and a UBEC (see issue 154). But these are niggles; CamJam's robotics kit is a must-have at under £20! Christian Cawley Price £17 (camjam.me/edukit)
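The project sheets have you drive those pins from Python. The sketch below is a rough, hypothetical illustration of the idea using the RPi.GPIO API with BCM pins 9 and 10 as mentioned above – a real EduKit robot drives two controller inputs per motor for forward and reverse, and the stub class is only there so the sketch runs on a non-Pi machine.

```python
import time

try:
    import RPi.GPIO as GPIO          # real library when running on the Pi
except ImportError:
    class _StubGPIO:                 # no-op stand-in so the sketch runs anywhere
        BCM = OUT = HIGH = LOW = None
        def setmode(self, *args): pass
        def setup(self, *args): pass
        def output(self, *args): pass
        def cleanup(self): pass
    GPIO = _StubGPIO()

LEFT, RIGHT = 9, 10                  # BCM pins wired to the motor controller

GPIO.setmode(GPIO.BCM)
GPIO.setup(LEFT, GPIO.OUT)
GPIO.setup(RIGHT, GPIO.OUT)

def forward(seconds):
    """Switch both motors on, wait, then stop."""
    GPIO.output(LEFT, GPIO.HIGH)
    GPIO.output(RIGHT, GPIO.HIGH)
    time.sleep(seconds)
    GPIO.output(LEFT, GPIO.LOW)
    GPIO.output(RIGHT, GPIO.LOW)

forward(0.5)                         # a half-second burst of movement
GPIO.cleanup()
print("drive complete")
```

On the Pi itself the same calls toggle the controller's enable lines; software PWM (GPIO.PWM) is the usual next step if you want speed control rather than simple on/off driving.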
Summary The combination of easy-to-follow instructions and the ability to turn any small box or lightweight building toy into a Pi-controlled robot means that the CamJam EduKit 3 robot is absolutely unmissable for anyone – adult or child – who has an interest in robotics or extending the possibilities of their Raspberry Pi. And it is affordable!
UPS PIco
Power outages will no longer corrupt your SD card thanks to this useful UPS HAT
It happened again – your Raspberry Pi's SD card was corrupted and your project ruined, thanks to a power failure. If you didn't make a backup, a new Raspbian image is likely needed, with the project needing to be reinstalled and reconfigured. But with a UPS, none of that would have happened. ModMyPi has released a compact UPS PIco HAT, which includes I2C control and comes with optional extras, such as a CPU fan and large-capacity battery. Unboxed, the device comes in two parts: the HAT and the rechargeable battery. This will need to be connected to the small white connector on the underside of the UPS PIco before the HAT is mounted on your powered-down Raspberry Pi's GPIO header. A small adhesive strip on the battery will enable you to secure it in place. The next stage is to download two Python scripts to the home directory and edit /etc/rc.local to refer to one of them. Set-up isn't for Pi amateurs! Although no guides are included in the box (apart from brief steps printed on the label of the anti-static bag holding the
HAT), instructions are available to download from the ModMyPi listing page. Once connected and your Pi is booted, red, green and blue LEDs will flash as the UPS initialises, with a flashing green LED indicating that the device is ready. A range of buttons are mounted on the board. The RPiR button will reset the Pi, but only if a gold reset pin is fitted to the board. Alongside this, the UPSR button will reset the UPS itself, while the FSSD button will initiate a File Safe Shutdown. Two additional buttons are provided for user applications. In the end, you have a device that is powered, and its battery charge maintained, via the GPIO connection to the Pi. When a power failure occurs, the 300 mAh battery kicks in. Opting for a larger battery? The 3000 mAh cell gives up to eight hours. Christian Cawley
PIco Stack (unassembled)
Summary Although it is tricky to set up for a beginner or anyone who may be unfamiliar with Python, the UPS PIco HATs should be of interest to anyone looking for a way to avoid data corruption during an unordered shutdown. A nice bonus to the UPS PIco is the various configurations available thanks to the add-ons, but be warned that soldering may be required.
Price £22 (bit.ly/1QTVXkK)
Free software
Review
NOTE TAKING
CherryTree 0.35.11
Sort your life out in the new year with this excellent app
Giuseppe Penone's CherryTree is not just a way of taking notes, but a way of organising all of your digital 'stuff'. Right from the start, it tries to capture your thoughts. When first opened, it asks not for a name to give the document, but just the node – that is, the branch of thought or project that you are trying to capture. All nodes are saved in a single document (SQLite or XML), avoiding the directory sprawl of some other apps. The manual is a native CherryTree document, and is thus a good example of building a CherryTree document, as well as being a helpful and well-written introduction. CherryTree doesn't get in your way; it's easy to start jotting a few thoughts, create a few subnodes, then build links, headings and other features later. The powerful search facilities will rescue you from any forgetfulness in labelling. It's a Python app, so downloading the RPM or deb file is a painless option; this release sees improvements in HTML export and import, and various bug fixes. If your New Year's resolution is to be more organised, you really should give CherryTree a try.
Above Get your thoughts in order with the wonder that is CherryTree
Pros Fast, lightweight, and great search tools. It will ensure you can make notes easily and then be able to find them
Cons There are no real cons as such, aside from the fact that maybe you think you don't have time to learn to use another app
Great for... Organising everything – and finding it again! giuspen.com/cherrytree
LANGUAGE
GNU Guile 2.1.1
Program and extend desktop, web and command line apps
If you've worked your way through the demanding MIT Introduction to Programming, 6.001, and the accompanying tome Structure And Interpretation Of Computer Programs (the 'Wizard book'), you may now want to take Scheme into the real world. Two versions of Scheme designed for extensibility let you do that: Racket (formerly PLT Scheme), which runs the Hacker News website news.ycombinator.com, and GNU Guile. Guile is short for the GNU Ubiquitous Intelligent Language for Extensions, and is used in projects that may already be on your PC. Guile enables the programmer to implement new data types and
subroutines in C, extending Guile with these new primitives and building on the base code at the higher level of abstraction offered by Lisp-family languages. Guile 2.1.1 is the first step on the road to the 2.2 branch, which aims to run much faster and in less memory. Guile's emphasis on interactive and incremental programming should also see you becoming more productive. Other improvements include better thread safety, better locale support, a fully Emacs-compatible Elisp implementation, plus optimised hash functions and a generic array facility. You can keep Guile 2.0 alongside 2.1/2.2 – follow the short guide to parallel installation in the excellent manual on the Guile website.
Pros Guile 2.1.1 is extensible, it is simple to get to grips with and it is continuing to get faster
Cons Lisp syntax, which is based on symbolic expressions with an unseemly number of parentheses
Great for... Extending the language to do what you need gnu.org/software/guile
PARTITION EDITOR
GParted 0.24.0
A friendlier interface for creating, resizing and modifying disk partitions
The excitement of the arrival of a new computer or a bigger disk drive may be tempered by caution as you approach disk partitioning. And not just partitioning a disk with important data, but partitioning a new disk – you might well worry about stomping all over one of the others in the PC by mistake. If you're installing something minimal, which relies on fdisk for any manual partitioning, then GParted, with its friendly graphical interface, can be run on the disk before install, and is much clearer and easier to understand. GParted is a front-end to GNU Parted, and uses libparted in detecting and managing devices and partition tables. It also allows extra file systems, plugging
in various tools for ReiserFS, JFS, etc. GParted 0.24.0 adds ZFS file system detection and better information on logical volumes, as well as more robustness in handling invalid partitions and problem disk partition labels. The project website contains links to many tutorials on using GParted in different circumstances. Contemporaneous with GParted comes GParted Live 0.24.0. This is a small, bootable GNU/Linux distribution (for Intel-compatible hardware) usable from CD or USB disk – or even from hard disk or over the wire via PXE boot, should the first two options not be available on your PC. GParted Live is based upon recent Debian Sid, and carries other useful tools beyond GParted, including ddrescue and efibootmgr.
Pros If the thought of partitioning a disk brings you out in cold sweats, this gives a clear view of the process
Cons It won’t work with everything – for example, non-x86 architecture such as the Raspberry Pi
Great for... Removing some of the fear from disk partitioning gparted.org
GAMING
SuperTuxKart 0.9.1
Spot characters based on Free Software mascots
Over a decade ago, SuperTuxKart (STK) rose out of the ashes of 3D kart racing game TuxKart. Since then it has grown more polished and realistic, but stayed in the spirit of Mario Kart, in that fun is more important than realism. With improvements to existing tracks, and new ones available as add-ons, STK has plenty to offer. Thanks to its (modified) Irrlicht Engine, earlier versions of STK ran well on low-horsepower hardware, like first-generation netbooks. Since 0.9.0 it has needed a bit more oomph, but we tested it on a seven-year-old laptop and despite a little choppiness, it was playable. If you want the best experience on older hardware, STK 0.8.1 is still in the repository of Linux Mint Debian Edition, and many other distros, so everyone can join the fun. The latest release adds more tracks, but most of the work is hidden away in the game engine, in improved kart stability, font rendering and audio, as well as allowing track designers to use AngelScript. Playability of challenges has been improved, and work continues on developments in networking, which should bear fruit in the next release, when you'll be able to play against STKers across the LAN and, eventually, the Internet.
Above Immerse yourself in this fun and addictive game
Pros Quick and easy to get started, with plenty of options to ensure gameplay is enjoyable and that you want to come back again
Cons Although you could probably get away with using it on old, slow hardware, it is much better suited to new kit
Great for... When your brain is full of code and you just want some fun supertuxkart.sf.net
LOG IN TO WWW.FILESILO.CO.UK/LINUXUSER AND DOWNLOAD THE LATEST DISTROS AND FREE SOFTWARE TODAY
LATEST DISTROS
YOUR BONUS RESOURCES ON FILESILO
THIS ISSUE, FREE AND EXCLUSIVE FOR LINUX USER & DEVELOPER READERS, YOU'LL FIND THESE GREAT RESOURCES…
» Four latest distros, including the first release candidate of Solus 1.0, a completely new distro and desktop.
Fedora 23
KNOPPIX 7.6
» 20 excellent FOSS packages, including everything in our reviews section plus the tutorial tools. » Watch The Linux Foundation, Red Hat and Raspberry Pi video guides, tutorials and webinars. » Code and assets for this issue’s tutorials, including everything you need for the Raspberry Pi feature.
Solus OS 1.0 RC1
TOP LINUX FOSS
ClearOS 7.1
LENGTH OF VIDEO TUTORIALS: 20 HOURS
TUTORIAL CODE & ASSETS
FILESILO – THE HOME OF PRO RESOURCES
DISCOVER YOUR FREE ONLINE ASSETS
A rapidly growing library
Updated continually with cool resources
Lets you keep your downloads organised
Browse and access your content from anywhere
No more torn disc pages to ruin your magazines
No more broken discs
Print subscribers get all the content
Digital magazine owners get all the content too!
Each issue's content is free with your magazine
Secure online access to your free resources
This is the new FileSilo site that replaces your disc. You'll find it by visiting the link on the following page. The first time you use FileSilo you'll need to register. After that, you can use the email address and password you provided to log in.
The most popular downloads are shown in this carousel, so see what your fellow readers are enjoying!
If you’re looking for a particular type of content like distros or Python files, use these filters to refine your search.
Green open padlocks show the issues you have accessed. Red closed padlocks show the ones you need to buy or unlock. Top Downloads are listed here, so you can get an instant look at the most popular downloaded content. Check out the Highest Rated list to see the resources that other readers have voted for as the best!
Find out more about our online stores, and useful FAQs like our cookie and privacy policies and contact details.
Discover our amazing sister magazines and the wealth of content and information that they provide.
www.linuxuser.co.uk
97
FileSilo
HOW TO USE
EVERYTHING YOU NEED TO KNOW ABOUT ACCESSING YOUR NEW DIGITAL REPOSITORY
01
Follow the instructions on-screen to create an account with our secure FileSilo system, then log in and unlock the issue by answering a simple question about the magazine. You can access the content for free with your issue.
02
If you’re a print subscriber, you can easily unlock all the content by entering your unique Web ID. Your Web ID is the eight-digit alphanumeric code printed above your address details on the mailing label of your subscription copies. It can also be found on your renewal letters.
03
You can access FileSilo on any desktop, tablet or smartphone device using any popular browser (such as Firefox, Chrome or Safari). However, we recommend that you use a desktop to download content, as you may not be able to download files to your phone or tablet.
04
If you have any problems with accessing content on FileSilo, or with the registration process, take a look at the FAQs online or email [email protected]
MORE TUTORIALS AND INSPIRATION
Finished reading this issue? There's plenty more free and open source goodness waiting for you on the Linux User & Developer website. Features, tutorials, reviews, opinion pieces and the best open source news are uploaded on a daily basis, covering Linux kernel development, the hottest new distros and FOSS, Raspberry Pi projects and interviews, programming guides and more. Join our burgeoning community of Linux users and developers and discover new Linux tools today.
Issue 161 is on sale 14 January 2016 from GreatDigitalMags.com