Building my first legit computer! Woo! I've bought plenty of parts in the past for my own and other clients' computers, but I have never built my own. Here goes nothing!
I decided to be a bit adventurous and try to shove a reasonably high-end desktop into an 18" x 14" x 6" aluminum carrying case. What I want out of this PC is to host lots of VMs, which in turn host many intensive services and compile large programs. Here are the specs:
Aluminum Brief Case:
http://www.amazon.com/gp/product/B0052PJ39C/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1
Intel i7-4790k
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117369
EVGA NVIDIA GTX760 4gb
http://www.newegg.com/Product/Product.aspx?Item=N82E16814130949
Corsair H100i
http://www.newegg.com/Product/Product.aspx?Item=N82E16835181032
MSI Z97M micro ATX Gaming
http://www.newegg.com/Product/Product.aspx?Item=N82E16813130773
Mushkin Redline 1866 2x8gb model number 997119
http://www.newegg.com/Product/Product.aspx?Item=N82E16820226533
Corsair AX760
http://www.newegg.com/Product/Product.aspx?Item=N82E16817139042
Subtotal with discounts and rebates (Black Friday and Cyber Monday): about $1050
Updates to come!
Tuesday, December 2, 2014
PXE Boot without Hosting a DHCP Server (proxyDHCP)
proxyDHCP and PXE
My goal is to create the most portable, plug-and-play method for PXE booting on an existing network, regardless of subnet constraints.
A simple DHCP request is about more than just acquiring an IP address: the reply can deliver a long list of special parameters that help bring the requester online with a proper network configuration. Most clients just need an IP address and nothing more.
A list of these special parameters can be analyzed here: http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml
PXE booting is a method of booting off of the network. It is available in most standard BIOS configurations from major computer manufacturers, but it is not widely used because it is not considered enterprise-reliable.
Why is PXE booting considered unreliable? When a client makes a DHCP request that includes PXE options, the reply hands the requester the IP address of a PXE host, and the bootable image or code is then transferred from that host over TFTP. TFTP runs over UDP and moves data in small, lock-step blocks with minimal error recovery, so on a busy or lossy network transfers are slow and prone to stalling or failing outright.
How can we make PXE reliable? Transfer the bootable image or code over HTTP instead. HTTP runs over TCP, which handles retransmission and ordering for us, and it moves large files far faster than lock-step TFTP.
Unless a client is configured otherwise, it doesn't care where its DHCP information comes from as long as it gets it. This sounds really insecure because it is; it is also the standard behavior for most internet-connected "things". On the flip side, if there is more than one DHCP server actively leasing IP addresses on the network, a race ensues to see which server can deliver the lease the quickest. That is not ideal for a stable, predictable network, which is why PXE booting is almost always configured alongside the existing DHCP server.
What if there were a way to deliver the PXE DHCP information without leasing IP addresses or disrupting the natural flow of a predictable network, and also reliably deliver bootable images and/or code? There is, and it is called proxyDHCP.
ProxyDHCP allows this very thing and, if configured properly, can adapt to the network in order to deliver a completely dynamic PXE server. My post title is a little misleading: a DHCP server is in fact required, but it runs in a mode that neither interferes with nor modifies the existing network.
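As a concrete illustration of the idea (separate from the PyPXE tool discussed below), dnsmasq from the references can run as a pure proxyDHCP service. This is only a minimal sketch: the network address, the TFTP root, and the boot file name are placeholders you would swap for your own.
dnsmasq --port=0 \
        --dhcp-range=192.168.1.0,proxy \
        --enable-tftp --tftp-root=/srv/tftp \
        --pxe-service=x86PC,"Network Boot",pxelinux
With --port=0 the DNS side is disabled entirely, and the proxy dhcp-range means dnsmasq only supplies the PXE options; the existing DHCP server keeps handing out leases untouched.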
I have done a lot of research and have concluded that psychomario's runnable, Python-based combined PXE/DHCP server is the best way to make this scenario a reality.
Huge props to psychomario for developing this awesome tool.
https://github.com/psychomario/PyPXE
However, there are a few caveats to this series of scripts. The Python TFTP and HTTP server implementations only allow one connection at a time. The alternative is to host your own TFTP and HTTP servers that can handle multiple connections, such as a node.js simple server and "TFTP server for mac". I have used both of these and successfully booted 20 computers at the same time in about 5 minutes with TinyCore Plus.
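If you go the separate-server route, one quick option in the node.js direction is the http-server package, which happily serves many clients at once. This is just a sketch (any HTTP server that handles concurrent requests will do), and it assumes node/npm are installed and the boot files live in /srv/tftp:
npm install -g http-server
http-server /srv/tftp -p 80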
References, research, and code segments:
https://github.com/psychomario/PyPXE
http://www.fogproject.org/
http://www.fogproject.org/wiki/index.php/Using_FOG_with_an_unmodifiable_DHCP_server/_Using_FOG_with_no_DHCP_server
http://ipxe.org/gsoc
http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html
https://rom-o-matic.eu/
https://github.com/xbgmsharp/ipxe-buildweb/
/* ignore emacs backups and dotfiles */
if (len == 0 ||
    ent->d_name[len - 1] == '~' ||
    (ent->d_name[0] == '#' && ent->d_name[len - 1] == '#') ||
    ent->d_name[0] == '.')
  continue;
PyPXE is the winner.
Saturday, November 8, 2014
MR3020 Configuration After Installing a Snapshot
The MR3020 is a really neat and cheap router with a ton of capability if it's running OpenWrt. There have been a few formal releases over the past couple of years that are now considered obsolete, and the packages available for those releases are hardly ever upgraded; they were built to work as things stood at release time. If you would like to use the latest packages available in the OpenWrt line, you have to install a snapshot. A snapshot is a nightly build of the very active OpenWrt source. These snapshots can be unstable and possibly unusable, but the tradeoff can be worth it. For me it was, since I use dnsmasq a lot.
If you install a snapshot version on your router, you will not have luci. This means everything has to be done by hand. Here is how you do it on a MR3020.
Here are all of the packages currently installed with my snapshot:
base-files - 156-r43124
busybox - 1.22.1-3
dnsmasq - 2.72-1
dropbear - 2014.65-2
firewall - 2014-09-19
fstools - 2014-10-27-d71297353dc45eaf8f7c252246490746708530f9
hostapd-common - 2014-10-25-1
ip6tables - 1.4.21-1
iptables - 1.4.21-1
iw - 3.15-1
iwinfo - 2014-10-27.1-d5dc3d0605f76fbbbad005d998497e53a236aeda
jshn - 2014-10-14-464e05e33b4c086be0bd932760a41ddcf9373187
jsonfilter - 2014-06-19-cdc760c58077f44fc40adbbe41e1556a67c1b9a9
kernel - 3.10.58-1-8ba75c28f46d1c58a922f1e15f98d811
kmod-ath - 3.10.58+2014-10-08-1
kmod-ath9k - 3.10.58+2014-10-08-1
kmod-ath9k-common - 3.10.58+2014-10-08-1
kmod-cfg80211 - 3.10.58+2014-10-08-1
kmod-crypto-aes - 3.10.58-1
kmod-crypto-arc4 - 3.10.58-1
kmod-crypto-core - 3.10.58-1
kmod-gpio-button-hotplug - 3.10.58-1
kmod-ip6tables - 3.10.58-1
kmod-ipt-conntrack - 3.10.58-1
kmod-ipt-core - 3.10.58-1
kmod-ipt-nat - 3.10.58-1
kmod-ipv6 - 3.10.58-1
kmod-ledtrig-usbdev - 3.10.58-1
kmod-lib-crc-ccitt - 3.10.58-1
kmod-mac80211 - 3.10.58+2014-10-08-1
kmod-nf-conntrack - 3.10.58-1
kmod-nf-conntrack6 - 3.10.58-1
kmod-nf-ipt - 3.10.58-1
kmod-nf-ipt6 - 3.10.58-1
kmod-nf-nat - 3.10.58-1
kmod-nf-nathelper - 3.10.58-1
kmod-nls-base - 3.10.58-1
kmod-ppp - 3.10.58-1
kmod-pppoe - 3.10.58-1
kmod-pppox - 3.10.58-1
kmod-slhc - 3.10.58-1
kmod-usb-core - 3.10.58-1
kmod-usb-ohci - 3.10.58-1
kmod-usb2 - 3.10.58-1
libblobmsg-json - 2014-10-14-464e05e33b4c086be0bd932760a41ddcf9373187
libc - 0.9.33.2-1
libgcc - 4.8-linaro-1
libip4tc - 1.4.21-1
libip6tc - 1.4.21-1
libiwinfo - 2014-10-27.1-d5dc3d0605f76fbbbad005d998497e53a236aeda
libjson-c - 0.11-2
libjson-script - 2014-10-14-464e05e33b4c086be0bd932760a41ddcf9373187
libnl-tiny - 0.1-3
libubox - 2014-10-14-464e05e33b4c086be0bd932760a41ddcf9373187
libubus - 2014-09-17-4c4f35cf2230d70b9ddd87638ca911e8a563f2f3
libuci - 2014-04-11.1-1
libxtables - 1.4.21-1
mtd - 20
netifd - 2014-10-24-b46a8f3b9794efed197ffd2f6f62eb946de5f235
odhcp6c - 2014-10-25-940e2141ab13727af6323c4d30002f785e466318
odhcpd - 2014-10-18-b461334ab277b6e8fd1622ab7c8a655363bd3f6c
opkg - 9c97d5ecd795709c8584e972bfdf3aee3a5b846d-7
ppp - 2.4.7-3
ppp-mod-pppoe - 2.4.7-3
procd - 2014-10-30-07c7864d49723b1264ee8bcd6861ea92f679ee98
swconfig - 10
uboot-envtools - 2014.07-1
ubox - 2014-10-06-0b274c16a3f9d235735a4b84215071e1e004caa9
ubus - 2014-09-17-4c4f35cf2230d70b9ddd87638ca911e8a563f2f3
ubusd - 2014-09-17-4c4f35cf2230d70b9ddd87638ca911e8a563f2f3
uci - 2014-04-11.1-1
wpad-mini - 2014-10-25-1
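For reference, a list like the one above can be dumped straight from the router with the stock package manager:
opkg list-installed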
You will notice luci is absent.
I hope this helps somebody, and if you need any help with any of the steps below, I'll do the best I can to assist. Here we go:
- First, here is the default configuration:
- The ethernet interface is configured with a DHCP server and a LAN-zoned firewall, with a static IP of 192.168.1.1
- The wifi is disabled and is not configured
- The default configuration for the enabled wireless interface is an access point. We need access to the internet so we can update and install the packages that will give us luci.
- It is much easier to set up the wireless as a client with a wan interface and a gateway than to use the ethernet in the same manner, since the majority is already set up to do so.
- The sliding switch (AP, WISP, 3G) is not active and currently serves no purpose besides providing a way to boot into "failsafe" mode
- Connect your computer through ethernet as if it were a typical client accepting a DHCP lease
- Telnet to the static ip and change the root password by doing the following
- telnet 192.168.1.1
- passwd
- enter the desired password
- exit
- Now ssh into the router by doing the following
- ssh root@192.168.1.1
- enter the password
- We are now in the router and will begin performing the configuration
- Enable the wireless
- vim /etc/config/wireless
- comment out or delete the line that disables the radio (the "option disabled 1" line on most builds)
- wifi down; wifi up
- The wifi has a default configuration as an access point
- see configuration below
- Setup the network configuration for the wireless as a client to get internet
- vim /etc/config/network
- see configuration below and append that to the end of the file
- wifi down; wifi up
- Setup the wireless network to interface with your external router or access point; the location where your local internet connection is coming from
- vim /etc/config/wireless
- see configuration below and edit the existing "radio0" device
- wifi down; wifi up
- Analyze the dmesg of the router. If everything is configured correctly, you will notice that there will be a message that reads "wlan0 associated!"
- Verify that you are connected to the internet
- ping google.com
- If you get ping responses then everything is good!
- If not, troubleshooting may need to take place... dmesg is your friend in this case
- opkg update
- We are now connected to the internet. You have 2 options at this point:
- You can read my previous post on how to make an extroot filesystem to allow more space for packages
- You can go ahead and install luci and luci-ssl (both are required in order for it to work correctly; a sketch of the commands appears after the configs below) and risk the possibility of a completely full disk
- With luci you can now easily configure your router
- As mentioned in my previous post on how to extroot your filesystem, you can restore the defaults if you screw up somehow by simply entering "firstboot"
- Finished!
Network file configuration (/etc/config/network) that adds the wan firewall zone for the wireless client, with a couple of my favorite DNS servers for the best results (OpenDNS and Google):
config interface 'wan'
        option ifname 'wlan0'
        option proto 'dhcp'
        option peerdns '0'
        option dns '208.67.222.222 208.67.220.220 8.8.8.8 8.8.4.4'
Wireless configuration (/etc/config/wireless) for associating with your upstream access point or router. This is configured as a client (mode 'sta') to my WPA2-encrypted (encryption 'psk2') access point, attached to the wan network with no DHCP server of its own since it will be accepting an IP lease (network 'wan'):
config wifi-device 'radio0'
        option type 'mac80211'
        option channel '11'
        option hwmode '11g'
        option path 'platform/ar933x_wmac'
        option htmode 'HT20'

config wifi-iface
        option device 'radio0'
        option network 'wan'
        option mode 'sta'
        option ssid 'yourwifiSSIDhere'
        option encryption 'psk2'
        option key 'yourpasswordhere'
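Once the router has internet access, getting the web interface back is only a few commands. This is a minimal sketch assuming the standard LuCI packages in the snapshot feed (uhttpd is pulled in as a dependency):
opkg update
opkg install luci luci-ssl
/etc/init.d/uhttpd enable
/etc/init.d/uhttpd start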
Monday, November 3, 2014
How to unbrick TP-Link mr3020 on OSX Yosemite
For reasons that were less than desirable, I accidentally bricked my router; tired, hungry, rushed, not really thinking. Anyway, I flashed my other router's firmware (WNDR3700) onto the MR3020. Yeah, as you might have predicted, that didn't go over well. I learned.
Materials used:
- 1 bricked mr3020
- 1 ethernet cable
- 1 mini usb cable
- 1 CP2102 USB to UART breakout board - http://www.amazon.com/gp/product/B009T2ZR6W/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
- 3 female to female jumper wires (mine were supplied with the CP2102)
- 1 soldering iron
- 1 header with 3 prongs or 3 sockets (male or female) - http://www.amazon.com/gp/product/B005HN237S/ref=oh_aui_search_detailpage?ie=UTF8&psc=1
- 1 OSX computer (I used 10.10 Yosemite)
- Download
  - Correct openwrt mr3020 firmware
- Download and install
  - CoolTerm - http://freeware.the-meiers.org
  - TftpServer - http://ww2.unime.it/flr/tftpserver/
  - SLAB_USBtoUART - http://www.silabs.com/products/mcu/pages/usbtouartbridgevcpdrivers.aspx
Overview and a brief rundown of what is about to take place:
At this moment the router is rebooting constantly (about once every 2 seconds) because it's trying to load an incompatible firmware and it's not smart enough to do anything else but quit and reboot. What we have to do is gain access to the router's UART pins and send instructions through them using the breakout board and a serial connection. We have to tell it to re-flash itself with a new image from a hosted TFTP server over a standard RJ45 ethernet connection.
Steps:
- Open up the MR3020 by taking a really hard piece of plastic or thin screw driver and pry up the edge by the mini usb and ethernet jack. Work your way around until the whole top pops off. It's almost impossible to not nick up or break the casing while prying and bending.
- Once you managed to get the top off, gently lift the board from the edge that is on the opposite side as the ethernet jack until it comes out.
- Holding the board vertical with the ethernet jack pointed upwards, you will notice that there are 4 pins on the bottom with a very small "p1" to the right of them. The 3 rightmost pins are the ones we are going to use. From left to right they are, Ground, RX, TX.
- Solder the 3 pins from the header in those holes, the leftmost hole should be empty.
- Connect your CP2102 breakout board to these pins, but cross the TX and RX lines so they complement each other: the router's TX goes to the adapter's RX and vice versa (TX should never go with TX). Ground goes with ground.
- Connect the USB end of the CP2102 breakout board to your computer.
- On the OSX mac, download and install CoolTerm, TftpServer, and the SLAB CP2102 usb driver for the breakout board. Installation is straightforward for all programs.
- Open up CoolTerm and click options. The "Serial Port" option should be lit with a bunch of options on the right including baudrate. Set the port to SLAB_USBtoUART, baudrate to 115200, data bits 8, parity none, and stop bits 1. Then click on the "Terminal" option and select "Line Mode" for "Terminal Mode".
- Click connect. Nothing should be showing because we don't have the router powered up.
- Open System Preferences in OSX and create an ethernet connection with a manual address of 192.168.1.100 and a subnet mask of 255.255.255.0
- Open the TftpServer and select "Reveal" at the top. An empty finder window should open up. Drag your newly downloaded openwrt firmware to the finder window. Now your Tftp server has a file to upload.
- Towards the upper right quadrant of the screen you should see a dropdown box with at least one network interface. Make sure that the 192.168.1.100 address is selected.
- Start the Tftp Server by pressing the "Start TFTP" button at the top right hand corner of the screen.
- Connect the ethernet cable from the router to your mac
- Plug in the router and you should see output in the CoolTerm window. If you do not see any output, or the virtual green "RX" light is not blinking, then diagnose your connection. It is possible that you have the TX and RX wires switched.
- Once you start seeing output, you will notice that it goes down for reboot quite frequently.
- Type the letters "tpl" into the CoolTerm command line and press enter. This will cause the firmware to recognize connection and wait for further instruction. You will know when this works when it populates the word "hornet >"
- Type in the following (wait for each command to fully finish, some take longer than others):
- setenv ipaddr 192.168.1.111
- setenv serverip 192.168.1.100
- tftpboot 0x80000000 openwrt-ar71xx-generic-tl-mr3020-v1-squashfs-factory.bin
- erase 0x9f020000 +0x3c0000
- cp.b 0x80000000 0x9f020000 0x3c0000
- bootm 9f020000
- Your router is now rebooting and unbricked! yay!
I hope this helps you and if you have any questions I'll be more than happy to answer them. Happy hacking!
Props to the developers of CoolTerm and TftpServer.
Great documentation:
http://blog.waysquare.com/how-to-debrick-tl-mr3020/
Friday, September 26, 2014
Upgrade Beaglebone Black (BBB) kernel
An excellent post was written up by Marcos Miranda from element 14 instructing users on how to upgrade the kernel on the BBB.
http://www.element14.com/community/blogs/mirandasoft/2014/04/02/beaglebone-black-upgrading-the-linux-kernel
In a nutshell, it's 3 basic scripts. Before starting, make a directory to put all your scripts. You may name the following files and scripts however you want to. I use generic file names for readability.
1. Create your scripts:
I am naming mine test.sh, upgrade.sh, and clean.sh.
Use vim and save the separate scripts into a directory of your choosing.
mkdir ~/upgradekernel
cd ~/upgradekernel
vim test.sh
_____________________________________________________________________________
#!/bin/bash
wget -nd -p -E https://rcn-ee.net/deb/sid-armhf/LATEST-omap-psp -O ./LatestBBBKernels.txt
cat ./LatestBBBKernels.txt
_____________________________________________________________________________
vim upgrade.sh
_____________________________________________________________________________
#!/bin/bash
# TO BE EXECUTED UNDER ROOT ACCOUNT
# STANDARD DISCLAIMER APPLIES.
set -e
wget -nd -p -E https://rcn-ee.net/deb/sid-armhf/LATEST-omap-psp -O ./LatestBBBKernels.txt
echo -e "#!/bin/bash" > download.sh
echo -ne "wget -c " >> download.sh
sed -n -e 's/ABI:1 STABLE //p' LatestBBBKernels.txt >> download.sh
echo "sh ./install-me.sh" >> download.sh
chmod +x download.sh
sh ./download.sh
rm LatestBBBKernels.txt
_____________________________________________________________________________
vim clean.sh
_____________________________________________________________________________
#!/bin/bash
set -e
# rm -r /root/install-me.sh
# rm -r /root/download.sh
# rm -rf /boot/*-bone41
# rm -rf /boot/uboot/*bak
# rm -f /boot/uboot/tools/restore_bak.sh
# rm -rf /lib/modules/3.8.13-bone41
# apt-get remove --purge -y linux-image-3.8.13-bone41
# apt-get clean all
_____________________________________________________________________________
2. Decide which kernel you would like to install. The keywords are STABLE, TESTING, or EXPERIMENTAL.
Run the following to determine which kernel versions are available to you:
cd ~/upgradekernel
chmod +x test.sh
./test.sh
_____________________________________________________________________________
You should receive an output resembling the following:
--2014-09-26 15:06:58-- https://rcn-ee.net/deb/sid-armhf/LATEST-omap-psp
Resolving rcn-ee.net (rcn-ee.net)... 69.163.222.213
Connecting to rcn-ee.net (rcn-ee.net)|69.163.222.213|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 234 [text/plain]
Saving to: `./LatestBBBKernels.txt'
100%[=======================================================================================================>] 234 --.-K/s in 0s
2014-09-26 15:07:03 (659 KB/s) - `./LatestBBBKernels.txt' saved [234/234]
FINISHED --2014-09-26 15:07:03--
Total wall clock time: 4.7s
Downloaded: 1 files, 234 in 0s (659 KB/s)
root@beaglebone:~/Projects# cat ./LatestBBBKernels.txt
ABI:1 TESTING https://rcn-ee.net/deb/sid-armhf/v3.16.3-bone6/install-me.sh
ABI:1 EXPERIMENTAL https://rcn-ee.net/deb/sid-armhf/v3.17.0-rc6-bone4/install-me.sh
ABI:1 STABLE https://rcn-ee.net/deb/sid-armhf/v3.8.13-bone67/install-me.sh
_____________________________________________________________________________
Notice the TESTING, EXPERIMENTAL, and STABLE. Also notice their associated kernel versions.
3. Run the scripts and modify accordingly.
Analyzing the output, choose which type of kernel installation you would like. Personally I like TESTING so I will modify the upgrade script to replace "STABLE" with "TESTING"
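For example, the only line in upgrade.sh that needs to change is the sed filter that picks the release line out of LatestBBBKernels.txt:
sed -n -e 's/ABI:1 TESTING //p' LatestBBBKernels.txt >> download.sh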
Once you modify upgrade.sh, run it.
./upgrade.sh
Once the upgrade is complete, uncomment the lines in clean.sh that apply to your old kernel version and run it.
Reboot your system and the kernel is now updated! Enjoy!
Props to Marcos for the nice scripts.
Thursday, September 11, 2014
Removing or Editing File Associations through the Windows 7/8 registry
The registry is an incredibly sensitive database that contains a vast amount of highly unorganized data in the form of folders and keys. My patience was really tested when a user decided to open a special file extension in MS Word; now the file's properties display an undesired program as the default. I've also seen people try to open a file where the program doesn't launch properly, which signifies that the parameters are incorrect. If anyone has made this mistake before and would like to resolve it, I have some instructions and places to look.
I will be using the file extension ".ext" as an example for the steps below.
First, everyone should know about the "default programs" section in control panel. Unfortunately, it doesn't paint the whole picture.
In command prompt, type "assoc .ext". This will display the name of the program class (ProgID) associated with the extension. This data comes from the HKEY_CLASSES_ROOT section of the registry.
Also in command prompt, type "ftype nameofprogram", where "nameofprogram" is taken from the output of "assoc .ext". This displays the command that describes how the file is opened. Normally there is a %1 in it, letting you know that the associated file is passed in as a parameter.
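For example, on a stock Windows install the plain-text extension resolves like this (your output will vary with the extension and the programs installed):
assoc .txt
.txt=txtfile
ftype txtfile
txtfile=%SystemRoot%\system32\NOTEPAD.EXE %1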
If you don't get any clues from above and need to go deeper, then here is where the registry comes in.
***BACKUP THE REGISTRY BEFORE MODIFYING THE REGISTRY***
There are numerous places to look for a misbehaving file extension, all are taken from this very well written article:
http://www.mydigitallife.info/how-to-unassociate-remove-or-delete-programs-from-open-with-or-recommended-programs-list/
The following places to look are:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.<extension>\OpenWithList
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.<extension>\OpenWithProgIDs
HKEY_CLASSES_ROOT\.<extension>\OpenWithList
HKEY_CLASSES_ROOT\.<extension>\OpenWithProgIDs
HKEY_LOCAL_MACHINE\SOFTWARE\Clients\<Program Type>\<Program Name>\Capabilities\FileAssociations
HKEY_CLASSES_ROOT\Applications\<application executable name>\SupportedTypes
HKEY_CLASSES_ROOT\SystemFileAssociations\<Perceived Type>\OpenWithList
HKEY_CLASSES_ROOT\SystemFileAssociations\.<extension>\OpenWithList
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths
The most popular one is: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.<extension>\OpenWithList
Removing the OpenWithList key should fix the default "Open With" program listed in the file's properties.
If a file is not opening properly, or if you're crafty and would like to modify how a specific program opens, that lives in the HKEY_CLASSES_ROOT section of the registry. Note that this changes the behavior for every user of the computer, not just yourself.
HKEY_CLASSES_ROOT\<ProgID>\Shell\Open\Command (the ProgID is the name returned by "assoc .ext")
Then edit the (default) key data.
Remember, every time you edit the registry, restarting the explorer.exe process doesn't always make the change take effect; it depends on where in the registry you made the edit. For all of the above, save yourself the headache and just reboot the computer.
Tuesday, July 22, 2014
Allow java swing to send all error messages to a JOptionPane
By default, Java Swing sends error messages to the Java console. That would be OK if the Java console popped up from a runnable Java application, but it does not, and I have not found a way to make it do so. Unless instructed otherwise, Java will hide all messages (even fatal ones) from the user during runtime. That is not acceptable behavior, so I found a solution: route the messages to JOptionPanes as popups.
During the main(String[] args) procedure, insert the following code:
public static void main(String[] args) {
    System.out.println(SwingUtilities.isEventDispatchThread()); // false: still on the main thread
    SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            System.out.println(SwingUtilities.isEventDispatchThread()); // true: now on the EDT
            try {
                // Route any uncaught exception on the EDT to a popup dialog
                Thread.currentThread().setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                    @Override
                    public void uncaughtException(Thread t, Throwable e) {
                        JOptionPane.showMessageDialog(null,
                                e.toString(),
                                "Error",
                                JOptionPane.ERROR_MESSAGE);
                        e.printStackTrace();
                    }
                });
                // ****YOUR APPLICATION STARTUP CODE GOES HERE****
                System.out.println(SwingUtilities.isEventDispatchThread());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}
Create dynamically loading java swing objects
I gotta say, Java Swing isn't the greatest framework for creating interfaces with objects that need to be updated depending on selections. I came across this issue when I was trying to create a panel that would dynamically allocate objects depending on which class the panel was given, chosen by the user from a radio button selection. When a panel was created with the allocated objects, the containing panel would not recognize those changes even after repainting and revalidating. The trick is knowing how the event-driven Swing framework works. Since the objects were not originally created by the event dispatch thread, a.k.a. the AWT-EventQueue, they will not take effect until an EventQueue-owned object "affects" them. So how can the EventQueue thread "own" interface objects that it did not create in the first place?
In comes the SwingWorker. This is an abstract class that lets background processes update the object being modified, even while other work is running concurrently. The SwingWorker can publish the object at any time, even in the middle of a process, because it hands results back to the event dispatch thread, which then takes ownership of the object in need of updating. Now loading bars and live dynamic object allocation are possible. I've created a class below that extends SwingWorker; it takes any Swing parent component (the one that contains the object to be modified) and a child Swing component produced by a method on the chosen class. Depending on what that child method provides, this updates the interface of the parent that contains the child's content.
For example, I have a JPanel that contains another JPanel that will update some JComboBoxes with different values. The kicker is that, depending on the child class, there can be different quantities of JComboBoxes.
Sample Code:
public class DocTypeContainPanel extends JPanel {
    ...
    SwingObjectWorker temp1 = null;
    SwingObjectWorker temp2 = null;
    temp1 = new SwingObjectWorker(pnlSearchPropsContainer, pnlSearchProps.getPropsPanel());
    temp1.execute();
    temp2 = new SwingObjectWorker(pnlDocPropsContainer, pnlDocProps.getPropsPanel());
    temp2.execute();
    revalidate();
    repaint();
    ...
}

class SwingObjectWorker extends SwingWorker<JComponent, Void> {
    private JComponent parentComp;
    private JComponent childComp;

    public SwingObjectWorker(JComponent inparentComp, JComponent inchildComp) {
        parentComp = inparentComp;
        childComp = inchildComp;
        final SwingObjectWorker temp = this;
        this.addPropertyChangeListener(new PropertyChangeListener() {
            @Override
            public void propertyChange(PropertyChangeEvent arg0) {
                if (StateValue.DONE == temp.getState()) {
                    try {
                        // Swap the child component into the parent on the EDT once the worker is done
                        parentComp.removeAll();
                        parentComp.add(get(), "cell 0 0,grow"); // layout constraint (MigLayout-style)
                        parentComp.setVisible(true);
                        parentComp.revalidate();
                        parentComp.repaint();
                        System.out.println("Swing Worker Done!");
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    } catch (ExecutionException e) {
                        e.printStackTrace();
                    }
                }
            }
        });
    }

    @Override
    public JComponent doInBackground() {
        return childComp;
    }

    @Override
    public void done() {
    }
}
Friday, July 11, 2014
Using a TP-Link TL-WN725N USB Wifi Adapter on a Raspberry Pi
For those that purchased a TP-Link TL-WN725N because it was really cheap and didn't bother to check the linux driver support for it... oops
Well here's your fix.
This has been tested on Raspbian on the RPi (Raspberry Pi); your mileage may vary on other operating systems and architectures.
First, in order to compile drivers for Linux, we need the Linux headers. Running rpi-update does not provide the headers with the update, since the kernel is precompiled. What we have to do is find the headers for the exact commit of the kernel that is installed. We do this by running an awesome tool called rpi-source, which downloads the correct headers for the specific kernel version currently installed on your RPi.
run in a directory of your choice:
sudo wget https://raw.githubusercontent.com/notro/rpi-source/master/rpi-source -O /usr/bin/rpi-source && sudo chmod +x /usr/bin/rpi-source && /usr/bin/rpi-source -q --tag-update
If you get a message that states "gcc version check: mismatch between gcc (4.6.3) and /proc/version (4.7.2) Skip this check with --skip-gcc", you can simply ignore it by running "rpi-source --skip-gcc"
What this will do is create and install the headers and modules in your /lib/modules/ folder. Now that you have the kernel headers installed, we can move on to compiling the driver. The following comes straight from the examples used for the "rpi-source" repository on github.
You may read and follow the directions here:
https://github.com/notro/rpi-source/wiki/Examples-on-how-to-build-various-modules#tp-link-tl-wn725n-version-2-lwfinger
Otherwise just follow the instructions below, which are copied and pasted from the repository:
______________________________________________________________________________
$ uname -a
Linux raspberrypi 3.12.21+ #1 PREEMPT Sat Jun 14 13:44:18 CEST 2014 armv6l GNU/Linux
$ git clone https://github.com/lwfinger/rtl8188eu.git
$ cd rtl8188eu
$ make all
$ sudo make install
$ sudo depmod
$ sudo modprobe 8188eu
$ lsmod
Module Size Used by
8188eu 796381 0
______________________________________________________________________________
After that, you should have a working TL-WN725N wifi adapter.
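As a quick sanity check (assuming the adapter comes up as wlan0, which it normally does when it is the only wireless interface), the new interface should now be listed:
ip link show wlan0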
Major props go to the developer of rpi-source and to the maintainers of the open source 8188eu driver for the wifi adapter.
Enjoy!
Wednesday, June 25, 2014
Resize partitions for SoC computers and other linux devices (raspberry pi, beaglebone black, etc...)
This only uses fdisk and resize2fs. I ran out of space on the BeagleBone Black, so I could not use parted for partition editing, but the following works just as well. The same procedure applies to Raspberry Pis too. The BBB is running Debian Wheezy burned onto a 32GB micro SD card. raspi-config simplifies the following into one step, but it also uses parted for its partition recreation. We don't have that package at our disposal, so here is the alternative.
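In short, the whole procedure below boils down to the following (device names assume the root filesystem is on /dev/mmcblk0p2, which the first step verifies):
df -h                      # find the partition backing rootfs
fdisk /dev/mmcblk0         # delete partition 2, recreate it with the same start sector, write, reboot
resize2fs /dev/mmcblk0p2   # grow the filesystem to fill the new partition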
enter: df -h
root@beaglebone:/media/Angstrom# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 1.6G 1.6G 0 100% /
udev 10M 0 10M 0% /dev
tmpfs 100M 3.1M 97M 4% /run
/dev/mmcblk0p2 1.6G 1.6G 0 100% /
tmpfs 249M 0 249M 0% /dev/shm
tmpfs 249M 0 249M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/mmcblk0p1 96M 70M 27M 73% /boot/uboot
/dev/mmcblk1p2 1.7G 1.1G 519M 68% /media/Angstrom
/dev/mmcblk1p1 70M 54M 16M 78% /media/
*************************************************************************
Clearly the rootfs has no more space available. We want to find the partition that the rootfs is referencing by looking at the filesystems that start with "/dev/" and are mounted on the same location as the rootfs. In this case (and in most cases with SoC computers) it appears here as /dev/mmcblk0p2.
*************************************************************************
enter: fdisk -l
root@beaglebone:~# fdisk -l
Disk /dev/mmcblk0: 31.9 GB, 31914983424 bytes
4 heads, 16 sectors/track, 973968 cylinders, total 62333952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 2048 198655 98304 e W95 FAT16 (LBA)
/dev/mmcblk0p2 198656 62333951 31067648 83 Linux
Disk /dev/mmcblk1: 1920 MB, 1920991232 bytes
255 heads, 63 sectors/track, 233 cylinders, total 3751936 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk1p1 * 63 144584 72261 c W95 FAT32 (LBA)
/dev/mmcblk1p2 144585 3743144 1799280 83 Linux
Disk /dev/mmcblk1boot1: 1 MB, 1048576 bytes
4 heads, 16 sectors/track, 32 cylinders, total 2048 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mmcblk1boot1 doesn't contain a valid partition table
Disk /dev/mmcblk1boot0: 1 MB, 1048576 bytes
4 heads, 16 sectors/track, 32 cylinders, total 2048 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mmcblk1boot0 doesn't contain a valid partition table
*************************************************************************
This will tell us where the /dev/mmcblk0p2 partition is located. It appears under the disk /dev/mmcblk0, which makes sense because it has 31.9 GB of total usable space. We have made a direct correlation between the disk that needs to be modified and the 32GB card installed in the BBB.
*************************************************************************
enter: fdisk /dev/mmcblk0
root@beaglebone:/media/Angstrom# fdisk /dev/mmcblk0
Command (m for help): p
Disk /dev/mmcblk0: 31.9 GB, 31914983424 bytes
4 heads, 16 sectors/track, 973968 cylinders, total 62333952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 2048 198655 98304 e W95 FAT16 (LBA)
/dev/mmcblk0p2 198656 3481599 1641472 83 Linux
Command (m for help): d
Partition number (1-4): 2
Command (m for help): p
Disk /dev/mmcblk0: 31.9 GB, 31914983424 bytes
4 heads, 16 sectors/track, 973968 cylinders, total 62333952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 2048 198655 98304 e W95 FAT16 (LBA)
Command (m for help): n
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): p
Partition number (1-4, default 2): 2
First sector (198656-62333951, default 198656): (press enter here)
Using default value 198656
Last sector, +sectors or +size{K,M,G} (198656-62333951, default 62333951): (press enter here)
Using default value 62333951
Command (m for help): p
Disk /dev/mmcblk0: 31.9 GB, 31914983424 bytes
4 heads, 16 sectors/track, 973968 cylinders, total 62333952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 2048 198655 98304 e W95 FAT16 (LBA)
/dev/mmcblk0p2 198656 62333951 31067648 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
root@beaglebone:/media/Angstrom# reboot
Broadcast message from root@beaglebone (pts/0) (Wed Jun 25 15:37:07 2014):
The system is going down for reboot NOW!
*************************************************************************
What we did was delete the partition from the partition table, not the data on it, so the filesystem remains intact when we create the new partition, as long as the new partition starts at the same first sector (198656 here, the default). After the second partition has been deleted and recreated, we reboot.
*************************************************************************
log in again and reenter: df -h
root@beaglebone:~# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 1.6G 1.6G 0 100% /
udev 10M 0 10M 0% /dev
tmpfs 100M 540K 99M 1% /run
/dev/mmcblk0p2 1.6G 1.6G 0 100% /
tmpfs 249M 0 249M 0% /dev/shm
tmpfs 249M 0 249M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/mmcblk0p1 96M 70M 27M 73% /boot/uboot
*************************************************************************
We have free space! ... wait... huh?
The available space is still 0???
... remember, we only modified the partition table, not the filesystem inside the partition. Now we have to resize the actual filesystem to match the new partition table.
*************************************************************************
enter: resize2fs /dev/mmcblk0p2
root@beaglebone:~# resize2fs /dev/mmcblk0p2
resize2fs 1.42.5 (29-Jul-2012)
Filesystem at /dev/mmcblk0p2 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/mmcblk0p2 is now 7766912 blocks long.
root@beaglebone:~# reboot
*************************************************************************
From the previous output of "df -h", we want to grow the filesystem on /dev/mmcblk0p2, since that is the rootfs identified in the first step. The filesystem now fills the resized partition.
*************************************************************************
enter: df -h
root@beaglebone:~# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 30G 1.6G 27G 6% /
udev 10M 0 10M 0% /dev
tmpfs 100M 540K 99M 1% /run
/dev/mmcblk0p2 30G 1.6G 27G 6% /
tmpfs 249M 0 249M 0% /dev/shm
tmpfs 249M 0 249M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/mmcblk0p1 96M 70M 27M 73% /boot/uboot
*************************************************************************
Bam! We have copious amounts of free space. Enjoy!
*************************************************************************
Tuesday, May 20, 2014
Installing Microsoft Office 2013 with many licenses
Leave it to Microsoft to screw up a working system...
New for Office 2013 is a brand new method for installing office.
Whether you have created an installation DVD or install through a web browser, the installation follows the same basic, very roundabout procedure.
During the installation process, after you enter the product key, the Office 2013 installation utility (again, it doesn't matter how you install, web or DVD) takes the newly entered product key and populates the product on the screen without any distinguishing identification of that product. If you only have one product, this process is very simple. If you have more than one product key, say 29 product keys like I do, this process is a very annoying game of Russian roulette.
The list that populates with your newly entered product key displays the name of the Office product over and over again, once for each instance of Office you have registered under your account. You have to guess which entry corresponds to the key you just entered.
If you have installed a Microsoft Office product that has been registered and activated on one machine, and you mistakenly click on the wrong Office line, you can end up with an Office install that will not activate, and you will have to uninstall and reinstall to get the new key working.
I have a very long solution that may help in this matter.
The following procedure applies after the first installation, once you move on to the additional installs where the guessing begins.
Make a table that looks like this:
Initial Product Key | New Product Key | Computer Identifier | Line Number
While keeping track of which product key goes to which computer, go to https://office.microsoft.com/en-us/MyAccount.aspx
and click "Install from a disc" under one of the products, click "I have a disc", then click view your product key.
This key is NOT going to be the same key you entered during the Office installation. It appears they create a whole new product key that stems from the original. Record this product key as the replacement for the old product key that was used in the Office installation; it will be the New Product Key in our table.
In your records, find the line number (literally count the instances) on your "My Account" page that does not contain a previously recorded product key.
Use the following line in a command prompt to record the last 5 characters of your product key.
For 32 bit Windows:
cscript "C:\Program Files\Microsoft Office\Office15\OSPP.VBS" /dstatus
For 64 bit Windows (assuming you are using 32 bit Office):
cscript "C:\Program Files (x86)\Microsoft Office\Office15\OSPP.VBS" /dstatus
Record this product key with the computer identifier so you have a link from that computer to the product key.
Using this method there will still be guessing, but a lot less of it.
When you need to change the product key, here are the commands for that
For 32 bit Windows:
cscript "C:\Program Files\Microsoft Office\Office15\OSPP.VBS" /inpkey:yourkeygoeshere
For 64 bit Windows (assuming you are using 32 bit Office):
cscript "C:\Program Files (x86)\Microsoft Office\Office15\OSPP.VBS" /inpkey:yourkeygoeshere
If you are reading this and have not bought Office 2013 but are thinking of doing so, please don't.
Buy or download anything else: MS Office 2010 or lower, LibreOffice, OpenOffice...
Wednesday, May 7, 2014
Install Node.js from git
node.js did not work properly for me through apt-get so I installed it from source.
git clone https://github.com/joyent/node.git
cd node
./configure
make
make install
npm is already included with node, so there is no need to install it separately.
To solve the error:
Error: "pre" versions of node cannot be installed, use the --nodedir flag instead
use the following commands:
npm config set nodedir /directory/to/node
npm config set nodedir /directory/to/node --global
This is because "npm install" needs node's source for binary compilation. Make sure that directory is permanent. I keep mine in place so that I can git pull a newer version right in that directory, or change versions, without any other major modifications. The first command sets the current user's npm config; the second sets the global config.
Monday, May 5, 2014
Macbook Pro 5,5 Brightness Control in Ubuntu
I have verified that the latest release of Ubuntu 14.04 with the latest Nvidia proprietary drivers fixes the brightness controls.
All you have to do is add
Option "RegistryDwords" "EnableBrightnessControl=1"
to /etc/X11/xorg.conf under the device section.
OR
If you do not see a valid xorg.conf (maybe just something that looks like "xorg.conf~"), more than likely the newer method for Xorg configuration is being used, which splits the settings into a collection of ".conf" files inside the xorg.conf.d folder. Create and edit the following file:
mkdir -p /usr/share/X11/xorg.conf.d/
cd /usr/share/X11/xorg.conf.d/
vim 20-nvidia.conf
paste the following:
Section "Device"
Identifier "NVIDIA"
Driver "nvidia"
Option "NoLogo" "True"
Option "RegistryDwords" "EnableBrightnessControl=1"
EndSection
save it by hitting "esc" then ":wq" and finally "enter".
Reboot, then try the F1 and F2 keys. Your brightness should adjust accordingly.
Sunday, May 4, 2014
Nexus 7 Linux Deploy on OTG USB 32GB Flash Drive
I own a Nexus 7 (2013) and have successfully installed Kali on a 32 GB flash drive.
Linux Deploy is a really neat tool that takes full advantage of chroot to boot many linux distros including Ubuntu, Arch, Gentoo, and OpenSuse.
Your mileage may vary with other tablets, but the following describes my working implementation.
- In your favorite android rom, acquire Linux Deploy.
- Format your flash drive to Ext4 using another computer (I don't believe any android apps have this capability yet); see the example after this list.
- Insert the flash drive into the OTG cable and then into the Nexus.
- Start up Linux Deploy and create a new profile. A profile contains all of the settings required to start and maintain the desired linux distro. 1 distro per profile.
- You can set whatever linux distro you'd like, but the key to keeping the distro on the flash drive is in the installation type.
- Select "partition" as the installation type and Ext4 as the format.
- Now to set the installation path.
- You will see 3 vertical dots on the top right hand corner. Select this, then press status.
- A bunch of directories and settings will populate. On the bottom you will see the device path(s) for the usb flash drive partitions.
- Copy the device path you want to install the operating system on and paste it into the installation path.
- Select the Install button at the top of the list of configurations and you should be good to go!
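For the formatting step above, a minimal sketch of what that looks like on a Linux machine; /dev/sdX1 is a placeholder for whatever partition your flash drive shows up as, and the label is optional:
sudo umount /dev/sdX1
sudo mkfs.ext4 -L linuxdeploy /dev/sdX1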
I hope this helped somebody! Props to the developer of Linux Deploy, excellent work.
Thursday, April 24, 2014
Recover Ubuntu from perl catastrophe
In a clueless, blind attempt to fix a small perl issue, I ended up removing the entire module library for perl. Not my smartest move. This broke aptitude so that I could not successfully reinstall perl-base anymore.
I downloaded the original package from an ubuntu mirror with the following command:
wget http://mirrors.kernel.org/ubuntu/pool/main/p/perl/perl-base_5.18.2-2ubuntu1_amd64.deb
then for the install
sudo dpkg -i perl-base_5.18.2-2ubuntu1_amd64.deb
After that I could finally apt-get update without error. Some other commands are having issues because they reference modules that currently don't exist. I applied the same logic to the other perl modules by finding the ubuntu package that contains them.
The next package was doc-base but that depends on libuuid-perl, so complete the same steps as above but with this mirror instead:
wget http://mirrors.kernel.org/ubuntu/pool/main/libu/libuuid-perl/libuuid-perl_0.05-1_amd64.deb
Also, libyaml-tiny-perl
wget http://mirrors.kernel.org/ubuntu/pool/main/liby/libyaml-tiny-perl/libyaml-tiny-perl_1.56-1_all.deb
Then for doc-base:
wget http://mirrors.kernel.org/ubuntu/pool/main/d/doc-base/doc-base_0.10.5_all.deb
... dpkg -i all of these downloaded files.
As you start running programs and get errors such as
Can't locate Debian/AdduserCommon.pm in @INC (you may need to install the Debian::AdduserCommon module)
in my case, I used apt-file to search for the AdduserCommon.pm file and then reinstalled the package that provides it.
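If apt-file is not installed yet, a minimal sketch of that lookup (standard Ubuntu packages, nothing exotic):
sudo apt-get install apt-file
sudo apt-file update
apt-file search AdduserCommon.pm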
If you find yourself apt-getting without errors, you can execute apt-get --reinstall install *package name* instead of downloading the .deb and installing it with dpkg -i.
Tuesday, April 1, 2014
Fastest DNS server for your area
I have noticed a vast improvement after running this tool called namebench. It really does a great job of finding the DNS servers with the best performance for your area.
you can find this tool here: https://code.google.com/p/namebench/
After running this, I have found that using opendns' 208.67.220.222 server is 23.6% faster than my old google 8.8.8.8 server.
If you are wondering how you can change your dns routing, get openwrt. I have a post outlining openwrt on the mr3020. Your router will most likely not be a tp-link mr3020, but roughly the same instructions apply. I advise you to visit openwrt's website for more info: https://openwrt.org/
As usual, hats off to the developers of namebench and openwrt. This would not be possible without you.
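If you just want a quick spot check of the difference without running the full benchmark, dig (from the dnsutils package) reports per-query latency; example.com is only a stand-in domain here:
dig @208.67.220.222 example.com | grep "Query time"
dig @8.8.8.8 example.com | grep "Query time"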
Monday, March 31, 2014
Install Zentyal 3.5 from apt repository
This is arguably the easiest install I've ever done. Hats off to the Zentyal team for making a very clean and robust piece of software.
This works on Ubuntu 13.10 and 14.04, I have not tested on any other distro so use at your own risk!
add the source:
deb http://archive.zentyal.org/zentyal 3.5 main extra
to /etc/apt/sources.list
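If you'd rather not open an editor, the same source line can be appended from the shell:
echo "deb http://archive.zentyal.org/zentyal 3.5 main extra" | sudo tee -a /etc/apt/sources.list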
run:
sudo apt-get update
sudo apt-get install zentyal
open up your favorite web browser (Zentyal recommends firefox), and navigate to the ip address of your server.
for example, mine is https://192.168.3.186/
You should be greeted with a login screen for Zentyal.
Enter your main user just as if you were logging in remotely.
Just a note, some modules are processor intensive. Be aware of what you are enabling.
And that's it! Enjoy!
**I recently upgraded to 14.04 and Zentyal does not work; conflicting perl versions are preventing zentyal-core from installing properly, as seen here:
https://bugs.launchpad.net/ubuntu/+source/zentyal-core/+bug/1310694
UPDATE 05/29/2014:
Zentyal now works with ubuntu 14.04.
using the repository deb http://archive.zentyal.org/zentyal 3.5 main extra
(notice the 3.5, not 3.4)
do a "sudo apt-get update && sudo apt-get install zentyal"
Saturday, March 29, 2014
Automated Water Softener
Using an arduino, a relay, and some buttons I have created an alternative for a very expensive control unit for a water softener.
The control unit had time and a set schedule for cycling the water softener refresh process. I don't need a set time for the water softener to recycle, just someone to press a button on the thing.
So the code is posted here https://github.com/slimjim2234/Automated-Water-Softener
enjoy!
Friday, March 28, 2014
Install Xen Orchestra from source
Instructions to come.
git clone https://github.com/vatesfr/xo-server.git
cd xo-server
npm install
cd ..
git clone https://github.com/vatesfr/xo-web.git
cd xo-web
npm install
node.js did not work properly for me through apt-get so I installed it from source.
git clone https://github.com/joyent/node.git
cd node
./configure
make
make install
To solve the error:
Error: "pre" versions of node cannot be installed, use the --nodedir flag instead
use the following commands:
npm config set nodedir /directory/to/node
or
npm config set nodedir /directory/to/node --global
This is because npm install needs node's source as well as the node executable. Make sure that directory is permanent. I keep mine in place so that I can git pull a newer version right in that directory without any other major modifications. The first command sets the current user's npm config; the second sets the global config.
Also, make sure you are running the latest node 0.10.x release; 0.11.x is not compatible.
Thursday, March 27, 2014
Home Configuration
Router: WNDR3700 running barrier breaker (I upgrade about once a week).
I use this with ddns (http://www.dnsdynamic.org/) to access my home network from anywhere. Also use for port forwarding, static hostnames, ssh tunneling, and network monitoring.
Server (not that great): I use an old dual-core amd-based server with a couple of HDDs attached, just for testing/hosting purposes. I am running Ubuntu 13.10 as my base OS. Just a couple of running services: Jenkins, RabbitMQ, Owncloud 6, Seafile 2.1.5, Codiad from git, cloud9, SoftEther VPN from git, webmin, zentyal (really neat software)... and much more.
Raspberry Pi #1: Running raspbian, solely used for testing bluez from git.
Raspberry Pi #2: Running raspbian, CUPS server connected to Canon S530d.
Raspberry Pi #3: Running raspbian, testing other packages like owncloud.
Beaglebone black A5C #3: Running ubuntu 13.10 on the eMMC and Fedora on the micro sd card
Macbook pro 13" 2009 5,5:
2 x 2 GB g.skill ram
240GB Kingston SSD
750GB WD HDD (replaced super drive for second internal HDD)
Running:
osx 10.9.2
Windows 7
Ubuntu 14.04
Kali 1.0.6
Opensuse 13
Friday, March 21, 2014
Cups 1.7.5, 2.0, or 2.1 on Raspberry Pi Raspbian
In my attempt to turn the raspberry pi into a wireless printer server for my usb-connected Canon s530d printer (yes, I'm well overdue for a 21st century printer upgrade, I just wanted to see if I could make it work), I have concluded that cups and a bunch of other requirements need to be installed. To accomplish this, the prerequisite list is long and compiling is time consuming, so patience must be a prerequisite to these prerequisites. I chose to compile all of these from scratch instead of apt-get all of them because the raspbian and ubuntu-based distros have versions of cups that are highly outdated. As of writing this, I believe they are still on 1.5.3, which is not good. To stay up-to-date, you should go with my route to ensure you are running the latest cups software.
A great source for looking up the proper way to install the programs is linuxfromscratch.org.
Update 12/23/2014
cups 1.7 is no longer maintained.
choose either cups 2.0 or 2.1
Few disclaimers:
*** Needs adequate power ***
Some printers are power hungry (like mine) and the pi will intermittently freeze without warning due to too much current draw through the host USB port. I have been working just fine with a pair of iphone usb chargers totaling approximately 2 amps, joined by a usb y-cable. This one exactly: http://www.amazon.com/gp/product/B0047AALS0/ref=wms_ohs_product?ie=UTF8&psc=1
*** Will not work with all printers ***
My Canon s530d requires experimental gutenprint drivers so I have included those steps below
*** Lengthy compile times ***
Seriously, if you are compiling all of the sources below, you're gonna have a lot of down time. It usually takes me about 2-3 hours from the first step to the last for a full installation.
Download the following sources:
qpdf - "git clone https://github.com/qpdf/qpdf.git"
poppler - "wget http://poppler.freedesktop.org/poppler-0.24.5.tar.xz"
cups-filters - "http://www.openprinting.org/download/cups-filters/cups-filters-1.0.54.tar.bz2"
cups - "https://www.cups.org/software/1.7.5/cups-1.7.4-source.tar.bz2" or "git clone -b branch-1.7 http://www.cups.org/cups.git cups-1.7" or "git clone http://www.cups.org/cups.git cups-2.0" or "git clone http://www.cups.org/cups.git cups-2.1"
ghostscript - "git clone http://git.ghostscript.com/ghostpdl.git"
foomatic-db - "wget http://www.openprinting.org/download/foomatic/foomatic-db-current.tar.gz"
gutenprint - "http://sourceforge.net/projects/gimp-print/files/gutenprint-5.2/5.2.10/gutenprint-5.2.10.tar.bz2"
Bonus! Google Cloud Print!
Cloud Print - "git clone git://github.com/simoncadman/CUPS-Cloud-Print.git"
To start off:
So that the process goes smoothly, remove all references to cups, ghostscript, foomatic, cups-filters, and gutenprint from both the apt package manager and the file system. To do this I normally use apt-get remove cups* and "locate cups" on the command line to find the files and remove them manually. That way I am not accidentally ruining my system by removing files that should not be removed.
The process is nearly the same for all of the installs (with a few exceptions on a few of them), and using sudo before every command is highly recommended.
- autoconf
- ./configure
- make
- sudo make install
Initial cleanup:
Find unnecessary preinstalled packages by running the following:
dpkg --get-selections | grep -v deinstall | grep cups
dpkg --get-selections | grep -v deinstall | grep poppler
dpkg --get-selections | grep -v deinstall | grep ghostscript
dpkg --get-selections | grep -v deinstall | grep foomatic-db
or you could simply run this:
dpkg --get-selections | grep -v deinstall | egrep '^(cups|poppler|ghostscript|foomatic-db)' | awk '{print $1}' | tr '\n' ' ' | xargs sudo apt-get remove
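If you're nervous about what that will pull out, the same pipeline with apt-get's simulate flag prints the plan without removing anything:
dpkg --get-selections | grep -v deinstall | egrep '^(cups|poppler|ghostscript|foomatic-db)' | awk '{print $1}' | tr '\n' ' ' | xargs sudo apt-get -s remove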
Remove all preinstalled packages that look like this:
cups-*
poppler-*
So the compiler doesn't find any precompiled libraries, find any files whose names reference libcups* or libpoppler*, then remove them.
For me, I had to do the following:
sudo rm /usr/lib/arm-linux-gnueabihf/libcups*
sudo rm /usr/lib/arm-linux-gnueabihf/libpoppler*
sudo find . -name "cups*" -not -path "./home/pi/*" -exec rm -rf {} \;
--------------------------------------------------------------------------------------------------------------
1. QPDF
cd qpdf
autoconf
./configure --enable-doc-maintenance
make (at the end of the make you should see a summary of what is available, resolve any dependencies as you wish like docbooks)
sudo make install
There should be no errors; if there are, they are most likely dependency issues. Also, this part has the longest compile time.
--------------------------------------------------------------------------------------------------------------
2. POPPLER
cd poppler
./autogen.sh
./configure --enable-libcurl
make
sudo make install
once again, resolve dependencies as necessary. At the end of the poppler configure you should see a summary that resembles the following code. Make sure you have cairo output, libjpeg, libpng, libtiff, and libopenjpeg checked off with yes. You have to have at least those checked for cups-filters to install properly.
Building poppler with support for:
font configuration: fontconfig
splash output: yes
cairo output: yes
qt4 wrapper: yes
qt5 wrapper: no
glib wrapper: yes
introspection: no
cpp wrapper: yes
use gtk-doc: no
use libjpeg: yes
use libpng: yes
use libtiff: yes
use zlib: yes
use libcurl: yes
use libopenjpeg: yes
use cms: yes
with lcms2
command line utils: yes
test data dir: /home/pi/Projects/poppler-0.24.5/./../test
--------------------------------------------------------------------------------------------------------------
3. CUPS
Just follow these instructions http://www.linuxfromscratch.org/blfs/view/svn/pst/cups.html
They did a good job, and I'd basically be copying and pasting. You don't have to do the patch, so here are the instructions without it.
For <username>, insert an admin user you would like to have control over the printer settings. I used my default "pi" user. You can add as many as you would like.
useradd -c "Print Service User" -d /var/spool/cups -g lp -s /bin/false -u 9 lp
groupadd -g 19 lpadmin
usermod -a -G lpadmin <username>
cd cups-1.7.5
or
cd cups-2.0*
./configure
make
sudo make install
echo "ServerName /var/run/cups/cups.sock" > /etc/cups/client.conf
--------------------------------------------------------------------------------------------------------------
4. GHOSTSCRIPT
cd ghostpdl/gs
sudo apt-get install libxt-dev
./autogen.sh
./configure
make
sudo make install
sudo make install-so
--------------------------------------------------------------------------------------------------------------
5. CUPS-FILTERS
cd cups-filters-1.0.54
autoconf
./configure
make
sudo make install
--------------------------------------------------------------------------------------------------------------
****This step is optional and is dependent upon the requirements of your printer driver****
6. FOOMATIC-DB
cd foomatic-db-*
./configure
make
sudo make install
--------------------------------------------------------------------------------------------------------------
****This step is optional and is dependent upon the requirements of your printer driver****
7. GUTENPRINT
cd gutenprint-5.2.10
sudo apt-get install texlive-fonts-extra doxygen
./configure
make
sudo make install
--------------------------------------------------------------------------------------------------------------
8. Google Cloud Print
cd CUPS-Cloud-Print
./configure
make install
--------------------------------------------------------------------------------------------------------------
Add the following at the top of /etc/init.d/cups (sudo vim /etc/init.d/cups),
then run "sudo update-rc.d cups defaults"
### BEGIN INIT INFO
# Provides: cups
# Required-Start: $syslog $remote_fs
# Required-Stop: $syslog $remote_fs
# Should-Start: $network avahi
# Should-Stop: $network
# X-Start-Before: samba
# X-Stop-After: samba
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: CUPS Printing spooler and server
### END INIT INFO
This prevents the following loop message when trying to install other packages and invoking update-rc.d:
"insserv: Starting cups depends on plymouth and therefore on system facility `$all' which can not be true!"
---
The installation should be finished with no errors. Now we have to fix some issues with cups, because the new cups-filters uses newer methods for test banners.
Follow the instructions to fix this here: http://www.bsmdevelopment.com/Reference/Tech_20130002.html
The easiest way to get to the web interface without editing the cups conf file is to use lynx and do the following:
apt-get install lynx
lynx http://localhost:631
Use the arrows to navigate to the administration word, type "y" to enable the cookie, and a new page should populate. In the middle of the page you should see "Allow remote administration". Navigate with the "down" arrow to the bottom and press enter on the desired highlighted box. It will ask for authentication; use the root or current username and password, as long as it is part of the lpadmin group described in the CUPS link.
Alternatively, the same setting can be enabled straight from the command line:
sudo cupsctl --remote-admin
Trial and error resolutions:
If you receive the error:
syntax error near unexpected token `win32-dll' trying to run ./configure
for qpdf or poppler, run the command:
aclocal
then
autoreconf -i
try ./configure again and you should be good to go
___________________________________________________________________
If you receive the error
libqpdf/SecureRandomDataProvider.cc:92:4: error: #error "Don't know how to generate secure random numbers on this platform. See random number generation in the top-level README"
trying to configure qpdf, configure using:
./configure --enable-insecure-random
___________________________________________________________________
If you receive the error:
/home/pi/Projects/cups-filters-1.0.48/filter/pdftoraster.cxx:1807: undefined reference to `GfxColorSpace::setDisplayProfile(void*)'
Make sure you have libopenjpeg-dev installed. Configure, recompile, and install poppler again. Then cups-filters again.
___________________________________________________________________
If you receive an error that contains anything with an undefined reference to pwg* like pwgMediaForLegacy
The compiler is linking to an older library which means you may have a previous version of cups or a cups library installed.
When I received this error I had to remove all cups libraries in the following folder:
/usr/lib/arm-linux-gnueabihf/
To find all references to the libraries on your computer, use locate to find installed files that don't appear to be in the correct place, or "ls -l" a folder to check the dates and see if they're rather old. The files I removed had a date of October 13th, 2013. They definitely should not have been there and needed to be removed.
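A quick way to spot stale copies and check their timestamps (the path below is the Raspbian armhf multiarch directory, and locate assumes the mlocate package is installed):
ls -l /usr/lib/arm-linux-gnueabihf/ | grep -i cups
sudo updatedb
locate libcups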
apt-get remove cups*
rm /usr/lib/arm-linux-gnueabihf/"cups libraries"
and you should be good to go
___________________________________________________________________
If you receive the error "Error: Success" (or something along those lines) when trying to automatically find the driver for the printer or "lpinfo -m" doesn't populate any drivers, most likely the foomatic drivers are conflicting with the gutenprint drivers.
To fix it, locate every "foomatic" reference and "sudo rm -r" it.
Reinstall the cups-filters and gutenprint drivers and run "lpinfo -m"; if any drivers populate, you should be good to go.
OR
you could change the permissions of the foomatic driver database by "chmod 644 /usr/lib/cups/driver/foomatic-db-driver"
___________________________________________________________________
If you receive the following error in your error page under the administrative menu:
no profiles specified in PPD
Then you need to install ghostscript. See step 4.