
HOW TO TELL IF YOUR COMPUTER HAS A VIRUS

A virus attack on a home computer is rarer these days than it used to be. However, a virus can still easily infect a computer that has no anti-virus protection, or one whose protection has not yet caught up with a brand-new infection circulating on the internet.

Unless a virus is detected by software designed to find it, you usually won't know your computer is infected until strange things start to happen. The following kinds of irregular behavior can point to a virus infection:

• One or more programs may take an unusually long time to start
• A program may refuse to start at all
• A program may not behave normally
• The internet may fail to connect when you attempt a connection
• Anti-virus or anti-malware programs may refuse to start or run
• Your email program may refuse to operate, or may send messages of its own accord
• The computer may repeatedly shut down and restart without your intervention
• The computer may take an unusually long time to perform any task

To prevent further damage, it is best not to operate the computer any further. Also make sure you are disconnected from the internet; otherwise you may spread the infection to other computers or pick up additional infections yourself.

To return the machine to normal operation, restart it in safe mode and then run full scans with an anti-virus program followed by one or more anti-malware tools, one after another.

Alternatively, you may need the services of a computer professional or technician to ensure complete removal and to save time and future costs. If you reach the point where a complete re-installation of your operating system is required and you have not backed up your precious data, the loss can be incalculable.

HOW MUCH DO YOU KNOW ABOUT ANTI-VIRUS SOFTWARE

Many people have anti-virus software installed on their computers. They trust their anti-virus software and rely on these intelligent and powerful tools to protect their system security. But how much do you really know about anti-virus software? Is this virus-removal tool always able to protect your computer? Here is what you should know.

Anti-virus software, also called security software, is a kind of program designed to detect and remove code that is harmful to a computer, such as viruses and Trojans.

In recent years, products under names such as Internet Security Suite and Overall Security Suite have appeared one after another; these are also software for cleaning up viruses, Trojans and other malicious programs. An anti-virus tool usually combines utilities for monitoring and identification, virus scanning and removal, and automatic updates, while some packages also include data-recovery utilities.

1. It is impossible for an anti-virus program to remove all viruses.

2. Anti-virus software is not always able to remove a virus it has found.

3. For each operating system on a computer, there should not be two or more anti-virus programs installed at the same time. (The exception is anti-virus programs that are designed to be compatible or are produced by the same developer.)

4. At present, there are several ways an anti-virus program may deal with an infected file; a minimal sketch of this dispatch follows the list:

1) Clean up. The software does not delete the infected file but tries to strip the virus code out of it. This approach is also called Repair.
2) Remove. The software deletes the detected infected file completely.
3) Rename. The software renames the infected file so that the virus cannot execute, because it can no longer find a file with the expected name.
4) Ignore. The software could not clean up the infected file with any of the methods above, or the user chose not to deal with it.
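
As a minimal sketch of this dispatch (my own illustration, not code from any real product), the four options can be expressed as a simple function; the file path, the ".vir" rename suffix and the print-only "clean" step are hypothetical placeholders.

```python
from pathlib import Path

QUARANTINE_SUFFIX = ".vir"  # assumed rename convention, for illustration only

def handle_infected_file(path: Path, action: str) -> None:
    """Apply one of the four strategies: clean, remove, rename or ignore."""
    if action == "clean":
        # A real scanner would strip the injected virus code and repair the
        # file in place; here we only mark where that step would happen.
        print(f"Repairing {path} in place")
    elif action == "remove":
        path.unlink()  # delete the infected file completely
    elif action == "rename":
        # Renaming breaks the name the virus relies on in order to execute.
        path.rename(path.with_name(path.name + QUARANTINE_SUFFIX))
    elif action == "ignore":
        print(f"Leaving {path} untouched")  # nothing worked, or the user declined
    else:
        raise ValueError(f"unknown action: {action}")
```

For example, handle_infected_file(Path("invoice.exe"), "rename") would leave an inert "invoice.exe.vir" behind instead of an executable file.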

To sum up, users should understand that anti-virus tools always lag behind the viruses themselves. Therefore, along with keeping your anti-virus software up to date and scanning your computer regularly, you should learn more about computer and network security in general: do not open unknown files, avoid unsafe websites, and change your passwords when necessary. Only then can your computer and network be reasonably well protected.

To keep up with the rapid growth of viruses, Trojans and spyware, there are many anti-virus and anti-spyware products on the security market. Kaspersky and AVG are among the leaders, while newer entrants such as Spyware Cease have impressed many computer users with their security features and dedicated after-sales service.

5 Cool Mouse Operations You Can Use In Windows

Here are five mouse operations that you can use in Windows or associated software.

1 – Open links in new tabs in Windows Internet Explorer

If your mouse has three buttons, you can use the middle one to open links in new tabs: just place the mouse pointer over a link and press down on the middle mouse button (the mouse wheel).

The mouse wheel rolls forward and back, but it can also be pressed down and clicked like a button. Doing this on a link opens that link in a new tab, which is a lot quicker than right-clicking and choosing "Open in new tab." It makes researching a topic easier, since a single click queues up a new tab.

If you are feeling super lazy, you can hold Ctrl and press Tab to cycle through your browser tabs, or hold Alt and press Tab to see a switcher showing everything you have open at the moment, including your tabbed windows.

2 – You may find hidden menus within context menus on Windows

Many items and icons in Microsoft Windows can be right-clicked to reveal a context menu. For some icons, holding Shift while right-clicking reveals an even larger menu. Try it on your hard drive icon. It is a nice little trick if you are a hardcore Windows user.

3 – You are able to select columns of text with some Windows applications

In some applications you can select text as a vertical column rather than in horizontal lines. You do this by holding the Alt key and dragging the mouse across the text you would like to highlight. This works in some versions of Microsoft Word and many advanced text editors, including the excellent code editor Notepad++.

4 – You are able to drag and drop items into some menus

When you right-click an icon on the taskbar, a contextual menu pops up. In many cases, while this menu is open, you can grab certain icons and add them to it. For example, if you right-click the folder icon near the bottom left of the taskbar (next to the Start menu), you will see a list of your most recently accessed files.

Click and hold an icon on your desktop and drag it into the open menu to pin it to the folder menu. From then on, every time you right-click the folder icon you will see two lists: one based on your recent usage and one that you created yourself. You can use this instead of searching through the directories on your computer to find files.

5 – Sometimes you are able to select separate chunks of text

If you would like to select several separate chunks of text, you can hold the Ctrl key while highlighting each section, skipping the pieces you do not want. This is a nice alternative to a single continuous selection, which is all-inclusive and does not let you omit certain parts of the text.

About the author – My name is Sonia Jackson. I represent the website www.writing-research-papers.org. We help with writing essays and research papers on short deadlines, answer your questions and give useful advice.

The Battle for Access

If governments fund scientific research, should for-profit publishers be able to copyright the findings? In 2015, Elsevier, a major publisher of academic journals, filed a lawsuit against Sci-Hub, a website started in 2011 that now houses roughly 60 million pirated articles for free download — a violation of copyright law.

In 2016, the case turned an ongoing discussion about access to research in the digital age into a public debate. Open-access advocates, like Sci-Hub’s founder, Alexandra Elbakyan, contend that freely sharing research promotes faster innovation. And it doesn’t exclude scientists who work at institutions that can’t afford journal subscriptions, which range from hundreds to tens of thousands of dollars. But traditional “gatekeeper” publishers like Elsevier worry that sites like Sci-Hub could lower standards and promote irresponsible science. Discover asked all sides to weigh in on the future of scientific publishing.

Fred Fenter

Society produces 2.5 million scientific articles per year, a number that’s growing exponentially. Too many of these are still being validated and disseminated according to processes established during the middle of the 20th century. This situation causes inefficiencies and delays in the communication of scientific discovery. Despite the huge advances in information technology, underlying mentalities are slow to change.

Today, subscriptions are paid for through institutional overheads. Authors are under the impression that publishing in a subscription journal is “free.” The reality is that, on a paper-by-paper basis, the subscription model is very expensive. If funding agencies denied use of their overheads for payment of journal subscriptions, for example, the university community would be confronted with a real debate on how to [publish research within] their budget.

Peter Suber

Open access for research literature is as old as the internet and web. In fact, it’s older. [The internet’s predecessor] was created in the 1960s to share research. The first open-access journals and repositories were launched in the 1970s and ’80s. The term open access was coined by the Budapest Open Access Initiative in 2002. Sci-Hub is a newcomer.

Scholarly journals don’t buy their articles from authors, and they haven’t since scholarly journals were invented in the mid-17th century. Researchers write articles for impact, not for money, which frees them to consent to open access without losing revenue. All new research literature is born digital, and the internet can share it with a global audience at zero marginal cost. If you write for impact and not for money, it’s foolish to pass up this beautiful opportunity.

7-Eleven Drone Deliveries to Rise in 2017

A dozen lucky 7-Eleven customers have already gotten to taste the possibilities of drone food delivery in Reno, Nevada. In November 2016, these customers experienced the futuristic thrill of placing 7-Eleven orders through an app and then watching a hovering delivery drone drop off their order within 10 minutes. Next year, 7-Eleven plans to expand on such drone deliveries in partnership with a company calling itself the “Uber of drone delivery.”

That self-proclaimed “Uber of drone delivery” is not a tech giant such as Amazon or Google, but a delivery drone startup called Flirtey. The startup aims to beat its bigger rivals to the punch by working closely with regulators and large companies around the world to expand its early foothold in the drone delivery market. In the long term, Flirtey is betting that its delivery drones can deliver more convenience at a price point similar to that of traditional delivery services.

“Flirtey’s pricing is comparable to current last-mile delivery services,” says Matt Sweeney, CEO of Flirtey. “So Flirtey is a faster and more convenient service at a price point competitive with traditional delivery prices.”

The idea of making drone deliveries as cheap for customers as traditional delivery services is easier said than done. Delivery drones also face certain hurdles in making successful deliveries to customers spread across a wide delivery area on a timely schedule. For example, most smaller drones have a fairly limited range before they need to land and recharge their batteries. Delivering packages to homes or businesses surrounded by tall trees and power lines also poses its own challenge.

Why Amazon Dreams of Flying Warehouses

Amazon gets to play full-time Santa Claus by delivering almost any imaginable item to customers around the world. But the tech giant does not have a magical sleigh pulled by flying reindeer to carry out its delivery orders. Instead, a recent Amazon patent has revealed the breathtaking idea of using giant airships as flying warehouses that could deploy swarms of delivery drones to customers below.

Patent filings related to new technology often indulge in fantastical flights of fancy. But it’s worth taking a moment to appreciate some of the truly wilder scenarios being imagined within this Amazon patent filing. One scene envisions human or robot workers busily sorting packages aboard airships hovering 45,000 feet above major cities. Another scene imagines the airship’s kitchen whipping up hot or cold food orders that would be loaded onto delivery drones for delivery within minutes.

A third scene anticipates swarms of delivery drones dropping off orders of food or t-shirts to people attending concerts or sports games. Amazon’s patent filing even considers how the airships could fly at much lower altitudes to act as giant billboards or megaphones that advertise and sell items directly to the crowds below.

There is a method to the madness. Amazon currently aims to attract customers with the promise of getting almost anything—clothing, electronics and groceries—delivered within days or even hours. It is currently racing against Google and delivery drone startups such as Flirtey to become the go-to service for customers who expect speedy deliveries of their purchases. The Amazon patent idea for an “airborne fulfillment center” may never become reality, but it speaks to the company’s ambition to enable an “instant gratification” world for customers.

At its heart, Amazon’s idea for flying warehouses aims to solve two problems. First, a mobile warehouse flying high above cities would theoretically enable Amazon to move its packages and products even closer to customers’ homes and businesses and shorten the time needed for last-mile deliveries. The company could even strategically move certain flying warehouses to different locations depending on temporary demand (such as crowds gathering at stadiums for sporting events or concerts).

Second, the flying warehouse scheme tries to tackle the range problem for delivery drones. The small delivery drones being tested by Amazon have a fairly limited range of approximately 10 miles (or 20 miles roundtrip). That poses a challenge for Amazon’s Prime Air service, which recently began its first deliveries near Cambridge, UK, with the promise of delivering packages within 30 minutes.

IBM’s Watson Replaces 34 ‘White-Collar’ Employees at Japanese Insurance Company

The impact AI and robotics are having on repetitive manual labor is evident — automobile assembly lines and Amazon’s fulfillment centers are just two examples. But many white-collar jobs are similarly repetitive; they can be broken down into steps and decisions that a machine can easily learn.

The bad news is that jobs have been, and will be, eliminated. By 2021, AI systems could gobble up some 6 percent of U.S. jobs, according to a report from Forrester Research. The World Economic Forum predicts advances in AI could eliminate more than 7 million jobs in 15 of the world’s leading economies over several years.

But here’s the upside: Handing repetitive tasks to machines might free us up for higher-level tasks. The same WEF report notes that AI will create 2 million new jobs in computer science, engineering and mathematics. And leaders from tech giants like Google, IBM and Microsoft have said AI will amplify human abilities rather than fully replace us. Instead of sweating time-consuming repetitive tasks, computers will, perhaps, free us up to tackle challenges that require a human touch.

For example, an AI company called Conversica built a system that sends messages to sales leads to get initial conversations started and gauge interest. The most promising leads are then sent to a salesperson to close the deal. IBM’s Watson can dig through medical data and images to find signs of cancer, but the final diagnosis is still in the warm, fleshy hands of a human.

Ovum, a firm that keeps its thumb on the pulse of tech trends, expects AI to be the biggest disruptor for data analytics in 2017. Forrester predicts 2017 will be the year “big data floodgates open,” with investments in AI tripling.

Time will tell if AI lives up to these expectations; in the meantime you can use this helpful tool to determine the likelihood of a computer taking your job.

21st Century Camouflage Confuses Face Detectors

When it comes to disguises, silly mustaches and fake noses won’t cut it anymore.

As facial recognition capabilities grow more sophisticated, cameras and algorithms can do more with less. Even grainy images, like those you might find on a gas station surveillance camera, can hold enough information to match a face to a database. But there are ways to hide.

Gathering Knowledge

Your face is garnering a lot of interest these days. Police departments use facial recognition systems to identify criminals. Facebook knows your friends’ faces. Facial recognition is being incorporated into billboards to display ads based on the sex of the person looking at them.

In the not-so-distant future, your face might replace your wallet—a smile will serve as your identification card and credit card. Amazon plans to eliminate the checkout line at its new brick-and-mortar grocery store concept, Amazon Go, in Seattle. How? According to the company’s website: “Our checkout-free shopping experience is made possible by the same types of technologies used in self-driving cars: computer vision, sensor fusion, and deep learning. Our Just Walk Out Technology automatically detects when products are taken from or returned to the shelves and keeps track of them in a virtual cart.”

How computer vision will work at Amazon Go isn’t exactly clear, but your face may play an important role in the shopping experience. Right now, the store is only open to Amazon employees, but it is expected to open to the public sometime in 2017.

Given all this attention, there may come a day when we want to avoid this kind of computer recognition.

Adam Harvey, a Berlin-based artist, is developing a line of clothing and accessories aimed at disrupting facial recognition software by fighting fire with fire. His forthcoming HyperFace project is a set of wearable patterns that overloads facial recognition software with images of faces to distract from the real person hiding behind it, exploiting weaknesses in the technology.

Faces In A Crowd

A facial recognition system keys in on dominant features and parses them into numeric sequences that are calculated according to the parameters of the algorithm. By crunching the numbers, it can determine whether it’s “seeing” a human being or not, and who that face belongs to. Some algorithms need no more than 100 pixels—2.5 percent of an Instagram photo—to identify 78 relevant facial characteristics, said Harvey in a 2016 talk at the Chaos Communication Congress.
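
As a rough illustration of that idea (my own sketch, not Harvey's or any vendor's code), the "numeric sequences" can be pictured as feature vectors, and matching a face then amounts to comparing distances between vectors; the 128-number vectors and the threshold below are invented placeholders.

```python
import numpy as np

# Hypothetical 128-number feature vectors standing in for the numeric
# sequences a real face-recognition pipeline would extract from images.
database = {
    "alice": np.random.rand(128),
    "bob": np.random.rand(128),
}

def identify(probe: np.ndarray, threshold: float = 0.6) -> str:
    """Return the closest enrolled identity, or 'unknown' if nothing is close."""
    best_name, best_dist = "unknown", float("inf")
    for name, vector in database.items():
        dist = np.linalg.norm(probe - vector)  # distance between feature vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else "unknown"

# A slightly noisy view of "alice" still lands closest to her stored vector.
print(identify(database["alice"] + 0.01 * np.random.rand(128)))
```

Camouflage like HyperFace attacks the step before this one, flooding the detector with fake candidate faces so that the real face is never cleanly isolated and measured.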

Harvey’s designs are collages that mimic basic facial features, sending a barrage of information that obscures a real face. Theoretically, worn as a shirt, scarf or shawl, the patterns should protect your identity from nosy algorithms.

HyperFace is an extension of Harvey’s NYU thesis project to thwart face detectors with makeup and hair gel. In addition to hairstyles that covered the face, concealing the “T-zone”—the area around the bridge of the nose and eyes—seems to be most important.

The Future of Computing Depends on Making It Reversible

For more than 50 years, computers have made steady and dramatic improvements, all thanks to Moore’s Law—the exponential increase over time in the number of transistors that can be fabricated on an integrated circuit of a given size. Moore’s Law owed its success to the fact that as transistors were made smaller, they became simultaneously cheaper, faster, and more energy efficient. The ­payoff from this win-win-win scenario enabled reinvestment in semi­conductor fabrication technology that could make even smaller, more densely packed transistors. And so this ­virtuous ­circle continued, decade after decade.
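
As an illustrative formula (my own summary of the trend, not an equation from the article), Moore's Law is often written as an exponential in time with a doubling period of roughly two years:

```latex
% Transistor count N after t years, assuming a doubling period T_d of ~2 years
N(t) = N_0 \cdot 2^{\,t / T_d}, \qquad T_d \approx 2\ \text{years}
```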

Now though, experts in industry, academia, and government laboratories anticipate that semiconductor miniaturization won’t continue much longer—maybe 5 or 10 years. Making transistors smaller no longer yields the improvements it used to. The physical characteristics of small transistors caused clock speeds to stagnate more than a decade ago, which drove the industry to start building chips with multiple cores. But even multicore architectures must contend with increasing amounts of “dark silicon,” areas of the chip that must be powered off to avoid overheating.

Heroic efforts are being made within the semiconductor industry to try to keep miniaturization going. But no amount of investment can change the laws of physics. At some point—now not very far away—a new computer that simply has smaller transistors will no longer be any cheaper, faster, or more energy efficient than its predecessors. At that point, the progress of conventional semiconductor technology will stop.

What about unconventional semiconductor technology, such as carbon-nanotube transistors, tunneling transistors, or spintronic devices? Unfortunately, many of the same fundamental physical barriers that prevent today’s complementary metal-oxide-semiconductor (CMOS) technology from advancing very much further will still apply, in a modified form, to those devices. We might be able to eke out a few more years of progress, but if we want to keep moving forward decades down the line, new devices are not enough: We’ll also have to rethink our most fundamental notions of computation.

Let me explain. For the entire history of computing, our calculating machines have operated in a way that causes the intentional loss of some information (it’s destructively overwritten) in the process of performing computations. But for several decades now, we have known that it’s possible in principle to carry out any desired computation without losing information—that is, in such a way that the computation could always be reversed to recover its earlier state. This idea of reversible computing goes to the very heart of thermo­dynamics and information theory, and indeed it is the only possible way within the laws of physics that we might be able to keep improving the cost and energy efficiency of general-purpose computing far into the future.
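
To make the contrast concrete, here is a small sketch (my own example, not from the article) comparing an ordinary AND gate, which discards information, with the reversible Toffoli (controlled-controlled-NOT) gate, which is its own inverse: applying it twice returns the original inputs, so nothing is lost.

```python
def and_gate(a: int, b: int) -> int:
    # Irreversible: an output of 0 could have come from (0,0), (0,1) or (1,0),
    # so the inputs cannot be recovered from the result.
    return a & b

def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    # Reversible: three bits in, three bits out. The third bit is flipped only
    # when both controls are 1, so with c = 0 the third output equals a AND b,
    # yet the inputs are still carried along and nothing is erased.
    return a, b, c ^ (a & b)

for bits in [(0, 1, 0), (1, 1, 0), (1, 0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits  # applying the gate twice undoes it
```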

In the past, reversible computing never received much attention. That’s because it’s very hard to implement, and there was little reason to pursue this great challenge so long as conventional technology kept advancing. But with the end now in sight, it’s time for the world’s best physics and engineering minds to commence an all-out effort to bring reversible computing to practical fruition.

The history of reversible computing begins with physicist Rolf Landauer of IBM, who published a paper in 1961 titled “Irreversibility and Heat Generation in the Computing Process.” In it, Landauer argued that the logically irreversible character of conventional computational operations has direct implications for the thermodynamic behavior of a device that is carrying out those operations.

Landauer’s reasoning can be understood by observing that the most fundamental laws of physics are reversible, meaning that if you had complete knowledge of the state of a closed system at some time, you could always—at least in principle—run the laws of physics in reverse and determine the system’s exact state at any previous time.

To better see that, consider a game of billiards—an ideal one with no friction. If you were to make a movie of the balls bouncing off one another and the bumpers, the movie would look normal whether you ran it backward or forward: The collision physics would be the same, and you could work out the future configuration of the balls from their past configuration or vice versa equally easily.

The same fundamental reversibility holds for quantum-scale physics. As a consequence, you can’t have a situation in which two different detailed states of any physical system evolve into the exact same state at some later time, because that would make it impossible to determine the earlier state from the later one. In other words, at the lowest level in physics, information cannot be destroyed.

The reversibility of physics means that we can never truly erase information in a computer. Whenever we overwrite a bit of information with a new value, the previous information may be lost for all practical purposes, but it hasn’t really been physically destroyed. Instead it has been pushed out into the machine’s thermal environment, where it becomes entropy—in essence, randomized information—and manifests as heat.

Returning to our billiards-game example, suppose that the balls, bumpers, and felt were not frictionless. Then, sure, two different initial configurations might end up in the same state—say, with the balls resting on one side. The frictional loss of information would then generate heat, albeit a tiny amount.

Today’s computers rely on erasing information all the time—so much so that every single active logic gate in conventional designs destructively overwrites its previous output on every clock cycle, wasting the associated energy. A conventional computer is, essentially, an expensive electric heater that happens to perform a small amount of computation as a side effect.
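
As a worked figure (a standard result rather than a number from the article), Landauer's argument puts a floor on the heat dissipated for every bit that is irreversibly erased at temperature T:

```latex
% Minimum energy dissipated per erased bit (Landauer's bound) at T = 300 K
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
         \approx 2.9 \times 10^{-21}\,\mathrm{J}
```

The floor itself is tiny, but today's logic gates dissipate orders of magnitude more than this per operation, and that gap is the headroom reversible computing hopes to reclaim.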

Someone Else’s Computer: The Prehistory of Cloud Computing

“There is no cloud,” goes the quip. “It’s just someone else’s computer.”

The joke gets at a key feature of cloud computing: Your data and the software to process it reside in a remote data center—perhaps owned by Amazon, Google, or Microsoft—which you share with many users even if it feels like it’s yours alone.

Remarkably, this was also true of a popular mode of computing in the 1960s, ’70s, and ’80s: time-sharing. Much of today’s cloud computing was directly prefigured in yesterday’s time-sharing. Users connected their terminals—often teletypes—to remote computers owned by a time-sharing company over telephone lines. These remote computers offered a variety of applications and services, as well as data storage. The key to such systems was the operating system, built to rapidly switch among the tasks for the many users, giving the illusion of a dedicated machine.
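
As a toy sketch of that trick (my own illustration, not Tymshare's actual operating system), a round-robin loop hands each user's task a short turn on the processor before moving to the next; the user names and step counts are invented for the example.

```python
from collections import deque

def user_session(name: str, steps: int):
    # Each "session" is a generator that yields whenever its time slice ends.
    for i in range(steps):
        print(f"{name}: step {i + 1}")
        yield  # hand the processor back to the scheduler

def round_robin(tasks):
    """Cycle through user tasks, giving each one slice per turn until done."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run one slice of this user's work
            queue.append(task)  # not finished: back to the end of the line
        except StopIteration:
            pass                # this user's session has ended

round_robin([user_session("alice", 2), user_session("bob", 3)])
```

Switch fast enough and each user sees an apparently dedicated machine, which is exactly the illusion the time-sharing systems of the era were built to sustain.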

The pioneering firm Tymshare built the largest commercial computer network of its era. Called Tymnet, it spanned the globe and was by the late 1970s larger than the ARPANET; period schematics of Tymnet show far more nodes than ARPANET diagrams from the same years. By 1975, Tymshare was handling about 450,000 interactive sessions per month.

Ann Hardy is a crucial figure in the story of Tymshare and time-sharing. She began programming in the 1950s, developing software for the IBM Stretch supercomputer. Frustrated at the lack of opportunity and pay inequality for women at IBM—at one point she discovered she was paid less than half of what the lowest-paid man reporting to her was paid—Hardy left to study at the University of California, Berkeley, and then joined the Lawrence Livermore National Laboratory in 1962. At the lab, one of her projects involved an early and surprisingly successful time-sharing operating system.

In 1966, Hardy landed a job at Tymshare, which had been founded by two former General Electric employees looking to provide time-sharing services to aerospace companies. Tymshare had planned to use an operating system that had originated at UC Berkeley, but it wasn’t designed for commercial use, and so Hardy rewrote it.