Friday, December 5, 2008
Muddiest Point 12/9
With regard to cloud computing: what happens if the company where your documents are stored goes bankrupt and the website is no longer available? Wouldn't that possibility alone keep Microsoft and others in business, since people would still need to back up materials locally?
Sunday, November 30, 2008
Muddiest Point
Would giving individuals secure access to the personal data the government collects about them be a viable way for the government to maintain security while eliminating privacy concerns?
Wk 13 blog post
The YouTube video was no longer available due to copyright claims from Viacom.
TIA and Data Mining
I thought this was interesting. I had no idea just how many programs the government has put in place to track its citizens. I have nothing to hide and couldn't care less if they know, yet it is totally an invasion of privacy. I like how many newsworthy stories are on this website that I don't hear about on mainstream news.
Notes:
A TIA database would be populated by transaction data from existing databases (financial records, medical records, communication records, and travel records) as well as new sources of information.
A key component of the TIA project was to develop data mining or knowledge discovery tools that would sort through the massive amounts of information to find patterns and associations.
In September 2003, Congress eliminated TIA; however, other similar programs are still being implemented, including Novel Intelligence from Massive Data and programs at the Transportation Security Administration.
Friday, November 21, 2008
Muddiest Point
Are there certain qualifications required to be a part of the "Wikipedia community"? Do they go through some sort of vetting process to make sure they know what they are talking about?
Week 12 post
Using a wiki to manage a library instruction program: Sharing knowledge to better serve patrons
-creates better information sharing
-facilitates collaboration in the creation of resources
-efficiently divides work loads
-two uses-sharing knowledge and ability to cooperate in creating resources
-Commercial sites abound to help you build your own wiki, including Seedwiki, PBwiki, JotSpot, TWiki, and PhpWiki.
-the creator of the wiki decides who has editing rights to the wiki.
-wikis are used to manage public services information, collaborate on and keep track of reference questions and assess databases.
Creating the academic library folksonomy: Put social tagging to work at your institution
Social tagging is a relatively new phenomenon that allows an individual to create bookmarks for web sites and save them online
Tags include subject keywords chosen by the user, brief descriptions of sites
Folksonomy is a taxonomy created by ordinary folks
U of Penn adopted PennTags, where Penn students, faculty, and staff can bookmark useful websites
An example of open source content management software is Drupal
An academic social tagging site is Connotea
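To picture how a folksonomy emerges from individual bookmarks, here is a toy Python sketch of tag aggregation. The class, users, URLs, and tags are all made up for illustration; this bears no relation to how PennTags or Connotea are actually built.

```python
from collections import defaultdict

# Toy folksonomy: each user tags bookmarked URLs with free-form keywords.
# Aggregating everyone's tags yields the "taxonomy created by ordinary folks".
class Folksonomy:
    def __init__(self):
        # (url, user) -> set of tags that user applied
        self._tags = defaultdict(set)

    def tag(self, user, url, *tags):
        self._tags[(url, user)].update(tags)

    def tags_for(self, url):
        """Aggregate tag counts across all users for one URL."""
        counts = defaultdict(int)
        for (u, _user), tags in self._tags.items():
            if u == url:
                for t in tags:
                    counts[t] += 1
        return dict(counts)

f = Folksonomy()
f.tag("alice", "http://example.org/catalog", "library", "opac")
f.tag("bob", "http://example.org/catalog", "library", "search")
# "library" was chosen by two users, so it rises to the top of the folksonomy.
```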
Jimmy Wales- Wikipedia
Neutrality on issues: if contributors cause problems by pushing their own opinions, they will be asked to leave.
The wikipedia core community meets off line too.
Whenever changes are made, a copy is sent to a "Wikipedia community person" to double-check the information and delete what is needed.
A "Votes for deletion" page is used to decide whether something needs to be deleted.
The next step is to create textbooks on Wikipedia. It should take at least 20 years.
Friday, November 14, 2008
Week 11 Post
Dewey Meets Turing
-Librarians, computer scientists, and publishers were interested in the Digital Libraries Initiative, begun in 1994 and funded by the National Science Foundation.
-Computer scientists saw DLI as a chance to impact society.
-Librarians saw DLI as a means to get funding and to ensure the library's continued impact on scholarly work.
-When the web came along, it changed DLI's plans, but the need for better and more complete holdings remains a focus.
-With the web, deals with publishers and copyright restrictions made computer scientists change how they publish their work.
-The library was then forced to change its ideas because of many journal publishers' business decision to charge a premium for digital content, which computer scientists have termed information hubs.
-Opportunities now arise for direct connections between librarians and scholarly authors.
Digital Libraries
-The mantra has been: aggregate, virtually collocate and federate. The goal of seamless federation across distributed, heterogeneous resources remains the holy grail of digital library work.
-DLI-1 funded six university-led projects to develop and implement computing and networking technologies that could make large-scale electronic text collections accessible and interoperable.
Schools are: U of MI, Stanford, UC Berkeley, UC Santa Barbara, Carnegie Mellon, and U of IL Urbana-Champaign.
-Probably the most significant contribution of the IL project was the transfer of technology to our publishing partners and other publishers.
-A large number of significant digital library standards and technologies have been developed by entities outside of the federally funded projects:
publishers
publisher consortia
bibliographic utilities
W3C
academic consortia
NISO
LOC
library integrated system vendors
web search engines
computer companies
Open Source community
Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age
-The development of institutional repositories emerged as a new strategy that allows universities to apply serious, systematic leverage to accelerate changes taking place in scholarship.
-Online storage costs have dropped significantly; repositories are now affordable.
-Operational responsibility for these services may reasonably be situated in different organizational units at different universities promoting collaboration among librarians, IT people, archives and records managers, faculty and university administrators and policymakers.
-A mature and fully realized institutional repository will contain intellectual works of faculty and students.
-Cautions
administration might try to gain more control over faculty intellectual work
overloading the repository
creating repositories too rapidly
Repositories need to be sure to preserve formats, have identifiers, and documentation and management of rights.
Monday, November 10, 2008
Now working
I made a couple of adjustments and my web page now works. I discovered that it works in Internet Explorer but not Firefox.
www.pitt.edu/~lar68
Sunday, November 9, 2008
Assignment : web page
Here is my link to my web page. I emailed Lucie because I couldn't get the links to work. I was able to use them within my Publisher document, but not once I uploaded them to Pitt's server. I am placing the link to the web page here, however I am emailing a copy of the Publisher document to Lucie so she can see that it works in Publisher.
www.pitt.edu/~lar68
Thursday, November 6, 2008
muddiest point
Is it possible to give the writers of this deep web data the ability to tag their own documents? Wouldn't this also help them tag the data in a way that lets web searchers who share their interests find accurate information?
Nov 7th notes
The Deep Web: Surfacing Hidden Value
--Most of the Web's information is buried far down on dynamically generated sites, and standard search engines never find it.
--Deep Web sources store their content in searchable databases that only produce results dynamically in response to a direct request.
--Search engines obtain their listings in two ways: Authors may submit their own Web pages, or the search engines "crawl" or "spider" documents by following one hypertext link to another. The latter returns the bulk of the listings
--Cross-referencing web sites gives better results (e.g., Google)
--BrightPlanet's technology is a "directed-query engine."
--The deep Web is about 500 times larger than the surface Web, with, on average, about three times higher quality based on our document scoring methods on a per-document basis.
--Serious information seekers can no longer avoid the importance or quality of deep Web information. But deep Web information is only a component of total information available. Searching must evolve to encompass the complete Web.
How Things Work--Part one
--Within a data center, clusters or individual servers can be dedicated to specialized functions, such as crawling, indexing, query processing, snippet generation, link-graph computations, result caching, and insertion of advertising content.
--Currently, the amount of Web data that search engines crawl and index is on the order of 400 TB, placing heavy loads on server and network infrastructure.
--The crawler initializes the queue with one or more seed URLs. A good seed URL will link to many high-quality web sites.
--Crawling proceeds by making an HTTP request to fetch the page at the first URL in the queue. When the crawler fetches the page, it scans the contents for links to other URLs and adds each previously unseen URL to the queue. Finally, the crawler saves the page content for indexing. Crawling continues until the queue is empty.
--The simple crawling algorithm must be extended to address the following issues:
----Speed
----Politeness
----Excluded content: the robots.txt file lets the crawler determine whether the webmaster has specified that some or all of the site should not be crawled.
----Duplicate content
----Continuous crawling: carrying out full crawls at fixed intervals would mean a slow response to important changes on the web.
----Spam rejection: primitive spamming techniques include inserting misleading keywords, invisible to the viewer, into pages.
-------Spammers also engage in cloaking, the practice of delivering different content to crawlers than to site visitors.
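The queue-based crawl loop described in these notes can be sketched in a few lines of Python. This is a toy: the "web" is an in-memory dict standing in for real HTTP fetches, and the URLs are invented; a real crawler would also need the politeness, robots.txt, and duplicate handling listed above.

```python
import re
from collections import deque

# In-memory "web" standing in for real HTTP fetches: url -> page HTML.
PAGES = {
    "http://seed.example/": '<a href="http://seed.example/a">A</a> <a href="http://seed.example/b">B</a>',
    "http://seed.example/a": '<a href="http://seed.example/b">B</a>',
    "http://seed.example/b": "no links here",
}

def crawl(seed_urls):
    """Queue-based crawl: fetch a page, extract links, enqueue unseen URLs."""
    queue = deque(seed_urls)
    seen = set(seed_urls)
    saved = {}                           # url -> content, handed to the indexer
    while queue:
        url = queue.popleft()
        content = PAGES.get(url, "")     # a real crawler would do an HTTP GET here
        saved[url] = content
        for link in re.findall(r'href="([^"]+)"', content):
            if link not in seen:         # skip URLs already fetched or queued
                seen.add(link)
                queue.append(link)
    return saved

pages = crawl(["http://seed.example/"])  # crawling stops when the queue is empty
```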
Web Search Engines: Part 2
--Search engines use an inverted file to rapidly identify indexing terms.
--An indexer can create an inverted file in two phases: scanning and inversion
--Scaling up--document partitioning
--Term lookup-the web's vocabulary is unexpectedly large, containing hundreds of millions of distinct terms.
--Compression Indexers can reduce demands on disk space and memory by using compression algorithms for key data structures.
--Phrases Special indexing tricks permit a more rapid response.
--Anchor text Web browsers highlight words in a web page to indicate the presence of a link that users can click on
--Link Popularity Score--Frequency of incoming links
--Query-independent score ranking of websites
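Here is a minimal sketch of the inverted file idea, assuming whitespace tokenization and made-up toy documents (real indexers add compression, phrase handling, and ranking on top of this):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Scan each document, then 'invert': map term -> sorted list of doc IDs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "deep web search", 2: "web crawler", 3: "search engine crawler"}
index = build_inverted_index(docs)

# An AND query intersects the posting lists of its terms:
hits = set(index["web"]) & set(index["search"])  # only doc 1 has both terms
```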
Saturday, October 18, 2008
Amateur Radio (Ham)
My virtual shelf is titled Amateur Radio (Ham)
The link is
http://pitt5.opacwc.liblime.com/cgi-bin/koha/opac-shelves.pl?viewshelf=31
Friday, October 17, 2008
Readings for 10/21/08
www.w3schools.com
Introduction to HTML
This is a step by step guide to HTML programming. The author thinks ahead about the questions a new programmer might have and answers them in a simple way so the novice can understand.
For example, Why program in lowercase? What browser is the best? What to do if you can't view the page?
www.webmonkey.com
HTML Cheatsheet
This is an excellent guide to make programming HTML easier for the amateur programmer.
Muddiest Question
Why learn HTML when there are so many easier, user-friendly programs that are part of AOL or MSN and make it extremely simple to build your own web pages?
Tuesday, October 7, 2008
Assignment 4, Jing
My video is how to print out only the Jing assignment part of the assignment page on course web.
The screen capture images describe a trip to Vieques, Puerto Rico.
http://screencast.com/t/VBnaOc9O
http://www.flickr.com/photos/30190803@N08/2921830002/
http://www.flickr.com/photos/30190803@N08/2921836874/
http://www.flickr.com/photos/30190803@N08/2921844178/
http://www.flickr.com/photos/30190803@N08/2921010385/
http://www.flickr.com/photos/30190803@N08/2921869470/
Friday, October 3, 2008
Friday Post for 10/7 class
Google Creators on TED
Google's business ideas are creative. They allow their employees to work 20% of their time on their own ideas. This benefits the company by allowing the employees to be creative and driven to develop other ideas for Google.
Google also believes that the relationship of the employees outside of work benefits the company with better work ethic and communications between employees.
Google is trying to make the internet smarter, meaning they are coming up with techniques to help searches more accurately represent what the user is looking for.
How Internet Infrastructure Works
Oversight of the internet comes from the Internet Society, a non-profit group established in 1992.
Every computer connected to the internet connects to other computers via a Network Access Point (NAP).
1987--NSF created the first backbone, NSFNET, out of 170 smaller networks
IP address--the Internet Protocol is the language computers use to communicate over the internet.
The Domain Name System (DNS) maps domain names to IP addresses.
Uniform Resource Locator (URL)
Internet servers make the internet possible; clients are the machines that use the servers.
Hypertext Transfer Protocol (HTTP)--the format that allows different servers and clients to communicate.
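The pieces in these notes (the protocol, the DNS name, the resource on the server) all show up inside a URL. A quick Python illustration using the standard library; the Pitt address is just the one from this blog, reused as an example:

```python
from urllib.parse import urlparse

# A URL bundles the pieces described above: the protocol (HTTP), the domain
# name that DNS resolves to an IP address, and the path on the server.
url = "http://www.pitt.edu/~lar68/index.html"
parts = urlparse(url)

print(parts.scheme)    # 'http' -> the protocol the client and server speak
print(parts.hostname)  # 'www.pitt.edu' -> the name DNS maps to an IP address
print(parts.path)      # '/~lar68/index.html' -> the resource requested from the server
```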
Monday, September 22, 2008
Friday, September 19, 2008
Muddiest Question
Is there a way to compress files with the certainty that all the information included can be retrieved?
Week 5 Reading Notes
Data Compression--Wikipedia
Data compression encodes data using fewer bits.
ZIP files store many source files in a single destination output file.
Compression helps to reduce the use of expensive resources.
Lossless compression reduces redundancy in order to reduce file size.
"Lossless compression schemes are reversible so that the original data can be reconstructed, while lossy schemes accept some loss of data in order to achieve higher compression."
Lossy compression is used in digital cameras, DVDs, and audio.
Claude Shannon founded information theory and rate-distortion theory.
DEFLATE is used in ZIP and gzip; GIF images use LZW compression.
Jorma Rissanen created arithmetic coding which "achieves superior compression to the better-known Huffman algorithm."
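A quick round trip with Python's zlib module (which implements DEFLATE) shows lossless compression in action; the input string is arbitrary:

```python
import zlib

# Lossless round trip with DEFLATE: redundant data shrinks, and
# decompression reconstructs the original exactly, bit for bit.
original = b"metadata " * 100                   # highly redundant input
compressed = zlib.compress(original)

assert len(compressed) < len(original)          # stored using fewer bits
assert zlib.decompress(compressed) == original  # reversible: no data lost
```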
You Tube and libraries: It could be a beautiful relationship
Steps to getting started
create a youtube account
edit your channel information-to identify the library
record video
upload video
Advantages
you can upload videos in any format
Maximum file size is 100MB or 10 minutes
Remember to get permission to upload television shows, music, videos, music concerts or commercials
You can even send a video to a blog or cell phone
RSS feeds can notify patrons of new content.
How to use it?
Storehouse of instructional videos
use to introduce resources to campus students (Library Mystery Tour-Williams College Library)
Screen Capture software can be used to build tutorials for the library
University of Pittsburgh Grant
Grant from IMLS to "create a shared gateway to visual image collections in the Pittsburgh region"
Institutions included
Archives Service Center at Pitt
Library and Archives of the Historical Society of Western PA
Carnegie Museum of Art
Characteristics of the Web gateway
Users will be able to
-Conduct a keyword search
-Browse images
-Read about collections and their contents
-Explore the collections by time, place, and theme
-Order image reproductions
Communication among the three organizations is one challenge
-solved partly with e-mail distribution lists
-web postings
-meetings
Each institution has different goals too
Image selection is guided by LC subject headings.
However, the different institutions have more specific controlled-vocabulary resources, so it was decided to use the Dublin Core elements with LCSH, and each institution can additionally add its own controlled vocabulary if it wishes.
Copyright issues
-generic copyright for all items
-more specific copyright given by the institutions for some items
Outcome tests
-Do the collections meet the research needs of the users?
-How many times does each part of the collection get accessed?
Friday, September 12, 2008
Week 4 Reading notes
Database from Wikipedia
"A computer database is a structured collection of records or data that is stored in a computer system"
The software that organizes a database is a database management system (DBMS).
The '90s brought object-oriented databases.
The 2000s brought XML databases.
Types of database organization:
-Hierarchical-data is organized in an inverted tree structure
-Network-records can be a part of any number of named relationships
A relational database uses relations in the set-theory sense.
SQL-a special database language that users use to ask the database a "question".
A network-model database allows a record to be accessed w/o accessing the one above it.
Database transactions use the ACID rule
-A=Atomicity-a transaction's operations must all be done or none of them done
-C=Consistency-integrity constraints must be preserved
-I=Isolation-two transactions cannot interfere with one another
-D=Durability-once committed, a transaction's effects persist, even through crashes
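Atomicity can be demonstrated with Python's built-in sqlite3 module; the accounts table and the simulated crash below are invented for the example:

```python
import sqlite3

# Atomicity demo: the debit and credit must both happen or neither happens.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'a'")
        raise RuntimeError("crash before the matching credit")  # simulated failure
except RuntimeError:
    pass

# The half-finished transfer was rolled back: 'a' still has its full balance.
balance = conn.execute("SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0]
```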
Three types of replication
-Master/Slave all requests are first performed by the master then copied by the slave
-Quorum-majority rules!
-Multimaster-syncs via transaction identifier
Security enforcement
-Access control
-auditing
-encryption
Setting the Stage by Anne J Gilliland
Metadata is "data about data".
Content, context and structure are reflected through metadata.
"Library metadata includes indexes, abstracts, and catalog records created according to cataloging rules and structural and content standards such as MARC, LCSH, and AAT."
Types of Metadata-
administrative
descriptive
preservation
technical
use
Attributes and characteristics of metadata
Source of metadata
method of creation
nature of metadata
status
structure
semantics
level
Lifecycle of digital data
-Creation and multiversioning
-Organization-categorizing once an object is digitized
-Searching and retrieval-metadata created for users to search and retrieve the object via computer
-Utilization-object being used in the digital format
-Preservation and disposition-making sure the metadata remain usable (upkeep)
An Overview of the Dublin Core Data Model--Eric Miller
"The Dublin Core Metadata Initiative is an international effort designed to foster consensus across disciplines for the discovery-oriented description of diverse resources in an electronic environment"
DCMI focuses on Semantic Clarification and identification of common cross-domain qualifiers.
Basis for DCMI requirements:
-Internationalization
-Modularization/Extensibility
-Element Identity
-Semantic Refinement
-Identification of encoding schemes
-Specification of controlled vocabulary
-Identification of structured compound values
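For illustration, a minimal Dublin Core record can be built with Python's standard XML library. The element values below are made up, and only a few of the Dublin Core elements are shown:

```python
import xml.etree.ElementTree as ET

# Standard Dublin Core element-set namespace.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for element, value in [
    ("title", "Muddiest Point Blog"),   # hypothetical values, for illustration
    ("creator", "LIS Student"),
    ("date", "2008-11-14"),
    ("type", "Text"),
]:
    e = ET.SubElement(record, f"{{{DC}}}{element}")
    e.text = value

xml = ET.tostring(record, encoding="unicode")  # serialized record with dc: prefix
```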
"A computer database is a structured collection of records or data that is stored in a computer system"
The software to organize a database is a database management system. (DBMS)
The 90s brought along object oriented databases.
The 2000s brought along XML databases.
Types of database organization:
-Hierarchical-data is organized in an inverted tree structure
-Network-records can be a part of any number of named relationships
A relational database uses relations, a notion drawn from set theory.
SQL-a specialized database language that users employ to ask the database a "question".
A Network model database allows a record to be accessed w/o the one above it being accessed.
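The relational model and SQL notes above can be made concrete with Python's built-in sqlite3 module; the books table and its columns below are invented for the example.

```python
import sqlite3

# In-memory relational database; the schema here is made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, author TEXT, year INTEGER)")
conn.execute("INSERT INTO books VALUES ('Ulysses', 'Joyce', 1922)")
conn.execute("INSERT INTO books VALUES ('Dubliners', 'Joyce', 1914)")

# Ask the database a "question" with SQL: which books did Joyce write after 1915?
rows = conn.execute(
    "SELECT title FROM books WHERE author = 'Joyce' AND year > 1915"
).fetchall()
print(rows)  # [('Ulysses',)]
conn.close()
```

The SELECT statement is the "question"; the DBMS returns only the rows whose fields satisfy it.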
Database transactions use the ACID rule
-A=Atomicity-transactions must all be done or none of them be done
-C=Consistency-integrity constraints must be preserved
-I=Isolation-two transactions cannot interfere with one another
-D=Durability-once committed, a transaction's effects persist even after a system failure
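The atomicity rule above can be sketched with sqlite3 transactions; the accounts table and the simulated crash are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # Atomicity: both halves of the transfer happen together or not at all.
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
    raise RuntimeError("simulated crash mid-transaction")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
    conn.commit()
except RuntimeError:
    conn.rollback()  # undo the partial transfer

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0} - the half-finished transfer was undone
```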
Three types of replication
-Master/Slave-all requests are first performed by the master, then copied by the slave
-Quorum-majority rules!
-Multimaster-syncs via transaction identifier
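The quorum ("majority rules") scheme can be sketched in a few lines of Python; the Replica class and its write method are hypothetical, not a real replication API.

```python
# Minimal quorum-write sketch; the Replica class is invented for illustration.
class Replica:
    def __init__(self, up=True):
        self.up = up
        self.data = {}

    def write(self, key, value):
        if not self.up:
            return False  # replica is down: no acknowledgement
        self.data[key] = value
        return True

def quorum_write(replicas, key, value):
    # Majority rules: succeed only if more than half the replicas acknowledge.
    acks = sum(r.write(key, value) for r in replicas)
    return acks > len(replicas) // 2

print(quorum_write([Replica(), Replica(), Replica(up=False)], "x", 1))  # True
```

With three replicas the write succeeds even though one is down, because two of three (a majority) acknowledged it.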
Security enforcement
-Access control
-auditing
-encryption
Setting the Stage by Anne J. Gilliland
Metadata is "data about data".
Content, context, and structure are reflected through metadata.
"Library metadata includes indexes, abstracts, and catalog records created according to cataloging rules and structural and content standards such as MARC, LCSH, and AAT."
Types of Metadata-
administrative
descriptive
preservation
technical
use
Attributes and characteristics of metadata
Source of metadata
method of creation
nature of metadata
status
structure
semantics
level
Lifecycle of digital data
-Creation and multiversioning
-Organization-categorizing once an object is digitized
-Searching and retrieval-metadata created for users to search and retrieve the object via computer
-Utilization-object being used in the digital format
-Preservation and disposition-making sure the metadata are usable(upkeep)
An Overview of the Dublin Core Data Model--Eric Miller
"The Dublin Core Metadata Initiative is a international effort designed to foster consensus across disciplines for the discovery-oriented description of diverse resources in an electronic environment"
DCMI focuses on Semantic Clarification and identification of common cross-domain qualifiers.
Basis for DCMI requirements:
-Internationalization
-Modularization/Extensibility
-Element Identity
-Semantic Refinement
-Identification of encoding schemes
-Specification of controlled vocabulary
-Identification of structured compound values
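A minimal Dublin Core record can be serialized as XML with Python's standard library. The resource described below is made up, but the dc: namespace URI is the standard one for the Dublin Core element set.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"  # standard Dublin Core element set namespace
ET.register_namespace("dc", DC)

# A minimal Dublin Core record for a hypothetical resource.
record = ET.Element("metadata")
for element, value in [
    ("title", "Sample Digital Photograph"),
    ("creator", "Jane Doe"),
    ("date", "2008-11-30"),
    ("format", "image/jpeg"),
]:
    e = ET.SubElement(record, f"{{{DC}}}{element}")
    e.text = value

xml_out = ET.tostring(record, encoding="unicode")
print(xml_out)
```

Each element (title, creator, date, format) is one of the cross-domain descriptors the DCMI defines so that records from different disciplines can be searched in a common way.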
Tuesday, September 9, 2008
Assignment 2 Flickr Pics
The link to my pictures on Flickr is
http://www.flickr.com/photos/30190803@N08/
Friday, September 5, 2008
Week 3 Readings Post
Linux--
-Unix was widely used by companies and universities until the 1990s, when the home PC became fast enough to support it.
-Linux was then developed to bring the Unix OS to the PC.
-The writers of Linux were careful to use POSIX standards to provide consistency.
-Bell Labs developed the Unix system with the idea that it could run from a basic kernel that is system specific, while the rest would be open source code usable by any system.
-Linux can also be used on PDAs, mobiles and wrist watches.
-Developers have made Linux resemble Windows to make it user friendly.
-Open source software can be adapted to serve the user's needs.
-Linux is free and can be used on any hardware platform, but can be confusing to beginners.
-Linux is written in C programming language.
MAC OS X
-It is an implementation of UNIX derived from NeXTSTEP.
-It was developed by Apple under Steve Jobs.
-It is used on Mac computers and also on the iPhone and iPod touch.
-It concentrates more on the "digital lifestyle" than on compatibility with other equipment.
-However, because of its Unix base, it is compatible with most.
-Mac OS X has "widgets", which are, in essence, like the icons on a Windows PC.
Letter from Bill Veghte
-"Windows XP will be supported until 2014"
-"Windows 7 wll be out in January 2010"
****Muddiest question*****
If Unix is so good, why isn't it used in all computers?
Tuesday, September 2, 2008
Week 1 Assignment
Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture--Clifford Lynch
-content and communication are parts of information literacy
-information literacy therefore includes "authoring, information finding and organization, the research process, and information analysis, assessment and evaluation".
-tools in information technology are word processing, spreadsheets, basic operation of computers and basic Internet tools
-Also understanding how the technologies, systems and infrastructure works is important
-A level of confidence in using the tools is also an important part of Info literacy
-Info Literacy includes text, images and multimedia
Lied Library @four years: technology never stands still--Jason Vaughan
-This article outlines all the changes that UNLV's Lied Library went through to keep up with current technological advances from 2001 to 2004.
-In 2001, the Library opened with the most current technology possible.
-In 2003, the Library revamped its entire computer system with 600 new computers and new software packages. Students are also able to check out laptops to use within the Library.
-Deep Freeze software erases any information a student downloads onto the library's computers; it also removes all cookies.
-Some challenges included space for staff offices, workroom space, and temperature regulation in the computer area.
-Another challenge was theft of computers, which they eliminated by using security cameras.
2004 Information Format Trends: Content, Not Containers---OCLC
-"Digital content is often syndicated instead of being prepackaged and distributed, and access is provided on an as-needed basis to the information consumer by providers outside the library space."
-Content can be made available in many formats and tried out by consumers before purchase; once the right information is found, it can be bought in its entirety.
-People can "self-publish" their work to spread the word about what they are doing.
-McLuhan, in 1964, said the medium is the message, not the container in which the info is dispersed.
-People are paying for smaller amounts of content, for example ring tones.
-Some examples of social publishing by individuals or groups w/o controls are wikis and blogs
-Print books and magazines have seen a drop in sales and e-book distribution is rising.
-As more scholarly journals are digitized, more research will be published.
-"Libraries should move beyond the role of collector and organizer of content, print and digital, to one that establishes the authenticity and provenance of content"
****Muddiest Question****
Is it realistic to think that every young person is going to be competent in understanding how the technologies and systems work without specialized training?
Friday, August 29, 2008
Week 2 Readings Post
In reference to the Wiki article about computer hardware:
-hardware is in most situations not changeable
-software is changeable
-firmware is rarely changeable
-ROM is read-only memory, an example of firmware
-BIOS is the Basic Input-Output System, itself an example of firmware
-The motherboard has controllers for the hard disk, CD-ROM, and other devices
-The graphics card can be on the motherboard or in a separate slot made especially for it
-DVDs store up to 6x the amount of information that a CD does
-Sound card is an audio device that uses input and output
-Peripherals are input and/or output devices that run externally, such as the mouse, keyboard, gaming devices, microphones, scanners, and webcams
In reference to Moore's law:
-Moore's law was formulated by Gordon E. Moore in 1965
-Moore's law illustrates how technological and social change grew, and is still growing, exponentially in the 20th and 21st centuries.
-Moore's second law states that the "capital cost of a semiconductor fab also increases exponentially over time". Therefore computer companies are looking for new materials with which to manufacture computers.
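The exponential growth Moore described, often stated as transistor counts doubling roughly every two years, can be checked with quick arithmetic; the 1971 starting figure below is the Intel 4004's approximate transistor count.

```python
# Transistor count doubling every ~2 years, starting from a rough
# 1971-era figure (Intel 4004, ~2,300 transistors).
start_year, start_count = 1971, 2300
doubling_period = 2  # years

def moore_estimate(year):
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Over 20 years that is 10 doublings: a ~1024x increase.
print(moore_estimate(1991) / start_count)  # 1024.0
```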