
Journal Metrics Values

The three different impact metrics

Welcome to Journal Metrics from Elsevier

The academic community has long demanded more transparency, choice and accuracy in journal assessment. Elsevier now provides three alternative, transparent and accurate views of the true citation impact a journal makes: Source Normalized Impact per Paper (SNIP), Impact per Publication (IPP) and SCImago Journal Rank (SJR).

All three impact metrics are based on methodologies developed by external bibliometricians and use Scopus as the data source. Scopus is the largest citation database of peer-reviewed literature and features tools to track, analyze and visualize research output. Through this website, the three journal metrics are provided free of charge.

 

Journal Metric Values

In this section you can find the entire collection of journals covered by Scopus (currently the largest database of academic literature with 21,900 journals from 5,000 publishers) along with their SNIP, IPP and SJR metrics going back to 1999.

Download the entire dataset 1999-2013 (November 2014)

Journal Metrics archive

Because the journal metrics are calculated from Scopus, their values cannot be fixed in time. Scopus is dynamic: it shows citations per document in real time and continuously updates historical content in addition to adding new content as it appears. As a consequence, published values take all historical updates into account as well. This means that with each data refresh, all values (current and previous years) are recalculated and refreshed. This is good if you want up-to-the-minute values, but it can make it difficult to validate quoted values.

While we believe that reporting the current state of the database is more transparent, we also recognize that reports have been created based on previous metric sets. For this reason, historical datasets are maintained in this archive, allowing past values to be validated.

Also, since 2012, SNIP and SJR values have been calculated using a modified algorithm. Read more about these changes on the About Journal Metrics pages.

Download the archive datasets:

Download the entire dataset 1999-2013 (October 2014)

Download the entire dataset 1999-2013 (July 2014)

Download the entire dataset 1999-2012 (September 2013)

Download the entire dataset 1999-2011 (October 2012)

Download the entire dataset 1999-June 2011 (November 2011)

Download the entire dataset 1999-2010 (July 2011)

Download the dataset (September 2010)

Download the first dataset (January 2010)

One of the best ways to learn how these metrics work is by experimenting with the data. If you still have questions, you can visit our Resource Library and/or consult our FAQs.
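As an illustration, here is a minimal sketch of such an exploration in Python with pandas, assuming the downloaded archive has been exported to a CSV file with one row per journal and year. The file name and column headers below are hypothetical and may differ from the actual download:

```python
# Hedged sketch: exploring a downloaded journal-metrics dataset with pandas.
# The file name and column names are assumptions for illustration only.
import pandas as pd

# Assume the archive was saved as a CSV with one row per journal per year.
df = pd.read_csv("journal_metrics_1999_2013.csv")  # hypothetical file name

# Inspect the metrics for a single journal title (column names assumed).
journal = df[df["Source Title"] == "Journal of Informetrics"]
print(journal[["Year", "SNIP", "IPP", "SJR"]].sort_values("Year"))

# Quick field-level comparison: median SNIP per subject area (column name assumed).
print(df.groupby("Subject Area")["SNIP"].median()
        .sort_values(ascending=False).head())
```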

 

About Impact per Publication (IPP)

The IPP is the number of citations received in a given year (Y) by scholarly papers published in the three previous years (Y-1, Y-2, Y-3), divided by the number of scholarly papers published in those same years. The IPP uses a citation window of three years, which is considered the optimal period for accurately measuring citations in most subject fields. Counting only the same peer-reviewed scholarly papers in both the numerator and denominator provides a fair impact measurement of the journal and reduces the chance of manipulation.
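As a rough sketch of the calculation described above (the function and the example figures are illustrative only, not the official implementation or Scopus data):

```python
# Hedged sketch of the IPP calculation; variable names and counts are illustrative.

def ipp(citations_in_year_y, papers_in_previous_three_years):
    """Impact per Publication for year Y.

    citations_in_year_y: citations received in year Y by scholarly papers
        published in years Y-1, Y-2 and Y-3.
    papers_in_previous_three_years: number of scholarly papers the journal
        published in years Y-1, Y-2 and Y-3.
    """
    return citations_in_year_y / papers_in_previous_three_years

# Example: 300 citations in 2014 to the 150 papers published in 2011-2013.
print(ipp(300, 150))  # -> 2.0
```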

The IPP is not normalized for the subject field and therefore gives a raw indication of the average number of citations a paper published in the journal is likely to receive. When normalized for the citations in the subject field, the raw Impact per Publication becomes the Source Normalized Impact per Paper (SNIP). Note that in the context of the SNIP calculation, the raw Impact per Publication is usually referred to as RIP. Like SNIP, the raw Impact per Publication metric was developed by Leiden University's Centre for Science and Technology Studies (CWTS).


About Source Normalized Impact per Paper (SNIP)

Created by Professor Henk Moed at CWTS, Leiden University, Source Normalized Impact per Paper (SNIP) measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is given higher value in subject areas where citations are less likely, and vice versa.

As explained by Moed in Measuring contextual citation impact of scientific journals, Journal of Informetrics, 4 (2010), pp 256-277:

"It further develops Eugene Garfield's notions of a field's 'citation potential' defined as the average length of references lists in a field and determining the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields."

It is defined as the ratio of a journal's citation count per paper and the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in Mathematics, Engineering and Social Sciences tend to have lower values than titles in Life Sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals.

SNIP corrects for such differences. Its strengths and limitations are open to critical debate. All empirical results are derived from the Scopus abstract and indexing database. SNIP values are updated once a year, providing an up-to-date view of the research landscape.
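A minimal sketch of that ratio, assuming a pre-computed citation potential for the journal's field, is shown below; the full CWTS method also normalizes citation potential against the database as a whole, so this is only an illustration:

```python
# Hedged sketch: SNIP as the journal's raw impact per paper (RIP) divided by
# the citation potential of its subject field, per the definition above.
# How citation potential is estimated is simplified away here.

def snip(raw_impact_per_paper, field_citation_potential):
    """SNIP = RIP / citation potential of the journal's subject field."""
    return raw_impact_per_paper / field_citation_potential

# Example: the same raw impact of 2.0 counts for more in a field where
# citations are scarce than in one where citations are plentiful.
print(snip(2.0, 0.8))  # low citation potential  -> 2.5
print(snip(2.0, 2.5))  # high citation potential -> 0.8
```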

SNIP provides alternative values that bibliometricians can use to create more refined and objective analyses.

It helps editors evaluate their journal and understand how it is performing compared to its competition. SNIP provides more contextual information, and can give a better picture of specific fields, such as Engineering, Computer Science, and/or Social Sciences. It can also help all academics identify which journals are performing best within their subject field so they know where to publish.

For more information, please see our FAQs.

