Title: A mechanism for quality control applied to the EU-India digital platform
1. A mechanism for quality control applied to the EU-India digital platform
- S. R. C. Prasad Challapalli, Paolo Coppola, Stefano Mizzaro, Michele Zennaro
- Dept. of Mathematics and Computer Science, University of Udine
- QCDL06, Udine
2. Outline
- Introduction
- Electronic publishing and peer review
- A new proposal
- Informal description and example
- Experimental evaluation
- Application to the EU-India digital platform
- Discussion, future work, open problems
3. How scientists work
- We all know
- Idea, discovery, hard work, blablabla
- Submission
- Peer review
- If accepted, publication
- Not only scientists → scholars
- It has not always been like that
- This is not the only way
4. Scholarly communication/publishing
- 1665: 1st scholarly journal
- Since then: quality control
- Reviews by the editor (few papers and topics)
- (1930) Peer review by external referees
- 1990s: the Internet
- Electronic scholarly publishing
- Speed-up (JAIR, ...), multimedia, ...
- E-prints, archives: CoRR, ArXiv, ResearchIndex, ...
- Do-it-yourself scholarly publication
5. Electronic scholarly communication: 2 positions
- Supporters
- Scholarly publishing as it is today is dead
- Electronic scholarly publishing allows a cheaper, faster, more effective communication
- Authors, editors, referees work for free; publishers and libraries make money
- Detractors
- Be careful
- Different kinds of electronic journals
- Differences among fields
- Preprint practice
- Peer review takes time
- Standard Model vs. Socio-Technical Network Model
6. Peer review: 2 positions
- Supporters
- Reasonably effective quality control
- Harnad: "The invisible hand of peer review"
- No better solution, at least so far
- Detractors
- Time
- Bias (e.g., medicine)
- Wrong
- Inadequate (HEP experiments, computer simulations, ...)
- Schön affair
- Juan Miguel Campanario
7. Alternatives, complements, supplements to peer review
- Democracy
- Authors publish whatever they want
- Readers read what they judge interesting
- Perhaps with commentary
- Plenty of proposals
- Different publishing models
- Authors pay
- IEEE, Jon Vig: "Good or bad, open access is happening. It's not a matter of if but when"
8. A threat to peer review
- Not abstract
- Do-it-yourself publishing
- Physicists do not use journals (?)
- Repositories
- → without peer review
- Readers of a paper can judge it?
- Referees are experts, readers are not!
- Not only a threat! Opportunity, challenge, ...
9. Our position
- Not supporters, not detractors
- We don't know if peer review is really threatened
- We don't know if the Internet is an opportunity to improve peer review, and if alternatives to peer review are viable
- We don't ...
- Well, we have an idea; we're just curious and want to understand if it is a good idea or not
10. Outline
- Introduction
- Electronic publishing and peer review
- A new proposal
- Informal description and example
- Experimental evaluation
- Application to the EU-India digital platform
- Discussion, future work, open problems
11. A new proposal
- An example of the opportunities
- A new model for the submission-review-publication process
- Similar to democratic approaches
- Improvement (quality of readers)
- 7 years (not full time!) of work
- For the sake of simplicity:
- Electronic journal
- Substitute for peer review (more later)
12. The proposal (1/3)
- Papers, authors, readers, scores, judgments
- A journal has subscribers (authors and readers)
- Each paper is immediately published after its submission
- Each paper has a score, measuring its quality
- The score is initially zero. It changes after readers read and judge the paper
- Paper with high score → good paper
13. The proposal (2/3)
- Each author has a score too
- It changes according to the scores of the papers published by the author
- Publishing good papers leads to a higher author score
14. The proposal (3/3)
- Each reader has a score too
- Judgments by high-scored readers carry more weight
- (Nothing really new so far)
- Reader score changes:
- According to the goodness of the judgments expressed
- Good (right) judgments lead to a higher score
- Bad (wrong) judgments lead to a lower score
15. Good judgment?
- Theoretically,
- equal to the final paper score (the score that the paper will have at time ∞)
- In practice,
- the score at time ∞ is not available
- But we can:
- approximate it (with the current score)
- revise the approximation as time goes on and we get closer to ∞
16. Last ingredient: Steadiness
- Authors/readers/papers have a steadiness value (how stable a score is)
- Papers published long ago (and very much read) have a high steadiness value
- New authors whose papers are not yet much read have a low steadiness
- Readers that expressed many judgments have a more stable reader score
- Steadiness values change (increase)
17. Summary
- Papers, authors, and readers have a score that measures their quality
- Steadiness: how stable the score is
- Virtuous circle (hopefully):
- Authors try to publish good papers
- Readers try to express good judgments
- Score of:
- Papers → which papers to read
- Authors → scientific productivity
- Readers → scientific reputation
18. An example
[Diagram: readers r express judgments j on a paper p]
19. An example (1/2)
[Diagram: a reader r expresses a judgment j on a paper p]
20. An example (2/2)
[Diagram: a second reader r expresses a judgment j on the same paper p]
21. Paper score
- Average of readers' judgments...
- ... higher weights to better readers
- Paper score:
- weighted mean of readers' judgments
- weighted by readers' scores
22. Paper score formula
- Rp(t): set of readers that judged p before t
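The formula image did not survive the transcript; a plausible reconstruction from slide 21's description (weighted mean of readers' judgments, weighted by reader scores), where j_{r,p} denotes reader r's judgment of p, is:

```latex
s_p(t) = \frac{\sum_{r \in R_p(t)} s_r(t)\, j_{r,p}}{\sum_{r \in R_p(t)} s_r(t)}
```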
23. Paper steadiness
- Number of judgments expressed?
- Judgments by good readers are more important!
- Sum of the scores of its readers
24. Author score
- Average of her papers' scores...
- ... higher weight to more stable papers
- Author score: weighted mean (by paper steadiness) of her papers' scores
- Average of the judgments on her papers
- higher weight to judgments by better readers
- Author score: weighted mean (by reader score) of judgments on her papers
25. Author score formula
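The author score formula image is missing; a plausible reconstruction from slide 24 (weighted mean, by paper steadiness, of her papers' scores), with P_a(t) and σ_p assumed notation for the set of a's papers before t and the steadiness of p, is:

```latex
s_a(t) = \frac{\sum_{p \in P_a(t)} \sigma_p(t)\, s_p(t)}{\sum_{p \in P_a(t)} \sigma_p(t)}
```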
26. Reader score (1/2)
- Average of judgment goodness...
- ... higher weights to judgments on more stable papers
- Goodness: distance of the judgment from the current paper score
- Reader score:
- weighted mean of the goodnesses of her judgments
- weighted by the steadiness of the judged papers
27. Reader score (2/2)
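This formula image is also missing; a plausible reconstruction from slide 26, assuming goodness is 1 minus the distance between the judgment and the current paper score, and P_r(t) (assumed notation) is the set of papers judged by r before t:

```latex
g_{r,p}(t) = 1 - \lvert j_{r,p} - s_p(t) \rvert, \qquad
s_r(t) = \frac{\sum_{p \in P_r(t)} \sigma_p(t)\, g_{r,p}(t)}{\sum_{p \in P_r(t)} \sigma_p(t)}
```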
28. Steadiness
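Following slide 23 ("sum of the scores of its readers"), paper steadiness is presumably:

```latex
\sigma_p(t) = \sum_{r \in R_p(t)} s_r(t)
```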
29. Updating formulae
- Long summations → inefficient?
- Definition of updating formulae
- How to compute sx(t+1) by updating sx(t)
- I'll spare you those!
- (see my paper)
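The updating idea can be sketched in a few lines (an assumed illustration, not the paper's exact formulae): a weighted mean is updated in O(1) per judgment by keeping its running numerator and denominator instead of re-summing.

```python
class PaperScore:
    """Paper score as a weighted mean of judgments, updated incrementally."""

    def __init__(self):
        self.num = 0.0   # running sum of s_r * j_{r,p}
        self.den = 0.0   # running sum of s_r

    def add_judgment(self, reader_score, judgment):
        """Compute s_p(t+1) from s_p(t) when one new judgment arrives."""
        self.num += reader_score * judgment
        self.den += reader_score

    @property
    def score(self):
        return self.num / self.den if self.den > 0 else 0.0

p = PaperScore()
p.add_judgment(0.5, 0.9)   # reader with score 0.5 judges the paper 0.9
p.add_judgment(1.0, 0.6)   # a better reader (score 1.0) judges it 0.6
print(round(p.score, 2))   # weighted mean: (0.45 + 0.6) / 1.5 = 0.7
```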
30. Outline
- Introduction
- Electronic publishing and peer review
- A new proposal
- Informal description and example
- Experimental evaluation
- Application to the EU-India digital platform
- Discussion, future work, open problems
31. Evaluation
- Software simulations of the system
- Analysis of typical and critical cases:
- Higher steadiness, slower score changes
- Bad author, good paper
- Bad author, bad paper, late recognized
- Lobbies
- Lazy readers
- More complex simulations
- Twofold aim: understand better and more
32. Higher steadiness, slower score changes
- 3 papers: p1, p2 and p3
- Initial sp = 0.1
- Initial steadiness:
- p1: 1
- p2: 10
- p3: 100
- 100 readers with sr = 0.5 express a 0.9 judgment
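The experiment above can be reproduced with a minimal sketch, under the assumption (an illustration, not necessarily the paper's exact model) that the initial score enters the weighted mean with a weight equal to the paper's initial steadiness:

```python
def final_score(s0, steadiness0, judgments):
    """Weighted mean of the initial score and (reader_score, judgment) pairs."""
    num = steadiness0 * s0   # initial score weighted by initial steadiness
    den = steadiness0
    for reader_score, judgment in judgments:
        num += reader_score * judgment
        den += reader_score
    return num / den

# 100 readers with s_r = 0.5 all express a 0.9 judgment
judgments = [(0.5, 0.9)] * 100
for name, st in [("p1", 1), ("p2", 10), ("p3", 100)]:
    print(name, round(final_score(0.1, st, judgments), 3))
# p1 0.884, p2 0.767, p3 0.367: the higher the initial steadiness,
# the more slowly the score moves from 0.1 toward 0.9
```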
33. Higher steadiness, slower score changes
[Chart: paper score vs. number of readers]
34. Higher steadiness, slower score changes
[Chart omitted in transcript]
35. Bad author, good paper
- Author a with score sa = 0.1
- Publishes a good paper p
- Readers r1, ..., rn (sri = 0.5)
- Judgments jri,p = 0.9
- Steadiness of a: 2, 10, 100
36. Bad author, good paper
[Diagram: readers r1, ..., rn, each with score 0.5, give judgment 0.9 to paper p by author a (score 0.1)]
37. Bad author, good paper
38. Bad author, bad paper, late recognized
- Author a with score sa = 0.1
- a publishes p
- Readers r1, ..., r10: very good paper (jri,p = 0.9)
- Readers r11, ..., r100: very bad paper (jri,p = 0.1)
- Steadiness of a: 2, 10, 100
39. Bad author, bad paper, late recognized
40. Lobbies
- People that mutually give high judgments
- Paper with high judgments → more read
- If they are bad papers → bad judgments
- Counter lobby
- Maybe dangerous: how big does an effective lobby have to be?
- Automatic software spotting (cliques)
- To pay for judgment expression?
41. Lazy readers
- Readers that simply confirm the current sp
- 3 points:
- Is it really effective?
- Improve the mechanism
- Measure reader laziness
42. 1. Is laziness really effective?
- Of course, it depends!
- Again: bad author, bad paper, late recognized
- Author a with score sa = 0.1
- a publishes p
- Readers r1, ..., r10: very good paper (jri,p = 0.9)
- Readers r11, ..., r100: very bad paper (jri,p = 0.1)
- The laziest: r10
- The least lazy: r11
43. The laziest and the least lazy
44. 2. Improve the mechanism
- Give higher scores to quick readers
- Don't show sp for some time after its publication
45. 3. Measure reader laziness
- Mean of goodness values
- Weighted by paper steadiness
46. More complex simulations
- Software agents that simulate readers
- Autonomous agents
- Partly random behavior
- Not easy! (many parameters)
47. Simulation 1: kinds of readers (1/2)
- 5 categories of readers:
- Random
- Constant (0.5)
- Lazy
- Worst
- Lazy-best (good)
- 60 papers, 100 readers
- 3000 judgments
48. Simulation 1: kinds of readers (2/2)
- Results:
- Worst: 0.2
- Good (Lazy-best): 1
- Random: 0.5
- Constant: 0.8 (!)
- Lazy: 0.7
49. Simulation 2: parameters
- 7 continuous parameters of readers:
- Goodness
- Laziness
- Activeness
- Selectiveness
- Randomness
- Quickness
- Constantness
- Uniformly distributed, independent
- sp and sr distributions
- 300 papers, 500 readers, 9500 judgments
50. Simulation 2: results
- Correlation between sr and parameters:
- Goodness: 0.16
- Laziness: 0.20
- Activeness: 0.27
- Randomness: -0.5
- Constantness: 0.27
- Quickness: -0.1
- Selectiveness: 0
51. Does it work?
- Difficult to draw a final conclusion
- Social and biological systems tend to exhibit unexpected behavior
- Rather complex game, rules of the game, ...
- "Forecasting is very difficult, particularly in the future"
52. Outline
- Introduction
- Electronic publishing and peer review
- A new proposal
- Informal description and example
- Experimental evaluation
- Application to the EU-India digital platform
- Discussion, future work, open problems
53. The EU-India digital platform
- E-Dvara
- CMS (XML, XSLT)
- Cultural heritage dissemination
- Ancient Indian Science
- http://archiviazione.infofactory.it/india
- A first implementation of the mechanism
54. Software Architecture
55. User interaction: reading
56. User interaction: judging
57-62. (No transcript)
63. Outline
- Introduction
- Electronic publishing and peer review
- A new proposal
- Informal description and example
- Experimental evaluation
- Application to the EU-India digital platform
- Discussion, future work, open problems
64. Summary
- Scores of papers and authors
- Readers acting as referees
- Scores of readers
- Feedback on the readers for achieving good quality judgments
- Good reader → good reputation
- EU-India
65. Discussion
- Historical trend: increase in referees
- (interdisciplinary fields)
- (it's peer review!?)
- Improvement of:
- Pure democratic journal
- Impact factor, citation counts (< paper steadiness!)
- Collaborative/social filtering (complementary)
- Replace/complement peer review
- Referees do not work for free!
66. Open questions and problems
- Malicious strategies:
- Lobbies?
- Lazy readers?
- Others
- Human supervisor (→ referee!)
- Simple variants: to pay for each judgment expression
- Technical problems (security, efficiency, ...)
- Socially accepted?
67. Future work: improvements (1/2)
- (in random order)
- More scores:
- technical soundness, comprehensibility, originality, ...
- Different formulae
- More journals, different acceptance thresholds
- Better approximation of the final sp (trend?)
- Theoretical analysis (game theory?)
68. Future work: improvements (2/2)
- More simulations
- Real life experiments
- Final implementation
- Use referee data from journals, etc.
- Find a name
- Fake authors/papers/readers?
- Generalization, application to reputation systems (e.g., in e-commerce, e-learning)
69. Reference
- S. Mizzaro. Quality Control in Scholarly Publishing: A New Proposal. Journal of the American Society for Information Science and Technology, 54(11):989-1005, 2003.
- (just ask for a copy)
70. Thanks to
- Vincenzo Della Mea
- Massimo Di Fant
- Luca Di Gaspero
- Marco Fabbrichesi
- Andrea Fusiello
- Stefania Gentili
- Stevan Harnad
- Paolo Massa
- Marco Mizzaro
- Carla Piazza
- Ivan Scagnetto
- Walter Vanzella
- Paolo Zandegiacomo Riziò
- Referees