
Stream of Goodies

Hacker News: Front Page shared a link post in group #Stream of Goodies

arxiv.org

Direct Language Model Alignment from Online AI Feedback

Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF), that do not require a separate reward model.
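
For context on the DPO method the abstract mentions, a standard formulation of the DPO objective (from the DPO literature, not taken from the linked paper) trains the policy directly on preference pairs against a frozen reference policy, in place of RLHF's separate reward model. Here y_w is the preferred and y_l the dispreferred response, pi_ref is the reference policy, and beta is a temperature-like hyperparameter:

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\left[ \log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
- \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right) \right]
\]

The linked paper's contribution concerns where the preference pairs come from (online AI feedback rather than a fixed offline dataset); the loss above is only the general DAP/DPO template.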
